Hi, fantastic work with NATTEN. Is support for XLA on your roadmap? It would be great to enable neighborhood attention on TPUs and other non-Nvidia GPUs.
We definitely foresee expanding NATTEN to more backends and frameworks, and recent changes to the API reduce our dependency on torch, which should ease that process.
While our top priority at the moment is feature completeness for the CUDA backend, we would likely target other backends directly first, porting or rewriting our kernels for accelerators such as Apple Silicon and AMD GPUs. Depending on demand, we may also consider higher-level platforms such as XLA and Triton.
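For reference, neighborhood attention can already be expressed in pure JAX so that XLA compiles it for TPUs, at the cost of materializing a full attention matrix rather than using NATTEN's fused kernels. The sketch below is not NATTEN's API; `neighborhood_attention_1d` and its signature are hypothetical, and it assumes sequences at least as long as the kernel size.

```python
import jax
import jax.numpy as jnp


def neighborhood_attention_1d(q, k, v, kernel_size=7):
    """1D neighborhood attention via a masked dense attention matrix.

    q, k, v: [batch, heads, seq_len, head_dim]; kernel_size must be odd
    and seq_len >= kernel_size.
    """
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    radius = kernel_size // 2

    # Each query attends to a window of exactly kernel_size keys; the window
    # is clamped at the sequence boundaries, as in neighborhood attention.
    idx = jnp.arange(seq_len)
    starts = jnp.clip(idx - radius, 0, seq_len - kernel_size)      # [seq_len]
    mask = (idx[None, :] >= starts[:, None]) & (
        idx[None, :] < starts[:, None] + kernel_size
    )                                                               # [seq_len, seq_len]

    scores = jnp.einsum("bhqd,bhkd->bhqk", q, k) / jnp.sqrt(head_dim)
    scores = jnp.where(mask, scores, -jnp.inf)
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bhqk,bhkd->bhqd", weights, v)


# jit-compile so XLA lowers the whole op for the active backend (CPU/GPU/TPU).
fn = jax.jit(neighborhood_attention_1d)
q = k = v = jnp.ones((1, 4, 128, 64))
out = fn(q, k, v)  # [1, 4, 128, 64]
```

This mask-based formulation is O(n^2) in memory, which is exactly what NATTEN's dedicated kernels avoid, but it illustrates the kind of higher-level XLA path a TPU backend could start from.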