NATTEN kernels are almost entirely independent of PyTorch, and the backend API is specifically designed to avoid torch's tensor API, so as to make binding to other frameworks/engines easier.
However, my understanding is that TensorRT's support for binding custom ops is only partial and limited to inference, so as long as a model that depends on NATTEN only needs to be served with TRT for inference, I think it should be possible.