diff --git a/website/blog/2024-01-24-running-tabby-locally-with-rocm/index.md b/website/blog/2024-01-24-running-tabby-locally-with-rocm/index.md
index 8e1585cc64bc..4b07b8572ddb 100644
--- a/website/blog/2024-01-24-running-tabby-locally-with-rocm/index.md
+++ b/website/blog/2024-01-24-running-tabby-locally-with-rocm/index.md
@@ -1,10 +1,15 @@
 ---
-slug: running-tabby-locally-with-rocm.md
 title: Running Tabby Locally with AMD ROCm
 authors: [boxbeam]
 tags: [deployment]
 ---
 
+:::warning
+
+Tabby's ROCm support is currently only in our [nightly builds](https://github.com/TabbyML/tabby/releases/tag/nightly). It will become stable in version 0.8.
+
+:::
+
 For those using (compatible) **AMD** graphics cards, you can now run Tabby locally with GPU acceleration using AMD's ROCm toolkit! 🎉
 
 ROCm is AMD's equivalent of NVidia's CUDA library, making it possible to run highly parallelized computations on the GPU. Unlike CUDA, ROCm is open source, and it supports using multiple GPUs at the same time to perform the same computation.
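
As context for reviewing this diff: the nightly-build warning it adds refers to launching the Tabby server against a ROCm device. A minimal sketch of such an invocation is below; the model name and the `--device` flag value are assumptions for illustration and are not part of this diff:

```shell
# Hypothetical sketch: serve a model on an AMD GPU via ROCm
# using a nightly Tabby build. Model choice is illustrative.
./tabby serve --model TabbyML/StarCoder-1B --device rocm
```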