# LM Studio

[LM Studio](https://lmstudio.ai/) is a desktop application that allows you to discover, download, and run local LLMs in various model formats (GGUF, GGML, SafeTensors). It provides an OpenAI-compatible API server for running these models locally.

## Chat model

LM Studio provides an OpenAI-compatible chat API that can be used with Tabby.

```toml title="~/.tabby/config.toml"
[model.chat.http]
kind = "openai/chat"
model_name = "deepseek-r1-distill-qwen-7b" # Example model
api_endpoint = "http://localhost:1234/v1"  # LM Studio server endpoint with /v1 path
api_key = ""                               # No API key required for local deployment
```
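
Before pointing Tabby at the server, it can help to confirm the chat endpoint responds. Below is a minimal sketch using Python's `requests` library; it assumes the LM Studio server is running on the default port 1234 and that the model name matches one you have loaded.

```python
# Minimal check of LM Studio's OpenAI-compatible chat endpoint (assumes the
# server is running on localhost:1234 and the model below is loaded).
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "deepseek-r1-distill-qwen-7b",  # replace with your loaded model's identifier
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
        "max_tokens": 32,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```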

## Completion model

LM Studio can be used for code completion through its OpenAI-compatible completion API.

```toml title="~/.tabby/config.toml"
[model.completion.http]
kind = "openai/completion"
model_name = "starcoder2-7b" # Example code completion model
api_endpoint = "http://localhost:1234/v1"
api_key = ""
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>" # Example FIM template for CodeLlama-style models; use the template that matches your model
```
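
To illustrate how the `prompt_template` is used, the sketch below shows roughly what a fill-in-the-middle request looks like once the `{prefix}` and `{suffix}` placeholders have been filled with the code around the cursor. The code snippet, model name, and sampling parameters are illustrative assumptions.

```python
# Illustrative fill-in-the-middle request against LM Studio's OpenAI-compatible
# /v1/completions endpoint; the prompt mirrors the prompt_template configured above.
import requests

prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(1, 2))\n"
prompt = "<PRE> " + prefix + " <SUF>" + suffix + " <MID>"  # same shape as the template above

resp = requests.post(
    "http://localhost:1234/v1/completions",
    json={"model": "starcoder2-7b", "prompt": prompt, "max_tokens": 32, "temperature": 0.1},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```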

## Embeddings model

LM Studio supports embedding functionality through its OpenAI-compatible API.

```toml title="~/.tabby/config.toml"
[model.embedding.http]
kind = "openai/embedding"
model_name = "text-embedding-nomic-embed-text-v1.5"
api_endpoint = "http://localhost:1234/v1"
api_key = ""
```
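
As with the other endpoints, you can sanity-check the embeddings endpoint directly. A minimal sketch with Python's `requests`, assuming the default port and that the embedding model above is loaded:

```python
# Minimal check of LM Studio's OpenAI-compatible embeddings endpoint.
import requests

resp = requests.post(
    "http://localhost:1234/v1/embeddings",
    json={
        "model": "text-embedding-nomic-embed-text-v1.5",
        "input": "def add(a, b): return a + b",
    },
    timeout=30,
)
resp.raise_for_status()
print(len(resp.json()["data"][0]["embedding"]))  # dimensionality of the returned vector
```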

## Usage notes

1. Download and install LM Studio from the [official website](https://lmstudio.ai/).
2. Download your preferred model through LM Studio's model discovery interface.
3. Start the local server by clicking the "Start Server" button in LM Studio.
4. Configure Tabby to use LM Studio's API endpoint as shown in the examples above.
5. The default server port is 1234; you can change it in LM Studio's settings if needed.
6. Make sure to append `/v1` to the API endpoint, since LM Studio follows OpenAI's API structure. You can verify the endpoint with a quick request, as shown in the sketch below.
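
For example, a quick way to confirm the server is reachable and the `/v1` path is correct is to list the available models. This sketch assumes the default port 1234:

```python
# List the models LM Studio is serving; a successful response confirms the
# /v1 endpoint that Tabby will use is reachable.
import requests

resp = requests.get("http://localhost:1234/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```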

LM Studio is particularly useful for running models locally without requiring complex setup or command-line knowledge. It supports a wide range of models and provides a user-friendly interface for model management and server operations.