[Update] Deploy an AI Chatbot and RAG Pipeline for Inferencing on LKE (#7195)

* Fixed heading issue on ai chatbot guide

* Update model used to generate embeddings

* Updated image, llama3 model, command to check gpu pods, and CrashLoopBackOff error

* Updated modified date

* Additional updates
wildmanonline authored Feb 13, 2025
1 parent 6d01806 commit e6bfa65
Showing 5 changed files with 27 additions and 18 deletions.
@@ -5,6 +5,7 @@ description: "Utilize the Retrieval-Augmented Generation technique to supplement
authors: ["Linode"]
contributors: ["Linode"]
published: 2025-02-11
modified: 2025-02-13
keywords: ['kubernetes','lke','ai','inferencing','rag','chatbot','architecture']
tags: ["kubernetes","lke"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
@@ -14,7 +15,9 @@ license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'

LLMs (Large Language Models) are increasingly used to power chatbots and other knowledge assistants. While these models are pre-trained on vast swaths of information, they are not trained on your own private data or knowledge base. To overcome this, you need to provide this data to the LLM (a process called context augmentation). This tutorial showcases a particular method of context augmentation called Retrieval-Augmented Generation (RAG), which indexes your data and attaches relevant data as context when users send queries to the LLM.

Follow this tutorial to deploy a RAG pipeline on Akamai’s LKE service using our latest GPU instances. Once deployed, you will have a web chatbot that can respond to queries using data from your own custom data source.
Follow this tutorial to deploy a RAG pipeline on Akamai’s LKE service using our latest GPU instances. Once deployed, you will have a web chatbot that can respond to queries using data from your own custom data source, as shown in the screenshot below.

![Screenshot of the Open WebUI query interface](open-webui-interface.jpg)
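As a purely illustrative aside, the retrieval step at the heart of RAG can be sketched in a few lines of standalone Python. This is a toy, not the tutorial's actual stack (which uses LlamaIndex, Milvus, and Llama 3): documents are embedded and indexed ahead of time, and at query time the closest matches are attached to the prompt before it reaches the LLM.

```python
from collections import Counter
import math

# Toy corpus standing in for your private knowledge base.
docs = [
    "LKE clusters provide the Kubernetes infrastructure for this tutorial.",
    "Milvus stores vector embeddings and returns the closest matches to a query.",
    "Open WebUI provides the chat interface that end users interact with.",
]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Indexing" step, done once ahead of time.
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return [doc for doc, vec in sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)[:k]]

def build_prompt(query: str) -> str:
    # A real deployment sends this augmented prompt to the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Where are the embeddings stored?"))
```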

## Diagram

@@ -31,8 +34,8 @@

- **Kubeflow:** This open-source software platform includes a suite of applications that are used for machine learning tasks. It is designed to be run on Kubernetes. While each application can be installed individually, this tutorial installs all default applications and makes specific use of the following:
- **KServe:** Serves machine learning models. This tutorial installs the Llama 3 LLM to KServe, which then serves it to other applications, such as the chatbot UI.
- **Kubeflow Pipeline:** Used to deploy pipelines, reusable machine learning workflows built using the Kubeflow Pipelines SDK. In this tutorial, a pipeline is used to run LlamaIndex to train the LLM with additional data.
- **Meta’s Llama 3 LLM:** We use Llama 3 as the LLM, along with the LlamaIndex tool to capture data from an external source and send embeddings to the Milvus database.
- **Kubeflow Pipeline:** Used to deploy pipelines, reusable machine learning workflows built using the Kubeflow Pipelines SDK. In this tutorial, a pipeline is used to run LlamaIndex to process the dataset and store embeddings.
- **Meta’s Llama 3 LLM:** The [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model is used as the LLM. You should review and agree to the licensing agreement before deploying.
- **Milvus:** Milvus is an open-source vector database and is used for generative AI workloads. This tutorial uses Milvus to store embeddings generated by LlamaIndex and make them available to queries sent to the Llama 3 LLM.
- **Open WebUI:** This is a self-hosted AI chatbot application that’s compatible with LLMs like Llama 3 and includes a built-in inference engine for RAG solutions. Users interact with this interface to query the LLM. It can be configured to send queries straight to Llama 3 or to first load data from Milvus and send that context along with the query.

@@ -54,11 +57,11 @@ The configuration instructions in this document are expected to not expose any s
Securing this configuration for a production deployment is outside the scope of this document.
{{< /note >}}

# Set up infrastructure
## Set up infrastructure

The first step is to provision the infrastructure needed for this tutorial and configure it with kubectl, so that you can manage it locally and install software through helm. As part of this process, we also install the NVIDIA GPU operator so that the NVIDIA cards within the GPU worker nodes can be used on Kubernetes.

1. **Provision an LKE cluster.** We recommend using at least two **RTX4000 Ada x2 Medium** GPU plans (plan ID: `g2-gpu-rtx4000a2-m`), though you can adjust this as needed. For reference, Kubeflow recommends 32 GB of RAM and 16 CPU cores. This tutorial has been tested using Kubernetes v1.31, though other versions should also work. To learn more about provisioning a cluster, see the [Create a cluster](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) guide.
1. **Provision an LKE cluster.** We recommend using at least three **RTX4000 Ada x1 Medium** GPU plans (plan ID: `g2-gpu-rtx4000a1-m`), though you can adjust this as needed. For reference, Kubeflow recommends 32 GB of RAM and 16 CPU cores for its own applications alone. This tutorial has been tested using Kubernetes v1.31, though other versions should also work. To learn more about provisioning a cluster, see the [Create a cluster](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) guide.

{{< note noTitle=true >}}
GPU plans are available in a limited number of data centers. Review the [GPU product documentation](https://techdocs.akamai.com/cloud-computing/docs/gpu-compute-instances#availability) to learn more about availability.
@@ -77,7 +80,7 @@ The first step is to provision the infrastructure needed for this tutorial and c
You can confirm that the operator has been installed on your cluster by reviewing your pods. You should see a number of pods in the `gpu-operator` namespace.

```command
kubectl get pods -A
kubectl get pods -n gpu-operator
```

### Deploy Kubeflow
@@ -114,6 +117,8 @@ Next, let’s deploy Kubeflow on the LKE cluster. These instructions deploy all
kubectl get pods -A
```

You may notice a status of `CrashLoopBackOff` on one or more pods. This can be caused by a temporary issue with a persistent volume attaching to a worker node and should resolve within a minute or so.
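If you prefer to check pod health programmatically, the following is a hedged sketch using the official `kubernetes` Python client rather than kubectl. It assumes your kubeconfig already points at the LKE cluster and simply lists containers stuck in a waiting state (such as `CrashLoopBackOff`).

```python
# Hedged convenience sketch: list containers that are in a waiting state,
# e.g. CrashLoopBackOff or ContainerCreating, across all namespaces.
# Assumes `pip install kubernetes` and a kubeconfig pointing at the LKE cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for status in (pod.status.container_statuses or []):
        waiting = status.state.waiting
        if waiting and waiting.reason:
            print(pod.metadata.namespace, pod.metadata.name, waiting.reason)
```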

### Install Llama3 LLM on KServe

After Kubeflow has been installed, we can now deploy the Llama 3 LLM to KServe. This tutorial uses HuggingFace (a platform that provides pre-trained AI models) to deploy Llama 3 to the LKE cluster. Specifically, these instructions use the [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model.
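Once the InferenceService is running, it can be queried over an OpenAI-compatible API (the RAG pipeline later in this tutorial does the same thing through `OpenAILike`). The following is a hedged sketch using the `openai` Python SDK; the base URL is a placeholder and depends on how the llama3 service is exposed in your cluster.

```python
# Hedged sketch of a direct query to the served model. The base_url below is a
# placeholder, not a real hostname; substitute the address at which your cluster
# exposes the llama3 InferenceService.
from openai import OpenAI

client = OpenAI(
    base_url="http://<llama3-inference-host>/openai/v1",  # placeholder
    api_key="unused",  # this setup does not require an API key
)

response = client.chat.completions.create(
    model="llama3",  # matches --model_name=llama3 in the InferenceService spec
    messages=[{"role": "user", "content": "Briefly describe Retrieval-Augmented Generation."}],
)
print(response.choices[0].message.content)
```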
@@ -152,7 +157,7 @@ After Kubeflow has been installed.
name: huggingface
args:
- --model_name=llama3
- --model_id=NousResearch/Meta-Llama-3-8B-Instruct
- --model_id=meta-llama/meta-llama-3-8b-instruct
- --max-model-len=4096
env:
- name: HF_TOKEN
@@ -218,7 +223,7 @@ Kubeflow Pipeline pulls together the entire workflow for ingesting data from our

1. Downloads a zip archive from the specified URL.
1. Uses LlamaIndex to read the Markdown files within the archive.
1. Generates embeddings from the content of those files using the [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) model.
1. Generates embeddings from the content of those files.
1. Stores the embeddings within the Milvus database collection.

Keep this workflow in mind when going through the Kubeflow Pipeline setup steps in this section. If you require a different pipeline workflow, you will need to adjust the Python file and Kubeflow Pipeline configuration discussed below.
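For orientation before working through the script in the next section, here is a minimal Kubeflow Pipelines v2 skeleton showing the general shape of such a file: define a component, wrap it in a pipeline, and compile it to YAML. The component body and names below are illustrative placeholders, not the tutorial's actual ingestion logic.

```python
# Minimal KFP v2 skeleton (illustrative only). The real component downloads the
# archive, reads the Markdown files with LlamaIndex, generates embeddings, and
# writes them to the Milvus collection.
from kfp import dsl, compiler

@dsl.component(base_image="python:3.12")
def ingest_docs(archive_url: str, collection: str):
    print(f"would ingest {archive_url} into {collection}")

@dsl.pipeline(name="doc-ingest-pipeline")
def doc_ingest_pipeline(archive_url: str, collection: str = "linode_docs"):
    ingest_docs(archive_url=archive_url, collection=collection)

if __name__ == "__main__":
    compiler.Compiler().compile(doc_ingest_pipeline, "doc-ingest-pipeline.yaml")
```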
@@ -240,7 +245,7 @@ This tutorial employs a Python script to create the YAML file used within Kubefl
pip install kfp
```

1. Use the following python script to generate a YAML file to use for the Kubeflow Pipeline. This script configures the pipeline to download the Markdown data you wish to ingest, read the content using LlamaIndex, generate embeddings of the content using BAAI general embedding model, and store the embeddings in the Milvus database. Replace values as needed before proceeding.
1. Use the following Python script to generate a YAML file to use for the Kubeflow Pipeline. This script configures the pipeline to download the Markdown data you wish to ingest, read the content using LlamaIndex, generate embeddings of the content, and store the embeddings in the Milvus database. Replace values as needed before proceeding.

```file {title="doc-ingest-pipeline.py" lang="python"}
from kfp import dsl
@@ -269,7 +274,7 @@ This tutorial employs a Python script to create the YAML file used within Kubefl
from llama_index.core import Settings
Settings.embed_model = HuggingFaceEmbedding(
model_name="BAAI/bge-large-en-v1.5"
model_name="sentence-transformers/all-MiniLM-L6-v2"
)
from llama_index.llms.openai_like import OpenAILike
@@ -285,7 +290,7 @@ This tutorial employs a Python script to create the YAML file used within Kubefl
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.milvus import MilvusVectorStore
vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection=collection, dim=1024, overwrite=True)
vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection=collection, dim=384, overwrite=True)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
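# Hedged aside (not part of the original script): the MilvusVectorStore dim value
# must match the embedding model's output size. sentence-transformers/all-MiniLM-L6-v2
# produces 384-dimensional vectors (hence dim=384 above), while the previously used
# BAAI/bge-large-en-v1.5 model produces 1024-dimensional vectors. A quick check:
#
#   from llama_index.embeddings.huggingface import HuggingFaceEmbedding
#   probe = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
#   print(len(probe.get_text_embedding("dimension check")))  # prints 384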
@@ -350,7 +355,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
1. Create a new directory on your local machine and navigate to that directory.
1. Create a pipeline-requirements.txt file with the following contents:
1. Create a `pipeline-requirements.txt` file with the following contents:
```file {title="pipeline-requirements.txt"}
requests
@@ -361,7 +366,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
llama-index-llms-openai-like
```
1. Create a rag-pipeline.py file with the following contents:
1. Create a file called `rag_pipeline.py` with the following contents. Do not change the filenames of `pipeline-requirements.txt` or `rag_pipeline.py`, as they are referenced within the Open WebUI Pipeline configuration file.
```file {title="rag-pipeline.py"}
"""
@@ -388,7 +393,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
print(f"on_startup:{__name__}")
Settings.embed_model = HuggingFaceEmbedding(
model_name="BAAI/bge-large-en-v1.5"
model_name="sentence-transformers/all-MiniLM-L6-v2"
)
llm = OpenAILike(
@@ -399,7 +404,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
Settings.llm = llm
vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection="linode_docs", dim=1024, overwrite=False)
vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection="linode_docs", dim=384, overwrite=False)
self.index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
async def on_shutdown(self):
@@ -581,8 +586,12 @@ Now that the chatbot has been configured, the final step is to access the chatbo
1. The first time you access this interface you are prompted to create an admin account. Do this now and then continue once you are successfully logged in using that account.
1. You are now presented with the chatbot interface. Within the dropdown menu, you should be able to select from several models. Select one and ask it a question.
1. You should now be presented with the chatbot interface. Within the dropdown menu, you should be able to select from several models. Select one and ask it a question.
- The **llama3** model uses information learned from other data sources (not your own custom data). If you ask this model a question, the data from your own dataset is not used.
![Screenshot of a Llama 3 query in Open WebUI](open-webui-llama3.jpg)
- The **llama3** model just uses information learned from other data sources (not your own custom data). If you ask this model a question, the data from your own dataset will not be used.
- The **RAG Pipeline** model that you defined in a previous section does use data from your custom dataset. Ask it a question relevant to your data and the chatbot should respond with an answer that is informed by the custom dataset you configured.
- The **RAG Pipeline** model that you defined in a previous section does indeed use data from your custom dataset. Ask it a question relevant to your data and the chatbot should respond with an answer that is informed by the custom dataset you configured.
![Screenshot of a RAG Pipeline query in Open WebUI](open-webui-rag-pipeline.jpg)
