A question-answering app that takes any website URL as input, together with a user query, and returns a suitable human-readable response. The program uses a locally hosted llama3 model (served via Ollama) to generate readable, understandable results.
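At a high level the app fetches the page, extracts its text, splits it into chunks, retrieves the chunks most relevant to the query, and feeds them to the model as context. The stdlib-only sketch below illustrates that retrieve-then-prompt flow; it is an assumption-laden stand-in, not the project's code — the real app uses beautifulsoup4 for parsing and FAISS embeddings for retrieval, whereas this sketch uses `html.parser` and a simple keyword-overlap scorer for illustration.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page (stand-in for beautifulsoup4)."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        if data.strip():
            self.parts.append(data.strip())


def chunk(text, size=200):
    """Split extracted text into fixed-size chunks, like a text splitter."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def retrieve(chunks, query, k=2):
    """Rank chunks by word overlap with the query (stand-in for FAISS search)."""
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))[:k]


def build_prompt(html, query):
    """Build the context-grounded prompt that would be sent to llama3."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts)
    context = "\n".join(retrieve(chunk(text), query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In the actual app the resulting prompt would go to the locally served llama3 model through langchain-ollama rather than being printed.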
- python = "^3.10"
- streamlit = "^1.36.0"
- langchain = "^0.2.6"
- llama3 (served locally via Ollama, not installed through Poetry)
- huggingface-hub = "^0.23.4"
- python-dotenv = "^1.0.1"
- langchain-community = "^0.2.6"
- langchain-ollama = "^0.1.0"
- beautifulsoup4 = "^4.12.3"
- faiss-cpu = "^1.8.0.post1"
- sphinx = "^8.0.2"
- loguru = "^0.7.2"
- fire = "^0.6.0"
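Since the instructions below use `poetry install`, the pinned dependencies above would correspond to a `pyproject.toml` fragment along these lines (a sketch — exact pins may differ from the actual project file):

```toml
[tool.poetry.dependencies]
python = "^3.10"
streamlit = "^1.36.0"
langchain = "^0.2.6"
huggingface-hub = "^0.23.4"
python-dotenv = "^1.0.1"
langchain-community = "^0.2.6"
langchain-ollama = "^0.1.0"
beautifulsoup4 = "^4.12.3"
faiss-cpu = "^1.8.0.post1"
sphinx = "^8.0.2"
loguru = "^0.7.2"
fire = "^0.6.0"
```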
- Install llama3 and start the Ollama server locally with the command `ollama run llama3`.
- Install the package dependencies with `poetry install`. Ensure you do this inside a virtual environment.
- Navigate to the `q_and_a` sub-folder and run the command `streamlit run main.py`.
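Put together, the steps above amount to a shell session like the following (assuming Ollama and Poetry are already installed; the `q_and_a` folder name is taken from the instructions above):

```shell
# Terminal 1: pull and serve the llama3 model locally (keeps running)
ollama run llama3

# Terminal 2: install dependencies inside a virtual environment
poetry install

# Launch the Streamlit app from the q_and_a sub-folder
cd q_and_a
streamlit run main.py
```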
- Good luck!