Ollama configuration issue #286
Comments
Change the URL, e.g. http://host.docker.internal:11434
I used http://host.docker.internal:11434/ when configuring it on the website, and got the same error.
The base_url should add a prefix of
vectorize_model: &vectorize_model — with the above configuration, the knowledge base can be created in the developer-mode configuration.
Hello, I added the prefix as you suggested
https://openspg.yuque.com/ndx6g9/docs/iu5cok24efl1z2nc — I entered the container with docker exec -it ... and found the gateway address to be 172.20.0.1; curl 172.20.0.1:11434/v1 replies "Ollama is running".
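Spelling that diagnostic out as commands (the container name is a placeholder; the gateway address 172.20.0.1 is taken from the comment above and may differ per setup):

# Open a shell inside the OpenSPG container (replace <container> with the real name).
docker exec -it <container> /bin/bash

# Inside the container, localhost refers to the container itself, so this
# fails when Ollama listens on the host:
curl http://localhost:11434/v1

# The Docker bridge gateway (172.20.0.1 here) or host.docker.internal
# reaches the host instead; per the comment above, it replies "Ollama is running":
curl http://172.20.0.1:11434/v1
curl http://host.docker.internal:11434/v1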
Search before asking
Operating system information
Windows
What happened
Problem: after setting Ollama as the generation model, chat fails with the following error:
执行失败pemja.core.PythonException: <class 'tenacity.RetryError'>: <Future at 0x7f8d69e3fc70 state=finished raised RuntimeError>
at /openspg_venv/lib/python3.8/site-packages/kag/solver/main_solver.invoke(main_solver.py:94)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/logic/solver_pipeline.run(solver_pipeline.py:67)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/implementation/default_reasoner.reason(default_reasoner.py:64)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor.execute(default_lf_executor.py:239)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_lf(default_lf_executor.py:204)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_chunk_answer(default_lf_executor.py:154)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/retriever/impl/default_chunk_retrieval.recall_docs(default_chunk_retrieval.py:425)
How to reproduce
My YAML configuration:
openie_llm: &openie_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

chat_llm: &chat_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://localhost:11434/v1/
  model: bge-m3:latest  # qwen2_0.5b_instruct:latest
  type: openai
  vector_dimensions: 1024

vectorizer: *vectorize_model
Ollama version: 0.5.6
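Putting the comments above together, the likely cause is that localhost inside the OpenSPG container points at the container itself, not at the host where Ollama listens. A sketch of the adjusted configuration, assuming Docker Desktop where host.docker.internal resolves to the host (on plain Linux Docker, the bridge gateway address from the comment above, e.g. 172.20.0.1, plays the same role):

openie_llm: &openie_llm
  base_url: http://host.docker.internal:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

chat_llm: &chat_llm
  base_url: http://host.docker.internal:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

vectorize_model: &vectorize_model
  api_key: empty
  # the openai-type client talks to Ollama's OpenAI-compatible endpoint, hence /v1/
  base_url: http://host.docker.internal:11434/v1/
  model: bge-m3:latest
  type: openai
  vector_dimensions: 1024

vectorizer: *vectorize_model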
Are you willing to submit PR?