
Ollama configuration issue #286

Open · 1 of 2 tasks
2314254971 opened this issue Jan 16, 2025 · 6 comments

2314254971 commented Jan 16, 2025

Search before asking

  • I had searched in the issues and found no similar issues.

Operating system information

Windows

What happened

Problem: after setting an Ollama generation model, chat requests fail with the following error:
Execution failed: pemja.core.PythonException: <class 'tenacity.RetryError'>: <Future at 0x7f8d69e3fc70 state=finished raised RuntimeError>
at /openspg_venv/lib/python3.8/site-packages/kag/solver/main_solver.invoke(main_solver.py:94)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/logic/solver_pipeline.run(solver_pipeline.py:67)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/implementation/default_reasoner.reason(default_reasoner.py:64)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor.execute(default_lf_executor.py:239)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_lf(default_lf_executor.py:204)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_chunk_answer(default_lf_executor.py:154)
at /openspg_venv/lib/python3.8/site-packages/kag/solver/retriever/impl/default_chunk_retrieval.recall_docs(default_chunk_retrieval.py:425)
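The tenacity.RetryError above wraps whatever exception the retried call kept raising, so the root cause (typically a connection failure to the Ollama endpoint) is hidden. A minimal sketch of how to surface it, using a hypothetical call_llm() stand-in rather than actual KAG code:

import tenacity

@tenacity.retry(stop=tenacity.stop_after_attempt(3))
def call_llm():
    # Hypothetical stand-in for the KAG LLM request; raises when Ollama is unreachable.
    raise RuntimeError("connection refused")

try:
    call_llm()
except tenacity.RetryError as e:
    # last_attempt is a concurrent.futures.Future; .exception() holds the real error.
    print("root cause:", e.last_attempt.exception())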

How to reproduce

My YAML configuration:
openie_llm: &openie_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

chat_llm: &chat_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://localhost:11434/v1/
  model: bge-m3:latest  # qwen2_0.5b_instruct:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model

Ollama version: 0.5.6
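Note that the OpenSPG server runs inside Docker, where localhost refers to the container itself rather than the machine running Ollama. A minimal reachability sketch, assuming the requests package is installed and the script runs from inside the container:

import requests

# Ollama answers "Ollama is running" on its root path when reachable.
for base in ("http://localhost:11434/", "http://host.docker.internal:11434/"):
    try:
        r = requests.get(base, timeout=5)
        print(base, "->", r.status_code, r.text.strip())
    except requests.RequestException as e:
        print(base, "-> unreachable:", e)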

Are you willing to submit PR?

  • Yes I am willing to submit a PR!
BBC-9527 commented:

Change the URL to something like http://host.docker.internal:11434
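On Docker Desktop for Windows, host.docker.internal resolves to the host machine from inside a container. A one-line sketch to confirm the name resolves, assuming it is run inside the OpenSPG container:

import socket

# On Linux hosts this name only exists if the container was started with
# --add-host=host.docker.internal:host-gateway.
print(socket.gethostbyname("host.docker.internal"))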

2314254971 (Author) commented:

> host.docker.internal

That is what I was already using when configuring on the website: http://host.docker.internal:11434/ . It reports the same error.

[screenshots: configuration page and resulting error]

caszkgui (Collaborator) commented Jan 16, 2025

> That is what I was already using when configuring on the website: http://host.docker.internal:11434/ . It reports the same error.

The base_url needs the http:// prefix.

[screenshot of the corrected configuration]

hanwsf commented Jan 19, 2025

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://127.0.0.1:11434/v1
  model: bge-m3:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model

With the configuration above, I can create a knowledge base in development mode.
But in product mode it always reports an unknown error:
<class 'RuntimeError'>: invalid vectorizer config: Connection error.
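A "Connection error" from the vectorizer usually means the server-side container cannot reach 127.0.0.1:11434, since 127.0.0.1 inside a container points at the container itself. A minimal sketch to exercise the same OpenAI-compatible embeddings endpoint directly, assuming the openai package and that this Ollama version serves /v1/embeddings:

from openai import OpenAI

# Replace base_url with an address reachable from wherever this runs;
# 127.0.0.1 only works on the Ollama host itself.
client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="empty")
resp = client.embeddings.create(model="bge-m3:latest", input="hello world")
print(len(resp.data[0].embedding))  # expect 1024, matching vector_dimensions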

2314254971 (Author) commented:

> The base_url needs the http:// prefix.

Hello, I added the prefix as you suggested.

[screenshot of the updated configuration]

But question answering still fails intermittently; screenshots below.

[screenshots of the intermittent errors]

hanwsf commented Jan 31, 2025

https://openspg.yuque.com/ndx6g9/docs/iu5cok24efl1z2nc
Solved.

Enter the container with docker exec -it ...; the gateway address turns out to be 172.20.0.1, and curl 172.20.0.1:11434/v1 replies "Ollama is running".
Do not use the ollama type for local models.
Add the model under the maas type instead, with the base URL:
http://172.20.0.1:11434/v1
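To confirm that the gateway address also works for chat completions, a minimal sketch against the OpenAI-compatible endpoint; note that 172.20.0.1 is the gateway of this particular Docker network and will differ on other machines:

from openai import OpenAI

# Gateway IP taken from inside the container (e.g. via `ip route`); adjust as needed.
client = OpenAI(base_url="http://172.20.0.1:11434/v1", api_key="empty")
resp = client.chat.completions.create(
    model="qwen2_0.5b_instruct:latest",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)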

