
[Bug] With OPENAI_MODEL_LIST set and ENABLED_OLLAMA=0, the gpt-4o-mini model still appears on first load #6134

Closed
cshitian opened this issue Feb 14, 2025 · 4 comments · Fixed by #6186
Labels
🐛 Bug Something isn't working | 缺陷 ollama Relative to Ollama Provider and ollama models released

Comments

@cshitian

📦 Deployment environment

Docker

📌 Software version

v1.53.12

💻 System environment

Ubuntu

🌐 Browser

Chrome

🐛 Problem description

-e OPENAI_MODEL_LIST=-all,+Qwen/Qwen2.5-72B-Instruct-128K,+deepseek-ai
-e DEFAULT_AGENT_CONFIG="model=Qwen/Qwen2.5-72B-Instruct-128K;provider=openai;"
-e ENABLED_OLLAMA=0 \
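
For context, the three flags above would sit inside a full `docker run` invocation roughly like this. This is a sketch only: the image name, container port, and detached mode are assumptions, not taken from the report.

```shell
# Hypothetical full invocation; image tag and port mapping are assumptions.
docker run -d -p 3210:3210 \
  -e OPENAI_MODEL_LIST=-all,+Qwen/Qwen2.5-72B-Instruct-128K,+deepseek-ai \
  -e DEFAULT_AGENT_CONFIG="model=Qwen/Qwen2.5-72B-Instruct-128K;provider=openai;" \
  -e ENABLED_OLLAMA=0 \
  lobehub/lobe-chat
```

Here `OPENAI_MODEL_LIST` first removes all built-in models (`-all`) and then whitelists the two entries prefixed with `+`, which is why gpt-4o-mini is not expected to appear at all.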

The default model is only shown after refreshing the page.

(screenshot attached)

📷 Steps to reproduce

No response

🚦 Expected result

No response

📝 Additional information

No response

@cshitian cshitian added the 🐛 Bug Something isn't working | 缺陷 label Feb 14, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Feb 14, 2025
@dosubot dosubot bot added the ollama Relative to Ollama Provider and ollama models label Feb 14, 2025

dosubot bot commented Feb 14, 2025

Hey, @cshitian! I'm here to help you with any bugs, questions, or contributions you have. Let's squash those bugs together! 🤖

This issue of the "gpt-4o-mini" model appearing initially despite setting OPENAI_MODEL_LIST and ENABLED_OLLAMA to 0 is a known bug in LobeChat. It was present in earlier versions and required a page refresh to apply the default model configuration. This bug has been addressed in version 1.53.6, so it should not occur in version 1.53.12. However, if you are still experiencing this issue, it might be due to a specific configuration or deployment setup.

To troubleshoot, ensure that your Docker environment variables are correctly set. You can try entering the model configuration interface and clicking the confirmation button without making any changes, which might force the system to display the correct model capabilities. If the problem persists, consider checking for any additional configurations that might affect the model selection during initialization [1][2].


@github-project-automation github-project-automation bot moved this from Roadmap - Chat 1.x to Done in Lobe Chat Routine Feb 16, 2025
@lobehubbot
Member

@cshitian

This issue is closed. If you have any questions, you can comment and reply.

@lobehubbot
Member

🎉 This issue has been resolved in version 1.56.2 🎉

The release is available on:

Your semantic-release bot 📦🚀

@cshitian
Author

The problem still exists. I access the site through a reverse-proxied domain.
Accessing the domain directly shows all models.
But accessing https://<domain>/chat shows only the models I configured.
