[Bug] With OPENAI_MODEL_LIST set and ENABLED_OLLAMA=0, the gpt-4o-mini model still appears on first load #6134
Comments
Hey, @cshitian! I'm here to help you with any bugs, questions, or contributions you have. Let's squash those bugs together! 🤖

This issue of the "gpt-4o-mini" model appearing on first load despite setting `OPENAI_MODEL_LIST` and `ENABLED_OLLAMA=0` seems to be related to how the model list is initialized. To troubleshoot, ensure that your Docker environment variables are correctly set. You can also try entering the model configuration interface and clicking the confirmation button without making any changes, which might force the system to display the correct model capabilities. If the problem persists, check for any additional configuration that might affect model selection during initialization [1][2].
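As a quick way to confirm that the variables actually reached the container, here is a generic Docker sketch (not something suggested in this thread); the container name `lobe-chat` is an assumption, so substitute whatever `docker ps` shows for your deployment:

```bash
# Sketch: list the environment variables Docker passed to the container and
# filter for the ones relevant to this report. The container name "lobe-chat"
# is an assumed placeholder.
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' lobe-chat \
  | grep -E 'OPENAI_MODEL_LIST|ENABLED_OLLAMA|DEFAULT_AGENT_CONFIG'
```

If all three variables show the expected values, the Docker side is configured correctly and the behaviour is on the application's first-load path rather than a misconfiguration.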
This issue is closed. If you have any questions, you can comment and reply.
🎉 This issue has been resolved in version 1.56.2 🎉

The release is available on:

Your semantic-release bot 📦🚀
The problem still exists. I am accessing it via a reverse-proxied domain.
📦 Deployment environment
Docker
📌 Software version
v1.53.12
💻 System environment
Ubuntu
🌐 Browser
Chrome
🐛 Bug description
-e OPENAI_MODEL_LIST=-all,+Qwen/Qwen2.5-72B-Instruct-128K,+deepseek-ai \
-e DEFAULT_AGENT_CONFIG="model=Qwen/Qwen2.5-72B-Instruct-128K;provider=openai;" \
-e ENABLED_OLLAMA=0 \
The default model is only shown after refreshing the page (see the full docker run sketch below).
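For context, here is a sketch of how these flags might sit in a complete docker run invocation. The API key, proxy URL, and host port are placeholders, not values taken from this report; only the three flags above come from the issue itself.

```bash
# Sketch only: the reported flags embedded in a full docker run command.
# OPENAI_API_KEY and OPENAI_PROXY_URL values are placeholders.
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://example.com/v1 \
  -e OPENAI_MODEL_LIST=-all,+Qwen/Qwen2.5-72B-Instruct-128K,+deepseek-ai \
  -e DEFAULT_AGENT_CONFIG="model=Qwen/Qwen2.5-72B-Instruct-128K;provider=openai;" \
  -e ENABLED_OLLAMA=0 \
  lobehub/lobe-chat
```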
📷 Reproduction steps
No response
🚦 Expected result
No response
📝 Additional information
No response