
[Bug] System agent still uses gpt-4o-mini after setting DEFAULT_AGENT_CONFIG #6014

Open
BrikerMan opened this issue Feb 11, 2025 · 4 comments
Labels
🐛 Bug Something isn't working | 缺陷 ollama Relative to Ollama Provider and ollama models

Comments

@BrikerMan

📦 Environment

Docker

📌 Version

v1.52.16

💻 Operating System

Ubuntu

🌐 Browser

Chrome

🐛 Bug Description

I have set DEFAULT_AGENT_CONFIG and ENABLED_OPENAI=0, but the default system agent still uses gpt-4o-mini.

docker-compose.yml:

```yaml
version: '3.8'

services:
  lobe-chat:
    image: lobehub/lobe-chat
    container_name: lobe-chat
    restart: always
    ports:
      - '3210:3210'
    env_file:
      - .env
```

.env:

```env
DEFAULT_AGENT_CONFIG='model=deepseek-r1:32b;provider=ollama'
FEATURE_FLAGS="-knowledge_base"

ENABLED_OPENAI=0

ENABLED_OLLAMA=1
OLLAMA_PROXY_URL=http://192.168.2.206:11434
OLLAMA_MODEL_LIST="+deepseek-r1:32b,+qwen2.5:latest"
OLLAMA_ORIGINS="*"

LLM_VISION_IMAGE_USE_BASE64=1
```
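For reference, `DEFAULT_AGENT_CONFIG` uses a semicolon-delimited `key=value` format. The sketch below shows how that string is expected to decompose; this is an illustrative parser written for this issue, not LobeChat's actual implementation. Note that the model name itself contains a colon (`deepseek-r1:32b`), so only the first `=` in each pair should be treated as the separator:

```python
def parse_agent_config(raw: str) -> dict:
    """Split a semicolon-delimited 'key=value' string into a dict.

    Only the first '=' in each pair is a separator, so values like
    'deepseek-r1:32b' survive intact.
    """
    config = {}
    for pair in raw.split(";"):
        if not pair.strip():
            continue  # tolerate trailing or doubled semicolons
        key, _, value = pair.partition("=")
        config[key.strip()] = value.strip()
    return config


print(parse_agent_config("model=deepseek-r1:32b;provider=ollama"))
# → {'model': 'deepseek-r1:32b', 'provider': 'ollama'}
```

If the value printed here matches what you intended, the env string itself is well-formed and the problem lies in how the app applies it, not in the syntax.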

📷 Recurrence Steps

  • Start Lobe Chat with docker compose
  • Open LobeChat in an incognito browser window and inspect the initial settings

🚦 Expected Behavior

The model used by the agent should be replaced with the configured one.

📝 Additional Information

No response

@BrikerMan BrikerMan added the 🐛 Bug Something isn't working | 缺陷 label Feb 11, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Feb 11, 2025
@lobehubbot
Member

👀 @BrikerMan

Thank you for raising an issue. We will investigate into the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Feb 11, 2025

In Lobe Chat version 1.52.16, there is a known issue where the default model is not applied on first page load: the application initially falls back to gpt-4o-mini and only applies the configured default model after a page refresh. Rolling back to Lobe Chat version 1.51.3 works around this issue [1].
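If rolling back, the version can be pinned directly in the compose file rather than using the floating `latest` image. This assumes LobeHub publishes image tags matching the release version, e.g. `v1.51.3`:

```yaml
services:
  lobe-chat:
    # Pin the image tag so `docker compose pull` cannot
    # silently upgrade past the rollback target.
    image: lobehub/lobe-chat:v1.51.3
```

Run `docker compose pull && docker compose up -d` after editing to recreate the container on the pinned tag.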

To continue talking to Dosu, mention @dosu.



@dosubot dosubot bot added the ollama Relative to Ollama Provider and ollama models label Feb 11, 2025
@BrikerMan
Author

Rolling back to v1.51.3 did not fix the problem: the Topic Naming Model, Translation Assistant, Conversation History Model, and Assistant Metadata Generation Model are still gpt-4o-mini. @dosu


dosubot bot commented Feb 11, 2025

In version 1.51.3 there is still a known issue where certain model settings (such as the Topic Naming Model, Translation Assistant, Conversation History Model, and Assistant Metadata Generation Model) may default to gpt-4o-mini even when another default model has been set. This may be because the configuration is not applied correctly, or is not preserved after an update [1]. For now, you may need to manually check and adjust these model settings to make sure they use the model you expect.

To continue talking to Dosu, mention @dosu.



Development

No branches or pull requests

2 participants