
Flexibility on choosing which model to use #94

Open
vishiivivek opened this issue Feb 16, 2025 · 1 comment

Comments

vishiivivek commented Feb 16, 2025

Description
Currently, deep-research appears to use a predefined LLM (i.e., o3-mini) for conducting deep research. However, it would be beneficial to allow users to choose which LLM they want to use based on their preferences, access, or API availability.

Proposed Enhancement:

  • Introduce a configuration setting (e.g., in a config file or as an argument) that allows users to specify which LLM they want to use.
  • Ensure that the integration is modular so that adding support for new models in the future is straightforward.
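A minimal sketch of what such a configuration argument could look like (the `--model` flag name and the `o3-mini` default are illustrative assumptions, not the project's actual CLI):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI sketch: lets the user override the model at launch
    # instead of relying on a hard-coded default.
    parser = argparse.ArgumentParser(description="deep-research (sketch)")
    parser.add_argument(
        "--model",
        default="o3-mini",  # assumed current default, per the issue description
        help="LLM to use for research (e.g. gpt-4o, o3-mini)",
    )
    return parser

# Example: the user picks a different model on the command line.
args = build_parser().parse_args(["--model", "gpt-4o"])
print(args.model)
```

Keeping the model name in one parsed setting (rather than scattered literals) is what makes adding new models later straightforward.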

Expected Benefits

  • Provides users with flexibility in selecting models based on cost, availability, or performance.
  • Increases the project's adaptability as new LLMs emerge.

Request for Approval
I am a beginner in open-source contributions and would love to work on implementing this feature. Would it be okay for me to work on this? If the maintainers have any preferences on how this should be implemented, I’d be happy to follow those guidelines.

@KertLynx

I think we can use any OpenAI-compatible LLM by adding the following values:

OPENAI_ENDPOINT=""
OPENAI_MODEL=""
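The two variables above could be resolved at startup roughly like this (a sketch only: the default endpoint and model shown here are assumptions, and `resolve_model_config` is a hypothetical helper, not existing project code):

```python
import os

def resolve_model_config() -> dict:
    # Read the proposed environment variables, falling back to the
    # assumed current defaults when they are unset.
    return {
        "endpoint": os.environ.get("OPENAI_ENDPOINT", "https://api.openai.com/v1"),
        "model": os.environ.get("OPENAI_MODEL", "o3-mini"),
    }

# Any OpenAI-compatible server (hosted or local) can then be targeted by
# exporting OPENAI_ENDPOINT and OPENAI_MODEL before launching the tool.
cfg = resolve_model_config()
print(cfg["endpoint"], cfg["model"])
```

This keeps the change modular: supporting a new provider is just a matter of pointing the endpoint at any server that speaks the OpenAI API.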

2 participants