Show current LLM model used #2001
base: main
Conversation
FadhlanR commented Jan 6, 2025 (edited)
For an initial list, I've used a filtered set from https://openrouter.ai/models?fmt=cards&order=newest&supported_parameters=tools: openai/gpt-4o, openai/gpt-4o-mini, anthropic/claude-3.5-sonnet, google/gemini-pro-1.5. We can try out other models and add them if they seem to be working; this gives a reasonable list of some popular models right now. O1 will be good to add, but it is not currently generally available via OpenRouter, and the newer Google models are all marked experimental (which would be fine, but they make them free and then heavily rate limit them, so they quickly break).
Currently, I obtain the list by calling …
Yep, this can be how we have the default list.
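For reference, a minimal sketch of how such a list could be pulled dynamically. The OpenRouter `/api/v1/models` endpoint and the `supported_parameters` field on each entry are assumptions here, not verified against this PR:

```ts
// A sketch, not the PR's code: fetch OpenRouter's model catalog and keep
// only tool-capable models. Endpoint and response shape are assumptions.
interface OpenRouterModel {
  id: string;
  supported_parameters?: string[];
}

async function fetchToolCapableModels(): Promise<string[]> {
  let response = await fetch('https://openrouter.ai/api/v1/models');
  if (!response.ok) {
    throw new Error(`OpenRouter request failed: ${response.status}`);
  }
  let { data } = (await response.json()) as { data: OpenRouterModel[] };
  return data
    .filter((m) => m.supported_parameters?.includes('tools'))
    .map((m) => m.id);
}
```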
Force-pushed from 5c159cd to 3e66444
  'google/gemini-pro-1.5',
  'openai/gpt-4o',
  'openai/gpt-4o-mini',
];
Not sure if this is the right place to put the DEFAULT_LLM and DEFAULT_LLM_LIST constants, but I put them here to make them available for host and ai-bot.
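For context, a hypothetical sketch of what that shared constants module might look like. The DEFAULT_LLM choice and the file location are assumptions; the list entries come from the diff and discussion above:

```ts
// Hypothetical shared module; the actual path and DEFAULT_LLM value are
// assumptions. The list entries come from the diff and discussion above.
export const DEFAULT_LLM = 'openai/gpt-4o';

export const DEFAULT_LLM_LIST = [
  'anthropic/claude-3.5-sonnet',
  'google/gemini-pro-1.5',
  'openai/gpt-4o',
  'openai/gpt-4o-mini',
];
```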
I am running …
@jurgenwerk I just tested it again locally with two models and it worked; I couldn't reproduce it. It seems the value of response.error is not a string. I guess we need to capture the error structure so we can update how we handle the error in fetchGenerationCost.
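A hedged sketch of the kind of defensive handling this suggests; the OpenRouterError shape is an assumption, since the actual error structure hadn't been captured yet at this point in the discussion:

```ts
// A sketch of handling a structured error instead of assuming a string;
// the OpenRouterError shape is an assumption.
interface OpenRouterError {
  code?: number;
  message?: string;
}

function describeError(error: unknown): string {
  if (typeof error === 'string') {
    return error;
  }
  if (error && typeof error === 'object') {
    let { code, message } = error as OpenRouterError;
    let suffix = code != null ? ` (code ${code})` : '';
    return `${message ?? 'Unknown error'}${suffix}`;
  }
  return String(error);
}
```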
@@ -570,6 +574,16 @@ export function isCommandEvent(
  );
}

function getModel(eventlist: DiscreteMatrixEvent[]): string {
  let activeLLMEvent = eventlist.findLast(
Should we be reading this from room state instead?
We could read this from the room state, but I chose to read it from the event list to make testing easier. Additionally, this approach is consistent with how we read skills from the room.
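A sketch of how getModel might read the active model from the event list, under stated assumptions: the 'app.boxel.active-llm' event type, its content shape, the DEFAULT_LLM fallback, and the DiscreteMatrixEvent stand-in are all hypothetical here, since the diff above is truncated:

```ts
// Minimal stand-in for the repo's DiscreteMatrixEvent type (assumption).
interface DiscreteMatrixEvent {
  type: string;
  content: Record<string, unknown>;
}

const DEFAULT_LLM = 'openai/gpt-4o'; // hypothetical fallback

// The 'app.boxel.active-llm' event type and its content shape are
// assumptions; the real PR may use different names.
function getModel(eventlist: DiscreteMatrixEvent[]): string {
  let activeLLMEvent = eventlist.findLast(
    (event) => event.type === 'app.boxel.active-llm',
  );
  if (!activeLLMEvent) {
    return DEFAULT_LLM;
  }
  return activeLLMEvent.content['model'] as string;
}
```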
I tried with a local key and can see the requests going to the right providers, even if they respond with what has been said before. I found, however, that when changing rooms the model didn't change in the dropdown: Screencast.from.09-01-25.15.49.19.webm
I've not been able to get Sonnet working either; not sure why, but I think it's unrelated to this work. Edit: found the issue. If the last message is from the assistant, Claude breaks. I'll create a test and a fix in a different PR; it's a small change.
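A minimal sketch of the kind of small fix described above, not the actual PR change: per the comment, Claude breaks when the history ends with an assistant turn, so one option is to append a neutral user turn before sending. The ChatMessage shape is an assumption:

```ts
// A sketch of the fix described above: if the history ends with an
// assistant turn, append a neutral user turn. Shape is an assumption.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function ensureEndsWithUserMessage(history: ChatMessage[]): ChatMessage[] {
  let last = history[history.length - 1];
  if (last?.role === 'assistant') {
    return [...history, { role: 'user', content: 'Continue.' }];
  }
  return history;
}
```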
Fixed! Please check it again.