In the current code of the Azure OpenAI client's chat creation:
```python
def chat_completions_create(self, model, messages, **kwargs):
    url = f"https://{model}.westus3.models.ai.azure.com/v1/chat/completions"
    if self.base_url:
        url = f"{self.base_url}/chat/completions"
```
With `base_url` set, this builds a request URL of the form:

`https://openai-[region].openai.azure.com/openai/deployments/[deployment name]/chat/completions`

which fails with a 404 Resource Not Found error, because Azure OpenAI deployment endpoints require the `api-version` query parameter. The URL should instead be:

`https://openai-[region].openai.azure.com/openai/deployments/[deployment name]/chat/completions?api-version=[version]`
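A minimal sketch of what the fix could look like: thread an API version value through and append it as a query parameter. The helper and parameter names below are my assumption for illustration, not the library's actual fields:

```python
from urllib.parse import urlencode


def build_chat_completions_url(model: str, base_url: str | None, api_version: str) -> str:
    """Hypothetical helper mirroring the provider's URL logic, with the fix applied."""
    if base_url:
        # Azure OpenAI deployment endpoints require the api-version query
        # parameter; omitting it is what triggers the 404 Resource Not Found.
        return f"{base_url}/chat/completions?{urlencode({'api-version': api_version})}"
    # Default serverless endpoint, unchanged from the current code.
    return f"https://{model}.westus3.models.ai.azure.com/v1/chat/completions"


url = build_chat_completions_url(
    model="my-model",
    base_url="https://openai-eastus.openai.azure.com/openai/deployments/my-deployment",
    api_version="2024-02-01",
)
# -> .../openai/deployments/my-deployment/chat/completions?api-version=2024-02-01
```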
And last but not least: I like this library; it's faster than LiteLLM and, as others have mentioned, cleaner.
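For anyone blocked on this in the meantime, one possible workaround (a sketch; the endpoint, deployment name, and API version below are placeholders) is to call the deployment through the official `openai` SDK's `AzureOpenAI` client, which appends `api-version` for you:

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://openai-eastus.openai.azure.com",  # placeholder endpoint
    api_version="2024-02-01",                                 # placeholder version
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="my-deployment",  # the Azure deployment name stands in for the model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```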
@ksolo when are we planning to add a fix for this?
when is the next release planned for?
> when is the next release planned for?

We are also waiting for this update. Thanks.
@ksolo any updates on this issue?
@ksolo are there any updates on when this will be fixed?