[Observability] Add observability support for function calling. #1924
I have encountered exactly the same issue when using Spring AI; it would be great if the function calling branch could be displayed in the tracing result.
By the way, we are also attempting to use the … Additionally, I have already raised this issue in the OpenTelemetry community: …
Observability for function calls would help me pinpoint performance bottlenecks more accurately. Looking forward to this feature!
@Cirilla-zmh thanks for submitting this issue. I've started working on an implementation for this. It's worth noting that Spring AI is instrumented the same way as the rest of the Spring portfolio, based on the Micrometer Observation API. You can export traces and metrics via OTLP to an OpenTelemetry backend by adding the needed Micrometer dependencies. You can find full observability examples for models and vector stores here: https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/observability
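As a concrete illustration of the OTLP export mentioned above (this is a sketch, not part of the issue itself): assuming a Spring Boot 3.x application with the `io.micrometer:micrometer-tracing-bridge-otel` and `io.opentelemetry:opentelemetry-exporter-otlp` dependencies on the classpath, a minimal tracing configuration might look like this — the property names are Spring Boot's and worth double-checking against the version in use:

```properties
# application.properties
# Sample every request while experimenting (the default samples only a fraction)
management.tracing.sampling.probability=1.0
# OTLP/HTTP endpoint of the OpenTelemetry-compatible backend receiving the traces
management.otlp.tracing.endpoint=http://localhost:4318/v1/traces
```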
Cool! Thank you, and I look forward to seeing it soon. ;) As for best practices when using OpenTelemetry, let's discuss those under the OpenTelemetry project.
Expected Behavior
The observability of Spring AI is quite impressive, and it has helped us resolve numerous issues. Thank you so much!
However, as a crucial component of Spring AI, function calling lacks support for observability. In fact, our agent application contains a significant number of function calling invocations, and we would like to see them as spans in the trace. This visibility would help us understand how the LLM organizes these function calls and allow us to see the input and output during the invocation process (important data for evaluating LLM correctness).
We have conducted some preliminary research and believe that adding observability instrumentation to the
org.springframework.ai.chat.model.AbstractToolCallSupport#executeFunctions
method would be a good solution. If you have plans to support this but are not currently able to work on it, we would be more than happy to contribute.
Current Behavior
Function calls lack support for observability.
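For illustration, here is a minimal sketch of what such instrumentation might look like, assuming Micrometer's Observation API (the same mechanism used across the Spring portfolio). The observation name `spring.ai.tool.call`, the key value `tool.name`, and the `observeToolCall` wrapper are all hypothetical, not Spring AI's actual conventions:

```java
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationRegistry;

import java.util.function.Function;

public class ToolCallObservationSketch {

    // Hypothetical wrapper: records each tool/function invocation as an
    // Observation, so that a registered tracing handler can emit it as a span
    // (with the tool name attached as a low-cardinality tag).
    static <I, O> O observeToolCall(ObservationRegistry registry,
                                    String toolName,
                                    Function<I, O> tool,
                                    I input) {
        return Observation.createNotStarted("spring.ai.tool.call", registry)
                .lowCardinalityKeyValue("tool.name", toolName)
                .observe(() -> tool.apply(input));
    }

    public static void main(String[] args) {
        // A stand-in "tool" for demonstration; in practice this would be the
        // function resolved and invoked during the function calling flow.
        ObservationRegistry registry = ObservationRegistry.create();
        String result = observeToolCall(registry, "toUpper",
                (String s) -> s.toUpperCase(), "weather in Rome");
        System.out.println(result);
    }
}
```

In real instrumentation the registry would come from the application context (so configured handlers export the spans), rather than a locally created one as in this sketch.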
Context
No more context. If there is anything else you need to know, feel free to ask. :)