Metrics endpoint for shortfin apps (llm / sd) #900

Open
renxida opened this issue Feb 3, 2025 · 2 comments

renxida commented Feb 3, 2025

Like this: https://docs.vllm.ai/en/latest/serving/metrics.html (suggested by @ScottTodd)


renxida commented Feb 3, 2025

At a minimum, I think we should have metrics that tell us whether shortfin is waiting on IREE or whether we're failing to saturate the device.
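
As a rough illustration of the kind of signal I mean (a hedged sketch; the metric names and the `run_inference` call are hypothetical, not existing shortfin APIs): time the device invocation and track how many invocations are in flight, so a dashboard can show when requests are queued while the device sits idle.

```python
# Hypothetical sketch using prometheus_client; metric names and run_inference
# are placeholders, not existing shortfin APIs.
import time

from prometheus_client import Gauge, Histogram

IREE_INVOKE_SECONDS = Histogram(
    "shortfin_iree_invoke_seconds",
    "Wall-clock time spent inside an IREE device invocation",
)
INVOCATIONS_IN_FLIGHT = Gauge(
    "shortfin_device_invocations_in_flight",
    "Device invocations currently executing (0 while requests queue => idle device)",
)

def timed_invoke(run_inference, *args):
    """Wrap a device invocation so time spent waiting on IREE shows up as a metric."""
    INVOCATIONS_IN_FLIGHT.inc()
    start = time.monotonic()
    try:
        return run_inference(*args)
    finally:
        IREE_INVOKE_SECONDS.observe(time.monotonic() - start)
        INVOCATIONS_IN_FLIGHT.dec()
```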

ScottTodd commented

I see two partially overlapping projects:

  1. For developers working on technologies in the IREE/shortfin/ROCm/shark-ai/etc. tech stack, we want to be able to qualitatively and quantitatively analyze serving performance and then test how changes impact it.
    • We could consider serving performance to be an extension of inference performance, just with multiple requests in flight concurrently, but there are new metrics to consider that are unique to serving: for example, how long each request spends queued up on average, how long each request takes to complete on average, how many requests are running at a given time, how many requests hit some fast path (e.g. parameters and program instructions already loaded into memory vs. needing to be fetched from disk), etc. (see the sketch after this list).
    • Our existing profiling tools such as Tracy may be useful at this increased scale, either as-is today, with some additional instrumentation markers added (e.g. tagging each request), or even with deeper changes to the Tracy UI.
  2. Production users will want to know how efficiently they are using the available hardware and what configuration changes they can make to achieve better results.
    • We shouldn't expect these users to run Tracy on serving snapshots that are short in duration (< 10 minutes), analyze trace files, or make source code changes to the compiler and runtime. They'll want something that can provide a summary and steer them towards actions they are capable of taking.
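
A hedged sketch of what those serving-level metrics could look like with `prometheus_client` (the metric names, labels, and lifecycle hooks are illustrative, not something shortfin exposes today):

```python
# Illustrative serving-level metrics; names and label values are hypothetical.
from prometheus_client import Counter, Gauge, Histogram

REQUEST_QUEUE_SECONDS = Histogram(
    "shortfin_request_queue_seconds",
    "Time a request waits in the scheduler queue before execution starts",
)
REQUEST_E2E_SECONDS = Histogram(
    "shortfin_request_e2e_seconds",
    "Time from request arrival to final response",
)
REQUESTS_RUNNING = Gauge(
    "shortfin_requests_running",
    "Requests currently being executed",
)
REQUESTS_COMPLETED = Counter(
    "shortfin_requests_completed_total",
    "Completed requests, split by whether they hit a warm fast path",
    ["path"],  # e.g. path="warm" (params resident) vs path="cold" (fetched from disk)
)

# At the relevant points in a request's lifecycle:
#   REQUEST_QUEUE_SECONDS.observe(exec_start - arrival_time)
#   REQUESTS_RUNNING.inc() when execution starts, .dec() when it finishes
#   REQUESTS_COMPLETED.labels(path="warm").inc()
#   REQUEST_E2E_SECONDS.observe(done_time - arrival_time)
```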

Some reference pages for inspiration:

Having each server expose a metrics endpoint would allow monitoring dashboards for long-running servers and also give us a way to extract metrics for automated tests.
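
A minimal sketch of the endpoint itself, assuming a FastAPI app like the ones the shortfin llm / sd servers run (the route and wiring here are a suggestion, not existing code):

```python
# Hypothetical /metrics route serving the Prometheus text exposition format.
from fastapi import FastAPI, Response
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest

app = FastAPI()

@app.get("/metrics")
def metrics() -> Response:
    # Dumps every metric registered with the default prometheus_client
    # registry; scrapable by Prometheus/Grafana and easy to parse in tests.
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
```

A dashboard or an automated test could then just issue `GET /metrics` against a running server and assert on the values it cares about.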
