
Wishlist: hook (or other means) to collect other merit factors over benchmarking tests and report it #266

Open · callegar opened this issue Sep 8, 2024 · 0 comments

Comments


callegar commented Sep 8, 2024

In many cases it can be useful for benchmarks to collect other merit factors in addition to timings.

For instance, when benchmarking optimization algorithms you may want to collect data about the quality of the result that is reached, so that the benchmarks can be used to evaluate which code offers the best speed/quality trade-off.

In addition, in many cases the extra merit factor may vary from run to run, just like the timings. Heuristic optimization codes are again an example: multiple runs may deliver different solutions scattered around the exact optimum. In such cases, these additional merit factors may also need to be evaluated statistically, over multiple runs.

It would be great if pytest-benchmark could offer a way to deal with these situations. The extra_info field is already a very good starting point, but being able to customize the reporting is also needed. A sketch of what seems possible today is shown below.
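For reference, here is a minimal sketch of the kind of workaround that already seems possible with extra_info (the toy_optimizer function and the objective_* key names are made up for illustration): the benchmarked callable records its own merit factor on every timed call, and a statistical summary is stashed in extra_info afterwards. The limitation is that only the raw timings get the built-in statistics and reporting.

```python
import random
import statistics


def toy_optimizer(n_trials=1000):
    """Stand-in heuristic optimizer: random search minimizing (x - 3)**2.

    Purely illustrative; a real benchmark would call the code under test.
    """
    best_x, best_f = None, float("inf")
    for _ in range(n_trials):
        x = random.uniform(-10.0, 10.0)
        f = (x - 3.0) ** 2
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f


def test_optimizer_speed_and_quality(benchmark):
    objective_values = []  # extra merit factor, collected on every timed call

    def run():
        solution, objective = toy_optimizer()
        objective_values.append(objective)
        return solution

    benchmark(run)

    # pytest-benchmark only aggregates the timings, so the quality statistics
    # have to be computed by hand and stashed in extra_info for the report/JSON.
    benchmark.extra_info["objective_mean"] = statistics.mean(objective_values)
    if len(objective_values) > 1:
        benchmark.extra_info["objective_stdev"] = statistics.stdev(objective_values)
```

This works for storing the numbers, but the values only end up as opaque entries in the JSON output; they do not get the per-round statistics, comparison, or table/histogram reporting that the timings get, which is what this wishlist item is about.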

callegar changed the title from "Wishlist: hook (or other means) to collect more data over benchmarking tests and report it" to "Wishlist: hook (or other means) to collect other merit factors over benchmarking tests and report it" on Sep 8, 2024