In many cases, benchmarks can usefully collect other merit factors in addition to timings.
For instance, when benchmarking optimization algorithms you may want to record the quality of the result that is reached, so that the benchmarks can be used to evaluate which code offers the best speed/quality trade-off.
Moreover, the additional merit factor may vary from run to run, just like the timings. Heuristic optimization codes are again a good example: multiple runs may deliver different solutions scattered around the exact optimum. In such cases, even these additional merit factors may need to be evaluated statistically, over multiple runs.
It would be great if pytest-benchmark offered a way to deal with these situations. The `extra_info` field is already a very good starting point; however, being able to customize the reporting is also needed.
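For reference, here is a minimal sketch of what `extra_info` can already do today. The `optimize` function is a hypothetical stand-in for a stochastic optimizer; quality values are collected across the benchmarked calls and their statistics are attached to the report:

```python
import random
import statistics


def optimize(rng):
    """Hypothetical stochastic optimizer, for illustration only.

    Returns an objective value scattered around the true optimum (1.0).
    """
    return 1.0 + rng.gauss(0.0, 0.05)


def test_optimizer_speed_and_quality(benchmark):
    rng = random.Random()
    qualities = []  # one quality value per benchmarked call

    def run():
        qualities.append(optimize(rng))

    benchmark(run)

    # extra_info entries end up in the JSON output (--benchmark-json),
    # but they are not shown in the default terminal table, which is
    # why customizable reporting is being requested here.
    benchmark.extra_info["quality_mean"] = statistics.mean(qualities)
    if len(qualities) > 1:
        benchmark.extra_info["quality_stdev"] = statistics.stdev(qualities)
```

This covers the collection side; what is still missing is a way (a hook or similar) to feed such values into pytest-benchmark's own statistics and tables.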