Is there currently a way to omit failed tests from the timing statistics?
If we have nondeterminism and record a success rate, it might be desirable to only account for successful runs in the statistics.
I have a use case for tracking the performance and success rate of non-deterministic functions.
The following function serves to outline the scenario:
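A minimal sketch of the kind of function meant here; the 30 % failure probability and the RuntimeError are illustrative assumptions, not the original snippet:

```python
import random

def unreliable_workload():
    # Non-deterministic work: most calls succeed, some raise.
    if random.random() < 0.3:
        raise RuntimeError("transient failure")
    return sum(i * i for i in range(1_000))
```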
I have played around and arrived at the following result:
To get the new column succ actually displayed, I had to also:
Add succ to pytest_benchmark.utils.ALLOWED_COLUMNS.
Overwrite pytest_benchmark.table.display so it shows succ.
(How exactly to achieve those two things is left as an exercise for the reader; a rough sketch of the idea follows below.)
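A rough sketch of what that monkeypatching might look like, assuming it lives in a conftest.py; the display part is only outlined, since the internals of pytest_benchmark.table differ between versions:

```python
# conftest.py -- outline of the two-step patch described above.
import pytest_benchmark.utils

# Step 1: let "succ" pass the --benchmark-columns validation.
if "succ" not in pytest_benchmark.utils.ALLOWED_COLUMNS:
    pytest_benchmark.utils.ALLOWED_COLUMNS.append("succ")

# Step 2 (outline only): replace the display logic in pytest_benchmark.table
# so the extra column is actually rendered, e.g. by pulling "succ" out of
# each benchmark's extra_info when the rows are built.
```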
While this does work, I am unsure if my solution could be upstreamed easily.
How should I do it if I want my solution to be merged into pytest-benchmark?
Alternate and related approaches:
Add an argument to benchmark.pedantic that makes it continue on exceptions and exposes the list of exceptions caught per round (like [None, None, RuntimeError, None, RuntimeError]).
Add an argument to benchmark.pedantic to change the return type to a list of all results, then set up the benchmarked function so that it catches relevant exceptions and returns whatever I want (see the sketch after this list).
Allow extra_info keys in the terminal table.
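To make the second alternative concrete, a hedged sketch of how it might look from the test side; return_all is a hypothetical argument name, not something benchmark.pedantic accepts today:

```python
def guarded():
    # Catch the relevant exception so every round completes and can be timed;
    # unreliable_workload() is the illustrative function sketched above.
    try:
        return unreliable_workload()
    except RuntimeError as exc:
        return exc

def test_unreliable(benchmark):
    # Hypothetical: return_all=True would make pedantic() hand back one
    # result per round instead of only the last one.
    results = benchmark.pedantic(guarded, rounds=50, return_all=True)
    succ = sum(not isinstance(r, Exception) for r in results) / len(results)
    benchmark.extra_info["succ"] = succ
```

Combined with the succ column patch above, the recorded rate would then show up in the terminal table, which is what the third alternative would allow without any patching.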