Python has a built-in source of noise: the hash seed is randomized. Noise is bad because you can't tell whether, say, a 3% slowdown is noise or the result of a code change. So how do we get rid of the noise?
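To see the randomization in action, here is a quick sketch (the string being hashed is just illustrative): each fresh interpreter process hashes the same string differently, which perturbs dict/set layouts and, with them, timings.

```python
# Each child process gets its own random PYTHONHASHSEED by default,
# so the same string hashes to a different value per process.
import subprocess
import sys

cmd = [sys.executable, "-c", "print(hash('benchmark'))"]
for _ in range(3):
    print(subprocess.run(cmd, capture_output=True, text=True).stdout.strip())
# Typically prints three different values.
```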
Setting a fixed value for the hash seed (`export PYTHONHASHSEED=123`) can give distorted results: maybe your tweaked code is faster with one particular fixed seed but slower with others. It also doesn't help with other sources of randomness.
The other approach is to run the benchmarks multiple times in multiple processes, to sample a variety of seeds, and then use the combined results to report speed. This averages out the impact of different random seeds and results in less noise (a rough sketch follows below). This is ideally something the `pytest-benchmark` framework would do, so it can aggregate the different runs' results into one final result.
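A minimal sketch of that multi-process approach, assuming nothing about pytest-benchmark's actual API: each child process picks a fresh random hash seed, and the parent combines the timings. The timed dict-building snippet and the run count are illustrative.

```python
import os
import statistics
import subprocess
import sys

# Illustrative workload: hash-sensitive dict construction, timed with timeit.
SNIPPET = (
    "import timeit; "
    "print(timeit.timeit('d = {str(i): i for i in range(1000)}', number=1000))"
)

def timings_across_processes(runs=5):
    # Drop any inherited PYTHONHASHSEED so every child randomizes its own seed.
    env = {k: v for k, v in os.environ.items() if k != "PYTHONHASHSEED"}
    samples = []
    for _ in range(runs):
        out = subprocess.run(
            [sys.executable, "-c", SNIPPET],
            capture_output=True, text=True, check=True, env=env,
        )
        samples.append(float(out.stdout))
    return samples

samples = timings_across_processes()
print(f"mean={statistics.mean(samples):.4f}s  stdev={statistics.stdev(samples):.4f}s")
```

Reporting a mean (or median) across seeds, rather than a single run's number, is what makes a 3% difference interpretable against the spread.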