newest slowest test: create awareness of which changes had a negative impact.
projection of test runtime in 1 year: allow evaluation of the current trend, help re-think current choices.
build runtime by hour of day: infrastructure load throughout the day.
top n slowest test runtimes over time: feedback on improvements around slow tests.
feedback time from check-in: difference between the time of the source control check-in and the execution of the build.
mean time to pipeline recovery: how long does the team take to get a pipeline (not all builds) green again (thanks Duda)
mean time to feedback: how long until some build in the pipeline fails with a real error (not a flaky one)
build runtime variance: find builds whose runtime varies greatly (possibly due to external dependencies)
Maybe we can cluster test runtimes into up to 4-5 buckets, to get a quick overview of how fast the system's tests are. This would provide more transparency into where most of the time is spent.
number of manual re-runs (per day)
bottleneck/throughput (following lean principles: what's keeping my pipeline from being faster?)
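The runtime-variance idea above could be sketched like this — a minimal example, assuming build history is available as a dict of runtimes in seconds (the shape, names, and the 25% coefficient-of-variation threshold are all assumptions for illustration):

```python
from statistics import mean, pstdev

def flag_unstable_builds(history, threshold=0.25):
    """Flag builds whose runtime varies greatly relative to their mean.

    history: {build_name: [runtimes in seconds]} -- hypothetical shape.
    A build is flagged when its coefficient of variation (population
    stddev / mean) exceeds the threshold; those are candidates for
    investigating external dependencies.
    """
    flagged = {}
    for name, runtimes in history.items():
        m = mean(runtimes)
        cv = pstdev(runtimes) / m if m else 0.0
        if cv > threshold:
            flagged[name] = round(cv, 2)
    return flagged

history = {
    "unit-tests": [60, 62, 61, 59],       # stable runtimes
    "integration": [300, 620, 310, 900],  # varies wildly -> flagged
}
print(flag_unstable_builds(history))
```

A dashboard could then sort builds by this ratio instead of raw runtime, so a slow-but-steady build does not drown out a genuinely erratic one.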
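The 4-5-bucket overview mentioned above might look like this — a sketch with hypothetical fixed bucket boundaries (0.1s/1s/10s/60s; the issue does not specify boundaries, quantile-based buckets would work just as well):

```python
from bisect import bisect_right

# Assumed bucket boundaries in seconds; everything >= 60s lands in the last bucket.
BUCKETS = [0.1, 1.0, 10.0, 60.0]
LABELS = ["<0.1s", "<1s", "<10s", "<60s", ">=60s"]

def bucket_runtimes(runtimes):
    """Count how many test runtimes fall into each speed bucket."""
    counts = {label: 0 for label in LABELS}
    for t in runtimes:
        counts[LABELS[bisect_right(BUCKETS, t)]] += 1
    return counts

# Example: mostly fast tests, a few slow outliers.
print(bucket_runtimes([0.05, 0.2, 0.3, 5.0, 42.0, 120.0]))
# -> {'<0.1s': 1, '<1s': 2, '<10s': 1, '<60s': 1, '>=60s': 1}
```

A histogram over these counts would show at a glance whether total runtime is dominated by many medium tests or a handful of very slow ones.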