Allow to output the resulting table to a file #22
Comments
There's the JSON saving option right now. We can add a …
Thanks!
Yep, JSON is nice, but you can't publish it directly. Python coverage reports, by contrast, can produce nice HTML that you can just publish as a build artifact, giving you a way to view benchmarks for any build in the history. Of course, I could implement a script that reads the JSON and outputs the results as txt/csv/html, but that sounds like something that should be done internally.
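For reference, a minimal sketch of such a conversion script, assuming the saved JSON has a top-level "benchmarks" list whose entries carry a "name" and a "stats" mapping:

```python
# Sketch: convert a saved pytest-benchmark JSON file to CSV.
# Assumes the saved-JSON layout: a top-level "benchmarks" list whose
# entries have "name" and a "stats" mapping (min/max/mean/stddev/...).
import csv
import json
import sys

def json_to_csv(json_path, csv_path):
    with open(json_path) as f:
        data = json.load(f)

    fields = ["name", "min", "max", "mean", "stddev", "rounds"]
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for bench in data["benchmarks"]:
            row = {"name": bench["name"]}
            row.update({k: bench["stats"][k] for k in fields[1:]})
            writer.writerow(row)

if __name__ == "__main__":
    json_to_csv(sys.argv[1], sys.argv[2])
```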
Or maybe a log option?
It's not really a log though (which implies something machine-readable); I primarily meant more human-readable formats like CSV, txt, or HTML. Please see my comment in #20 -- if the data collection and reporting logic are decoupled, it would be simple to add any number of output backends: csv/html/txt/json/terminal/pygal.
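As a rough illustration of that decoupling (none of these names exist in pytest-benchmark; this is just the shape of the idea):

```python
# Illustrative only: what decoupled reporting backends could look like.
# The collection side produces `rows` once; each backend only decides
# how those rows get written out.
import csv
import json

class CsvBackend:
    def render(self, rows, path):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["name", "min", "mean", "max"])
            for r in rows:
                writer.writerow([r["name"], r["min"], r["mean"], r["max"]])

class JsonBackend:
    def render(self, rows, path):
        with open(path, "w") as f:
            json.dump(rows, f, indent=2)

BACKENDS = {"csv": CsvBackend(), "json": JsonBackend()}

def report(rows, fmt, path):
    BACKENDS[fmt].render(rows, path)
```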
A related improvement would be the ability to emit the table in a somewhat standard format, for example as a reST table (see texttable).
Well, we could have an option to output reStructuredText, but it's not as rich as HTML; coloring and alignment are tricky there.
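For what it's worth, a sketch of what the reST output could look like -- using the third-party tabulate package here rather than the texttable suggested above, since tabulate has a built-in "rst" table format; the numbers are made up:

```python
# Sketch: render benchmark stats as a reStructuredText table via `tabulate`.
from tabulate import tabulate

rows = [
    ["test_fast", 1.2e-6, 1.5e-6, 2.1e-6],  # example figures, not real data
    ["test_slow", 3.4e-3, 3.6e-3, 4.0e-3],
]
print(tabulate(rows,
               headers=["name", "min (s)", "mean (s)", "max (s)"],
               tablefmt="rst"))
```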
@ionelmc Just wanted to check if there's any plan to implement this, or if there's a workaround using the hooks?
You can read the data files (the JSON you get when using --benchmark-save or --benchmark-autosave) yourself.
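A sketch of that workaround, assuming the default .benchmarks storage layout used by --benchmark-save/--benchmark-autosave:

```python
# Sketch: load the most recent saved benchmark run.
# Assumes the default storage location: .benchmarks/<machine-dir>/NNNN_*.json,
# where the numeric prefix makes a lexicographic sort pick the latest run.
import json
import pathlib

def latest_run(storage=".benchmarks"):
    files = sorted(pathlib.Path(storage).glob("*/*.json"))
    if not files:
        raise FileNotFoundError("no saved benchmark runs found")
    return json.loads(files[-1].read_text())

run = latest_run()
print(run["commit_info"])  # commit id, branch, dirty flag, ...
for bench in run["benchmarks"]:
    print(bench["name"], bench["stats"]["mean"])
```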
@ionelmc Ideally I'm hoping to construct a table with both the current and the compared-against results. Since the benchmark.json ultimately DOES contain both commit information and benchmark details, is there a way to capture both the comparison data and the commit information via a hook?
I think the only way would be to extend the pytest_benchmark_group_stats hook to also pass a commit_info argument... |
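In the meantime, a possible workaround (a sketch, untested) is the existing pytest_benchmark_update_json hook, which already receives the full output JSON -- commit_info included -- before it is written:

```python
# conftest.py -- sketch of capturing commit info and benchmark stats together
# via the existing pytest_benchmark_update_json hook, which is called with
# the complete output JSON when saving is enabled.
import json

def pytest_benchmark_update_json(config, benchmarks, output_json):
    # output_json bundles commit_info and the per-benchmark stats, so a
    # combined table can be derived here without a new hook argument.
    summary = {
        "commit": output_json["commit_info"],
        "results": [
            {"name": b["name"], "mean": b["stats"]["mean"]}
            for b in output_json["benchmarks"]
        ],
    }
    with open("benchmark_summary.json", "w") as f:
        json.dump(summary, f, indent=2)
```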
@ionelmc I do think it would be useful. What's the procedure for introducing something like this to your repo (i.e., a fork)?
See #230 (comment) for an example of using the saved benchmark JSON to generate HTML tables in the pytest HTML output file, if it helps.
This would be extremely useful if you run this as part of continuous integration -- currently, the images can be saved (one per benchmark), but the resulting table itself cannot. Grepping through test logs on a build server is not fun; it would be much nicer if the benchmarks could be pulled out.
If it were possible to dump the results into a file (txt, csv, or maybe even nicely formatted HTML, kind of like coverage does), then the benchmarks could be automatically published on each build as test artifacts.
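To make the idea concrete, here is a minimal sketch of a JSON-to-HTML converter along those lines, assuming the same saved-JSON layout as above:

```python
# Sketch: turn a saved pytest-benchmark JSON file into a minimal HTML table
# that a CI server can publish as a build artifact (coverage.py-style).
import json
import sys

ROW = ("<tr><td>{name}</td><td>{min:.3g}</td>"
       "<td>{mean:.3g}</td><td>{max:.3g}</td></tr>")

def json_to_html(json_path, html_path):
    with open(json_path) as f:
        data = json.load(f)
    rows = [
        ROW.format(name=b["name"],
                   **{k: b["stats"][k] for k in ("min", "mean", "max")})
        for b in data["benchmarks"]
    ]
    html = ("<table><tr><th>name</th><th>min (s)</th>"
            "<th>mean (s)</th><th>max (s)</th></tr>"
            + "".join(rows) + "</table>")
    with open(html_path, "w") as f:
        f.write(html)

if __name__ == "__main__":
    json_to_html(sys.argv[1], sys.argv[2])
```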