Allow to output the resulting table to a file #22

Open · aldanor opened this issue Sep 13, 2015 · 12 comments

Comments

aldanor commented Sep 13, 2015

This would be extremely useful if you run this as part of continuous integration -- currently the images can be saved (one per benchmark), but the resulting table itself cannot. Grepping through test logs on a build server is not fun -- it would be much nicer if the benchmarks could be pulled out.

If it were possible to dump the results into a file (txt, csv, or maybe even nicely formatted html, kind of like coverage does), then the benchmarks could be automatically published on each build as test artifacts.

@aldanor aldanor changed the title Allow to output the resulting table Allow to output the resulting table to a file Sep 13, 2015
ionelmc (Owner) commented Sep 13, 2015

There's the JSON saving option right now. We can add a --benchmark-save-text or something, I think ...


aldanor (Author) commented Sep 13, 2015

Yep, JSON is nice, but you can't publish it directly. Compare Python coverage reports, which can produce nice HTML that you just publish as a build artifact, giving you a way to view the benchmarks for any build in the history.

Of course, I could implement a script that reads the json and outputs the results as txt/csv/html, but it sounds like something that should be done internally.
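For reference, a minimal sketch of such a script, assuming the layout produced by --benchmark-json (a top-level "benchmarks" list whose entries carry a "stats" mapping):

```python
import csv
import json

# Assumes the layout written by `pytest --benchmark-json=benchmarks.json`:
# a top-level "benchmarks" list whose entries have "name" and "stats".
with open("benchmarks.json") as f:
    data = json.load(f)

fields = ["min", "max", "mean", "stddev", "median", "rounds"]
with open("benchmarks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name"] + fields)
    for bench in data["benchmarks"]:
        writer.writerow([bench["name"]] + [bench["stats"][k] for k in fields])
```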

ionelmc (Owner) commented Sep 13, 2015

Or maybe a --benchmark-result-log=path? Pytest has the builtin --result-log=path option, so a similar name would be good.

@ionelmc ionelmc added this to the v3.0.0 milestone Sep 13, 2015
aldanor (Author) commented Sep 14, 2015

It's not really a log, though (a log implies something machine-readable); I primarily meant more human-readable formats like csv, txt, or HTML.

Please see my comment in #20 -- if the data collection and reporting logic were decoupled, it would be simple to add any number of output backends like csv/html/txt/json/terminal/pygal.
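To illustrate the shape of that decoupling, here is a hypothetical sketch (none of these names exist in the plugin):

```python
from typing import Protocol


class OutputBackend(Protocol):
    """Hypothetical interface: render already-collected benchmark stats."""

    def render(self, benchmarks: list[dict]) -> str: ...


class CsvBackend:
    def render(self, benchmarks: list[dict]) -> str:
        lines = ["name,mean"]
        lines += [f"{b['name']},{b['stats']['mean']}" for b in benchmarks]
        return "\n".join(lines)


# The reporting step would then just look up a backend by name:
BACKENDS = {"csv": CsvBackend()}
```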

@ionelmc ionelmc modified the milestones: v3.1.0, v3.0.0 Oct 28, 2015
lelit commented Dec 9, 2016

A related improvement would be the ability to emit the table in a somewhat standard format, for example as a reST table (see texttable).
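As a rough sketch of what that could look like with texttable (dummy numbers here, not wired into the plugin):

```python
from texttable import Texttable  # pip install texttable

# Dummy rows standing in for benchmark stats.
rows = [
    ["Name", "Mean (us)", "StdDev (us)"],
    ["test_parse", 12.3, 0.4],
    ["test_render", 45.6, 1.2],
]

table = Texttable()
table.set_cols_align(["l", "r", "r"])
table.add_rows(rows)  # the first row is treated as the header
print(table.draw())
```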

ionelmc (Owner) commented Dec 9, 2016

Well, we could have an option to output reStructuredText, but it's not as rich as HTML: coloring and alignment are tricky there.

@ionelmc ionelmc modified the milestones: v3.2.0, v4.0.0 Jan 3, 2019
risoms commented Mar 13, 2023

@ionelmc Just wanted to check if there's any plan to implement this, or if there's a workaround using the hooks?

ionelmc (Owner) commented Mar 14, 2023

You can read the data files (the ones you get when using --benchmark-save) and convert that JSON to whatever you like.
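For example, a minimal sketch of such a conversion to HTML, assuming the saved files live under the default .benchmarks directory (not an official tool):

```python
import html
import json
from pathlib import Path

# --benchmark-save writes JSON files under .benchmarks/ by default; this
# assumes at least one such file exists and picks the most recent one.
latest = max(Path(".benchmarks").rglob("*.json"), key=lambda p: p.stat().st_mtime)
data = json.loads(latest.read_text())

rows = "".join(
    "<tr><td>{}</td><td>{:.6f}</td><td>{:.6f}</td></tr>".format(
        html.escape(b["name"]), b["stats"]["mean"], b["stats"]["stddev"]
    )
    for b in data["benchmarks"]
)
Path("benchmarks.html").write_text(
    "<table><tr><th>Name</th><th>Mean (s)</th><th>StdDev (s)</th></tr>{}</table>".format(rows)
)
```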

risoms commented Mar 22, 2023

@ionelmc Ideally I'm hoping to construct a table with both current (NOW) and historic comparison (--benchmark-compare) data. The pytest_benchmark_generate_json hook does not provide comparison data, while pytest_benchmark_group_stats does not provide commit information.

Since the benchmark.json ultimately DOES contain both commit information and benchmark details, is there a way to capture both comparison data and commit information via hook?

ionelmc (Owner) commented Mar 23, 2023

I think the only way would be to extend the pytest_benchmark_group_stats hook to also pass a commit_info argument...
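In the meantime, one untested workaround sketch is to stash the commit info passed to the existing pytest_benchmark_update_commit_info hook in conftest.py so the grouping hook can consult it; whether the hooks fire in that order in all code paths is worth verifying:

```python
# conftest.py -- hypothetical, untested sketch of the workaround
import pytest

_commit_info = {}


def pytest_benchmark_update_commit_info(config, commit_info):
    # pytest-benchmark calls this with the commit metadata it collected;
    # keep a reference so other hooks can consult it.
    _commit_info.update(commit_info)


@pytest.hookimpl(hookwrapper=True)
def pytest_benchmark_group_stats(config, benchmarks, group_by):
    outcome = yield  # let the default grouping run first
    groups = list(outcome.get_result())
    for group, rows in groups:
        # _commit_info (keys like "id", "time", "dirty") is in scope here.
        print("group:", group, "commit:", _commit_info.get("id"))
    outcome.force_result(groups)
```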

risoms commented Mar 23, 2023

@ionelmc I do think it would be useful.

What's the procedure for introducing something like this to your repo? (i.e. a fork?)

whyzdev commented Nov 17, 2023

See #230 (comment) for an example that uses the saved benchmark JSON to generate HTML tables in the pytest html output file, if it helps.
