lower level api #52
How come you don't wrap the …
I'm sorry, my example was incorrect; it's more like:

import asyncio
from timeit import default_timer

@asyncio.coroutine
def process_docs(
        docs_iterator, max_frequency, max_amount, processor,
        doc_index_in_args=1,
        duration_callback=None,
        *args, **kwargs):
    # throttle to at most max_frequency calls per second (assumed meaning of the parameter)
    wait_t = 1.0 / max_frequency
    total_processor_time = 0.0
    args = list(args)  # *args arrives as a tuple; copy to a list so the doc slot can be replaced
    for doc in docs_iterator:
        args[doc_index_in_args] = doc
        p_t0 = default_timer()
        processor(*args, **kwargs)
        processor_time = default_timer() - p_t0
        total_processor_time += processor_time
        if duration_callback:
            duration_callback(processor_time)
        yield from asyncio.sleep(wait_t)
def test_something(benchmark, event_loop):
    benchmark._mode = 'benchmark(...)'
    stats = benchmark._make_stats(1)

    def done_callback(future):
        # once the rate-limited primary task finishes, cancel the rest and stop the loop
        for task in unlimited_tasks:
            if not task.done():
                task.cancel()
        event_loop.stop()

    # primary_task_factory will call `process_docs` internally
    limited_task = event_loop.create_task(
        primary_task_factory(es_data, duration_callback=stats.update))
    limited_task.add_done_callback(done_callback)

    unlimited_tasks = []
    for i in range(SECONDARY_TASKS_CONCURRENCY):
        unlimited_tasks += [
            event_loop.create_task(secondary_task_factory(es_data))
            for secondary_task_factory in secondary_tasks_factories
        ]

    try:
        event_loop.run_forever()
    finally:
        event_loop.close()

    for task in [limited_task] + unlimited_tasks:
        if (not task.cancelled()) and task.exception():
            raise task.exception()

The processor is invoked multiple times (see the coroutine in my initial message): this will cause an exception in benchmark when wrapping the whole function with …. Wrapping only a "processor" with …
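For context, the benchmark fixture only allows a single benchmarked call per test, so a per-document "processor" that runs many times inside one test cannot simply be wrapped with it. A minimal sketch of that failure mode, with process_one_doc and the two documents as hypothetical stand-ins:

def test_cannot_wrap_repeated_calls(benchmark):
    # hypothetical stand-in for the real per-document processor
    def process_one_doc(doc):
        return len(doc)

    benchmark(process_one_doc, "doc-a")   # first call: fine
    benchmark(process_one_doc, "doc-b")   # second call: pytest-benchmark rejects reusing the fixture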
I still think something is wrong there - you shouldn't have runs with different data aggregated in the same benchmark. What's the point of this - getting how fast this processor is? My understanding is that you want to isolate the asyncio overhead (or whatever) and only get metrics for the "hot" part in your coroutines (the processor call). I think a better way would be to just isolate the processor …
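A minimal sketch of that suggestion, using a hypothetical stand-in for the processor and a single representative document, so only the hot call is measured and the asyncio/scheduling overhead stays out of the numbers:

def test_processor_only(benchmark):
    # hypothetical stand-ins for the real ES "processor" and one representative document
    doc = {"text": "representative document"}

    def process_one_doc(d):
        return len(d["text"])

    benchmark(process_one_doc, doc)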
I'm using pytest-benchmark for a kind of unrelated purpose: I'm testing the performance of an API that is part of our infrastructure. This is an ElasticSearch API, and I'm benchmarking search using the current index configuration and query structure. ES performance depends heavily on data diversity and also on concurrent queries. So I need to …
That's why I can't simply let pytest-benchmark rerun the call 500 times: I can't run parallel concurrent tasks with a controlled frequency using the current API. I have a workaround:
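The workaround is presumably the private-API approach from the test shown earlier in the thread; condensed to its core, and assuming a pytest-asyncio-style event_loop fixture, it looks roughly like this:

import asyncio
from timeit import default_timer

def test_es_search(benchmark, event_loop):
    # undocumented part: _mode and _make_stats are private pytest-benchmark
    # internals and may change between versions
    benchmark._mode = 'benchmark(...)'
    stats = benchmark._make_stats(1)

    @asyncio.coroutine
    def timed_call():
        t0 = default_timer()
        yield from asyncio.sleep(0)            # stand-in for the real rate-limited ES query
        stats.update(default_timer() - t0)     # feed one measurement into the benchmark stats

    event_loop.run_until_complete(timed_call())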
But this is not documented anywhere, and I think there may also be other use cases of pytest-benchmark where such a low-level API would be useful.
I need to read your comment a bit more, but have you looked at the manual mode? http://pytest-benchmark.readthedocs.io/en/latest/pedantic.html
Maybe my situation can be solved by using the pedantic mode …
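For reference, a minimal sketch of what pedantic mode could look like for the ES scenario, with the queries and run_search below as hypothetical stand-ins; setup runs before every round, is excluded from the timing, and can return fresh (args, kwargs) for the target:

import itertools

def test_search_with_pedantic(benchmark):
    # hypothetical stand-ins: cycle through a few "queries" so each round sees different data
    queries = itertools.cycle(["query-1", "query-2", "query-3"])

    def run_search(query):
        return query.upper()          # stand-in for the real ES call

    def setup():
        # untimed per-round setup; the returned (args, kwargs) are passed to run_search
        return (next(queries),), {}

    benchmark.pedantic(run_search, setup=setup, rounds=50, iterations=1)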
Hi guys, and thanks for your work on this project. Just adding some feedback that might be related to this issue. I/we are looking to benchmark the SSH handshake process. We have some tests that need extra setup/teardown before each run. The … The current code that we use is here: …, so maybe the … At the same time, I don't need to pass different arguments to the system under test, just setup and teardown code that is excluded from the benchmark counters. I hope it makes sense.
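A sketch of how the setup half of that could be expressed with pedantic mode, with FakeTransport and do_handshake as made-up stand-ins for the real SSH objects; the teardown half is done outside the measured calls here, since pedantic's setup hook has no teardown counterpart, which seems to be the gap being described:

def test_ssh_handshake_style(benchmark):
    # made-up stand-in for the real SSH transport
    class FakeTransport:
        def __init__(self):
            self.closed = False
        def handshake(self):
            return "done"
        def close(self):
            self.closed = True

    created = []

    def setup():
        # runs before every round, outside the timed section: fresh transport per run
        transport = FakeTransport()
        created.append(transport)
        return (transport,), {}

    def do_handshake(transport):
        return transport.handshake()

    try:
        # each handshake needs a fresh transport, so keep iterations=1 per round
        benchmark.pedantic(do_handshake, setup=setup, rounds=20, iterations=1)
    finally:
        # clean up only after all rounds, since there is no per-round teardown hook
        for transport in created:
            transport.close()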
Ref #264 - that should fix this.
I'm currently using your module to benchmark the performance of an external API. Currently this is implemented using several coroutines running concurrent parallel tasks, where one coroutine calls some "processor", and this one processor should be benchmarked. Using weave here is possible but problematic because of the complex "setup" and concurrency issues. I have a workaround to solve my problem: …
It would be great to have some "legal" and documented way to do similar things. I had to read the pytest-benchmark source code to do this. I think that a public API for similar tasks should be exposed: …