[chore] Fix MergeSplit logic, update tests to use Request #12170
Conversation
…ty sender Signed-off-by: Bogdan Drutu <[email protected]>
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main   #12170      +/-   ##
==========================================
- Coverage   91.82%   91.82%   -0.01%
==========================================
  Files         463      463
  Lines       24776    24773       -3
==========================================
- Hits        22750    22747       -3
  Misses       1644     1644
  Partials      382      382
==========================================

View full report in Codecov by Sentry.
Force-pushed from fc580d0 to 08e997e
Signed-off-by: Bogdan Drutu <[email protected]>
Force-pushed from 08e997e to 7e52446
Can we eliminate the cached item counts? I wonder whether they are a premature optimization.
req.setCachedItemsCount(req.ItemsCount() + req2.ItemsCount())
req2.setCachedItemsCount(0)
This makes me wonder what the real benefit of having a cached items count is? How many times do we expect that value to be read, aside from once by the observability sender?
Hi Joshua! This optimization comes from the observation that ItemsCount() is called repeatedly during merge-splitting and is a fairly expensive operation. Benchmarks show the cache saves 30-75% of CPU time for merging (see #12136 for details).
I understand your concern that an additional variable to bookkeep could be error-prone, but I would argue that no one except the batcher should be touching the request once it's created. We just need to make sure the cached items count is updated whenever the wrapped proto is updated.
I would prefer not to #12136
No need for a changelog since the cached items count was added this release.
Depends on #12169