Add retry logic to each batch method of the GCS IO #33539
base: master
Conversation
Force-pushed from b8304da to 637bc02
A transient error might occur when writing a lot of shards to GCS, and right now the GCS IO does not have any retry logic in place: https://github.com/apache/beam/blob/a06454a2/sdks/python/apache_beam/io/gcp/gcsio.py#L269

It means that in such cases the entire bundle of elements fails, and then Beam itself will attempt to retry the entire bundle, and will fail the job if it exceeds the number of retries.

This change adds new logic to retry only the failed requests, and uses the typical exponential backoff strategy.

Note that this change accesses a private method (`_predicate`) of the retry object, which we could avoid by basically copying the logic over here. But existing code already accesses the `_responses` property, so maybe it's not a big deal: https://github.com/apache/beam/blob/b4c3a4ff/sdks/python/apache_beam/io/gcp/gcsio.py#L297

Existing (unresolved) issue in the GCS client library: googleapis/python-storage#1277
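As a rough illustration of the idea (not the actual patch: the `run_batch` callback, the retry budget, and the bookkeeping below are simplified assumptions), retrying only the failed entries of a batch with exponential backoff could look roughly like this:

```python
import time

# DEFAULT_RETRY is the GCS client's standard retry policy; its private
# `_predicate` is what the description above refers to for deciding whether
# an error is retryable.
from google.cloud.storage.retry import DEFAULT_RETRY


def _run_batch_with_retry(pairs, run_batch, max_attempts=5):
  """Runs `run_batch` over `pairs`, retrying only the entries that failed.

  `run_batch` is a hypothetical callback that issues one batch request and
  returns a list of (pair, exception_or_None) results.
  """
  pending = list(pairs)
  final = {}
  delay = 1.0
  for _ in range(max_attempts):
    results = run_batch(pending)
    retryable = []
    for pair, exc in results:
      if exc is not None and DEFAULT_RETRY._predicate(exc):
        retryable.append(pair)   # transient failure (429/5xx): try again
      else:
        final[pair] = exc        # success, or a non-retryable error
    if not retryable:
      break
    pending = retryable
    time.sleep(delay)
    delay = min(delay * 2, 60.0)  # exponential backoff, capped
  else:
    # Out of attempts: record the last errors as-is.
    for pair, exc in results:
      final.setdefault(pair, exc)
  return final
```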
Force-pushed from 637bc02 to 489ea9f
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.
Assigning reviewers: R: @jrmccluskey for label python. If you would like to opt out of this review, comment accordingly.

The PR bot will only process comments in the main thread (not review comments).
Thanks for contributing to improve gcsio. It looks great to me from a brief look. I will take a closer look tomorrow.
```python
try:
  run_with_retry()
except GoogleCloudError:
```
What kind of exception do you want to skip here? The retry timeout error?
To keep the behaviour the same as before this change. `copy_batch` by itself doesn't throw any exceptions (at least in 429 cases). Without this line this function would throw an exception in cases when we ran out of retries.
Makes sense. I think my question is more like whether `GoogleCloudError` is the correct exception type to skip here. I think internally we are calling `Retry` under `google.api_core`. If that's true, then the exception type should be this: https://github.com/googleapis/python-api-core/blob/91829160815cd97219bca1d88cc19a72c9a6e935/google/api_core/exceptions.py#L76. I am not familiar with `GoogleCloudError` nor with what subtypes of exceptions it covers. That's why I would hope to see a test that simulates a retry failure.
Ok. I did some digging:

`GoogleCloudError` == `google.api_core.exceptions.GoogleAPICallError` <- `google.api_core.exceptions.GoogleAPIError`
`RetryError` <- `google.api_core.exceptions.GoogleAPIError`

`GoogleAPICallError` covers common errors (client errors like 404 or 429, and redirections): https://github.com/googleapis/python-api-core/blob/91829160815cd97219bca1d88cc19a72c9a6e935/google/api_core/exceptions.py#L263, but `RetryError` is its sibling class.
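That hierarchy can be checked directly with a standalone snippet (not part of the PR; it assumes a recent `google-cloud-core`, where `google.cloud.exceptions.GoogleCloudError` is based on `google.api_core.exceptions.GoogleAPICallError` as described above):

```python
from google.api_core import exceptions as api_exceptions
from google.cloud.exceptions import GoogleCloudError

# Both descend from GoogleAPIError...
print(issubclass(GoogleCloudError, api_exceptions.GoogleAPIError))           # True
print(issubclass(api_exceptions.RetryError, api_exceptions.GoogleAPIError))  # True
# ...but RetryError is a sibling, so `except GoogleCloudError` won't catch it.
print(issubclass(api_exceptions.RetryError, GoogleCloudError))               # False
```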
Hmm, so I guess I had an expectation that the `retry` object would raise the last exception, which would be one of the HTTP errors, but you are absolutely right that the `RetryError` is being raised instead.
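For reference, a minimal standalone sketch (not Beam code; parameter names as in recent `google-api-core`, where `timeout` replaced the older `deadline`) of that behaviour with `google.api_core.retry.Retry`: once the retry deadline is exhausted, the wrapper raises `RetryError`, with the last HTTP error attached as its cause rather than re-raised.

```python
from google.api_core import exceptions, retry


def always_rate_limited():
  # Simulates a call that keeps failing with HTTP 429.
  raise exceptions.TooManyRequests('rate limited')


retrying = retry.Retry(
    predicate=retry.if_exception_type(exceptions.TooManyRequests),
    initial=0.1, maximum=0.2, timeout=1.0)(always_rate_limited)

try:
  retrying()
except exceptions.RetryError as e:
  # The final 429 is wrapped, not re-raised directly.
  print(type(e.cause))  # <class 'google.api_core.exceptions.TooManyRequests'>
```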
```python
_fake_responses([200]),
_fake_responses([200, 429, 200]),
_fake_responses([200]),
_fake_responses([200]),
```
Could you also add a test case to cover when a retry error (e.g. timeout, or unknown errors during retrying) happens?
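A possible shape for such a case, following the `_fake_responses` pattern above (a hedged sketch only: the test name, `self.gcs`, `self.mocked_batch_responses`, and the way responses are fed to the fake client are placeholders for whatever the real harness in `gcsio_test.py` uses):

```python
from google.api_core import exceptions


def test_copy_batch_retries_exhausted(self):
  # Every attempt keeps returning 429, so the retry deadline is eventually
  # exceeded instead of the batch succeeding.
  self.mocked_batch_responses = [_fake_responses([429])] * 10
  with self.assertRaises(exceptions.RetryError):
    self.gcs.copy_batch([('gs://bucket/src', 'gs://bucket/dst')])
```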
I added some comments regarding the situation when retrying fails, but overall the code looks great to me. We should mention this improvement in our CHANGES.md. Could you add a line under "New Features / Improvements" in the next release (2.63.0)? Thanks!
The `RetryError` would always be raised, since the retry decorator catches all HTTP-related exceptions.
remind me after tests pass

Ok - I'll remind @sadovnychyi after tests pass