ci: optimize file syncing with s3 #381
Conversation
@pedrominatel PTAL
AWS_S3_BUCKET: ${{ secrets.PREVIEW_AWS_BUCKET_NAME }}
AWS_REGION: ${{ secrets.AWS_REGION }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

jobs:
Should we add a check to run the jobs only for PRs created under the Espressif namespace?
Strictly speaking, there is no need for this limitation. The workflow won't work anyway, because the secrets with our values won't be available in the context of the forked repo.
The absence of this limitation also makes it easier to test changes to this workflow in the testing environment.
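For reference, if such a restriction were ever wanted, it could be a one-line condition on the job. This is only a sketch, not part of the PR; the job name and the `espressif` owner value are assumptions:

```yaml
jobs:
  deploy-preview:
    # Hypothetical guard: skip the job for PRs opened from outside the
    # Espressif namespace (forks would not have the secrets anyway).
    if: ${{ github.repository_owner == 'espressif' }}
```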
I see, but we will always have failed jobs on forked repos, right?
I guess this workflow won't even be triggered, so, technically, there shouldn't be any failed jobs reported in forks.
Thank you for your replies, LGTM!
LGTM, thanks @f-hollow.
Description
This PR adds a workflow that compares checksums between the files described below and syncs only the diffs:

- `public/` files generated in CI for the latest commit
- `public/` directory already deployed to the S3 bucket

The first preview deployment takes the same amount of time as before. Subsequent deployments are reduced from the current 3 minutes to roughly 1 minute.
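A minimal sketch of the checksum-comparison idea (not the PR's actual script; the helper name and paths are illustrative). It checksums both trees and prints only the relative paths that would need to be re-uploaded:

```shell
#!/bin/sh
# list_changed BUILD_DIR DEPLOYED_DIR
# Prints relative paths of files whose checksum differs between the
# freshly built tree and a local mirror of the deployed tree.
list_changed() {
  build="$1"; deployed="$2"
  tmp_b="$(mktemp)"; tmp_d="$(mktemp)"
  # Per-file checksums, keyed by relative path, for each tree.
  (cd "$build"    && find . -type f -exec md5sum {} + | sort -k2) > "$tmp_b"
  (cd "$deployed" && find . -type f -exec md5sum {} + | sort -k2) > "$tmp_d"
  # Lines (checksum + path) present in the build listing but absent from
  # the deployed listing are new or modified files: only these need upload.
  awk 'NR==FNR {seen[$0]=1; next} !($0 in seen) {print $2}' "$tmp_d" "$tmp_b"
  rm -f "$tmp_b" "$tmp_d"
}
```

Each printed path could then be uploaded individually (e.g. with `aws s3 cp`) instead of letting `aws s3 sync` re-upload the whole tree.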
Issue to be solved
The `aws s3 sync` command compares files by their timestamp and size. This syncing approach doesn't work well with CI, which generates new files during every build. The AWS issue #9074 explains it all. There are numerous threads where users discuss this and suggest that the sync should be based on file checksums.
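For illustration, the default comparison and the size-only workaround discussed by users look like this (the bucket name is a placeholder):

```shell
# Default: compares size + mtime. A fresh CI checkout gives every file a
# new mtime, so unchanged files are re-uploaded on every build.
aws s3 sync public/ s3://example-preview-bucket/

# User-suggested workaround: compare size only. Faster, but a modified
# file that happens to keep the same byte count is silently skipped.
aws s3 sync public/ s3://example-preview-bucket/ --size-only
```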
Apparently, AWS has known about this issue for quite some time, but there haven't been any visible actions from their side. One common workaround suggested by users is the sync option `--size-only`; however, others warn that it might be a bad idea.

Resources
- `aws s3 sync` to sync modified files that still have the same size

Follow-up actions
The initial deploy time can also be reduced if we implement object redirects for files that don't usually change, which in our case is most of the files: previous articles, their media, etc.
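One possible shape of that redirect idea (hypothetical, not part of this PR; `--website-redirect` is a real `aws s3 cp` option, but the bucket and paths are made up):

```shell
# Instead of re-uploading an unchanged article on every deploy, publish a
# zero-byte object whose metadata redirects to the already-stored copy.
: > empty.html
aws s3 cp empty.html s3://example-preview-bucket/blog/old-article/index.html \
  --website-redirect /stable/blog/old-article/index.html
```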
Related
Testing
Thorough testing has been done here.
Checklist
Before submitting a Pull Request, please ensure the following: