Right now, stash/restic is configured with BackBlaze as a direct storage backend. This is fine except for the additional use of `prune: true`. It causes excessive download bandwidth, since restic downloads the entire repo for repacking after the prune. That bandwidth is (at least) 4x as expensive as the actual storage. More details in this thread.
Some ideas:
- Back up to a local target with prune enabled and simply replicate (i.e. 1:1 sync) that repository to BackBlaze afterwards (see the first sketch below).
- Pruning is intended to remove data from the bucket that is no longer referenced by any snapshot. Considering that the backups here are mostly append-only (databases, long-term storage of photos, documents, ...), running `restic forget` without `--prune`, or with a separate `--prune` schedule at significantly reduced frequency, should be fine (see the second sketch below). The bucket would potentially grow larger, as it would then contain files that are not part of any active backup/snapshot. But again: the storage cost is only a small fraction of the data transfer cost.
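A minimal sketch of the first idea, assuming a local repository under `/srv/restic` and an rclone remote named `b2` pointing at BackBlaze (paths, bucket name and retention values are placeholders, not taken from the actual setup):

```sh
# Back up and prune against the local repository only; no download traffic from B2
restic -r /srv/restic backup /data
restic -r /srv/restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

# Afterwards, mirror the repository 1:1 to BackBlaze (uploads and deletes only)
rclone sync /srv/restic b2:my-stash-bucket/restic
```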
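For the second idea, the split could look roughly like this (assuming `RESTIC_REPOSITORY`/`RESTIC_PASSWORD` are set; the retention policy and frequencies are only examples):

```sh
# Frequent (e.g. daily): drop old snapshot references, but do not repack anything
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12

# Rare (e.g. quarterly): actually reclaim unreferenced data; this is the expensive step
restic prune
```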
See #97. Turns out it's not bandwidth that is costly right now, but API calls to 'download file by name'... probably due to 'restic check' without cache?!
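If that is the cause, one possible mitigation (assuming a restic version whose `check` command supports it) is to let the check reuse the existing local cache instead of re-fetching metadata on every run:

```sh
# Reuse the existing local cache; only uncached data is read from the repository
restic check --with-cache
```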