Recover workloads using local PVs from data loss in the cloud #202
Comments
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale |
@cofyc any further work past the initial proposal? |
@cdenneen: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Wouldn't it already be enough as a first approximation to simply respect the pod eviction timeout? Just like pods get evicted after that timeout: if it is good enough to try and evict the pod, then it's good enough to try and evict the PVC, is it not? It does not handle the case of node reboots losing data (e.g. in GKE and EKS), but it does handle the case of a node going away entirely. Edit: ah wait, this will not work, as Kubernetes doesn't delete pods if the node isn't reachable; it only does so once the node object itself is deleted. In that case, I guess a first approximation that works both on cloud and on-prem is to auto-delete PVs that are bound to nodes that do not exist anymore. |
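For concreteness, here is a minimal sketch of that last idea (deleting local PVs pinned to nodes that no longer exist), written against client-go. It is not taken from the proposal doc, and the helper name hostnameForPV is made up for the example. It lists PersistentVolumes, skips anything that is not a local volume, reads the node name from the required kubernetes.io/hostname node affinity, and deletes the PV if the node object is NotFound. A real controller would also need to handle the still-bound PVC (and the pod stuck on the vanished node) so the StatefulSet can reschedule, and would want some grace period against nodes that are only briefly absent from the API.

```go
// Sketch of a reconciliation pass: delete local PVs whose node no longer exists.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// hostnameForPV returns the node a local PV is pinned to via its required
// node affinity on kubernetes.io/hostname, if there is exactly one value.
func hostnameForPV(pv *corev1.PersistentVolume) (string, bool) {
	if pv.Spec.NodeAffinity == nil || pv.Spec.NodeAffinity.Required == nil {
		return "", false
	}
	for _, term := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms {
		for _, expr := range term.MatchExpressions {
			if expr.Key == "kubernetes.io/hostname" &&
				expr.Operator == corev1.NodeSelectorOpIn && len(expr.Values) == 1 {
				return expr.Values[0], true
			}
		}
	}
	return "", false
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	pvs, err := client.CoreV1().PersistentVolumes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pvs.Items {
		pv := &pvs.Items[i]
		if pv.Spec.Local == nil {
			continue // only local volumes are tied to a single node's disk
		}
		nodeName, ok := hostnameForPV(pv)
		if !ok {
			continue
		}
		_, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			// The node object is gone, so the data on its local disk is gone too;
			// release the PV so the workload can be rescheduled with a new volume.
			fmt.Printf("deleting PV %s bound to missing node %s\n", pv.Name, nodeName)
			if delErr := client.CoreV1().PersistentVolumes().Delete(ctx, pv.Name, metav1.DeleteOptions{}); delErr != nil {
				fmt.Printf("failed to delete PV %s: %v\n", pv.Name, delErr)
			}
		case err != nil:
			fmt.Printf("failed to look up node %s: %v\n", nodeName, err)
		}
	}
}
```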
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
Exactly! |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
Update! We will be having a summer intern doing some work on this. We'll post more updates here as the project comes together. We may not organize kubernetes-sigs membership for the intern so the actual PR upstream may be indirected through another member of my team. Just fair warning. |
This is the tracking issue for some related problems in the cloud when a node is permanently deleted.
I'm thinking about writing a cloud controller to automate the recovery process. Here is the proposal: https://docs.google.com/document/d/1SA9epEwA3jPwibRV0ccQwJ2UfZXoeUYKyNxNegt0vn4
Related issues: