Problem after update from k3s version v1.30.4+k3s1 to v1.31.0+k3s1 #1630
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its standard rules. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its standard rules. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/remove-lifecycle rotten
Are you sure that the CSIDriver is still registered?

```
brandond@dev01:~$ kubectl get node -o wide
NAME           STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION   CONTAINER-RUNTIME
k3s-server-1   Ready    control-plane,master   69s   v1.31.4+k3s1   172.17.0.4    <none>        K3s v1.31.4+k3s1   6.8.0-1016-aws   containerd://1.7.23-k3s2
brandond@dev01:~$ kubectl get csidriver
NAME                       ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES       AGE
secrets-store.csi.k8s.io   false            true             false             <unset>         false               Ephemeral   5s
```
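For reference, those columns correspond to a CSIDriver object roughly like the sketch below, reconstructed from the kubectl output above (fields not shown in that output are omitted):

```yaml
# Sketch of the registered CSIDriver object, matching the columns printed above
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  attachRequired: false        # ATTACHREQUIRED
  podInfoOnMount: true         # PODINFOONMOUNT
  storageCapacity: false       # STORAGECAPACITY
  requiresRepublish: false     # REQUIRESREPUBLISH
  volumeLifecycleModes:        # MODES
    - Ephemeral
```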
What steps did you take and what happened:
Problem with secrets-store-csi-driver after updating k3s to v1.31.0+k3s1.

Before the update to v1.31.0+k3s1 everything was working fine; immediately after the update I got tons of warnings:

Nothing changed in the volume configuration or the CSI driver configuration:

The CSIDriver seems to be okay, and I have the latest version:

```
secrets-store.csi.k8s.io   false   true   false   <unset>   false   Ephemeral
```

Deleting a pod with the warning so it gets recreated leaves the pod stuck in the Init state because of a mount error.

After downgrading the cluster to the previous version (v1.30.4+k3s1), everything went back to normal and works again.
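For context, pods typically consume this driver through an inline CSI volume along the lines of the sketch below (illustrative only; the volume name and the vault-app SecretProviderClass name are placeholders, not taken from this report):

```yaml
# Illustrative inline CSI volume for the Secrets Store CSI driver
# (volume name and secretProviderClass value are placeholders)
volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: vault-app
```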
What did you expect to happen:
The driver should keep working as before with the new k3s version.
Anything else you would like to add:
Which provider are you using:
HashiCorp Vault
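A SecretProviderClass for the Vault provider generally looks like the minimal sketch below; the address, role name, and secret paths are placeholders, not values from this issue:

```yaml
# Minimal SecretProviderClass sketch for the HashiCorp Vault provider
# (all parameter values are placeholders)
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-app
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"
    roleName: "my-app-role"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/my-app"
        secretKey: "password"
```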
Environment: