All ingress controllers restart at the same time on configuration change #11083
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think if you post the outputs of the commands that show the real installed state, it will help. Also include logs. The new bug report template asks these questions, so it's best if you look at a new bug report template and then edit the issue description here to answer them.
Thanks @longwuyuan, I've updated the issue description accordingly :)
Thanks. I was hoping to see logs and
My config for the controller:
I created a deployment and then an ingress for it.
I was watching events and I saw the reload after adding the new ingress. I was also watching pods as well as tailing the controller pod logs, and unfortunately I did NOT see a restart. We also don't have many others reporting this, so we need to rule out whether something specific to your use case or your environment is causing the restart of pods.
/remove-kind bug
/triage needs-information
Thanks a lot, your last command actually helped me identify the issue! Watching the events I noticed that the controller gets killed for exceeding its memory limit during reloads. I resolved this by limiting the number of worker processes to 8 (the node has 8 CPU cores plus Hyper-Threading, so it defaulted to 16 worker processes) and raising the memory limit from 400Mi to 2Gi to cover these reload cases. Again, thanks for being so helpful and sorry for potentially wasting some of your time :)
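For reference, a minimal sketch of Helm values expressing the fix described above, assuming the standard ingress-nginx chart keys (controller.config maps to the controller ConfigMap); only the 8/2Gi values come from the comment, everything else is illustrative:

```yaml
controller:
  config:
    # ingress-nginx ConfigMap option: cap NGINX workers instead of
    # defaulting to one per detected logical CPU (16 with Hyper-Threading here)
    worker-processes: "8"
  resources:
    limits:
      # raised from 400Mi; during a reload old and new worker processes
      # briefly coexist, so memory usage spikes
      memory: 2Gi
```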
@DASPRiD sorry, not accepted :-) LOL (kidding). Really, glad to get feedback and info. Without feedback it's hard to know reality, so thank you. Glad the problem is solved. Watching events often provides insight, like this one. Have a great day.
What happened:
I have ingress-nginx installed in a single-node cluster as a Deployment with replicaCount set to 2 and a rolling update strategy with maxUnavailable set to 1.
Whenever the configuration is updated (for instance, when TLS entries are added), all controller pods emit
NGINX reload triggered due to a change in configuration
at the same time, and all of them restart. This takes between 5 and 10 seconds, during which no HTTP traffic is possible.

What you expected to happen:
Pods should restart one by one, so that the service always has at least one pod to talk to.
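For context, a hypothetical values excerpt matching the setup described above (standard ingress-nginx chart keys assumed; these are not the reporter's actual chart values):

```yaml
controller:
  replicaCount: 2
  updateStrategy:
    # rendered into the controller Deployment's spec.strategy
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```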
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version): v1.29.1+k0s

How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
Current State of the controller:
kubectl describe ingressclasses
kubectl -n ingress-nginx get all -o wide
kubectl -n ingress-nginx describe po
kubectl -n ingress-nginx describe svc
How to reproduce this issue:
Add a new Ingress resource with a TLS setting (I'm using cert-manager to automatically issue a certificate for it).
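A minimal sketch of such an Ingress (hostname, issuer, service, and secret names are illustrative placeholders, not taken from the report):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # hypothetical cert-manager issuer; cert-manager populates the TLS secret below
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.example.com
      secretName: example-tls
  rules:
    - host: example.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
```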
Anything else we need to know:
Helm template