constantly see "updating Ingress status" in controller logs without anything changing in the cluster #10972
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
/triage needs-information
thanks for the reply @longwuyuan,
Can you please be a bit more specific about what is missing? Also, which part is hard to read? I would like to update the description, but I'm not sure I understand what is missing.
I have already attached controller logs in the first section. Please let me know if those are not sufficient or if a different debug level is needed.
I also added the describe output of the following two:
The only thing missing now from the template is a list of all ingresses; let me know if it is needed so I can share it in private. We have around 1000 ingresses.
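For reference, a cluster-wide list like the one mentioned above can usually be produced with kubectl alone; a minimal sketch, with no ingress-nginx-specific assumptions:

```bash
# List every Ingress across all namespaces; -o wide also shows class, hosts and address.
kubectl get ingress --all-namespaces -o wide

# Quick count, to confirm the ~1000 figure.
kubectl get ingress --all-namespaces --no-headers | wc -l
```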
if i "watch" the output of
any idea what might trigger the |
one more observation: from the logs I see that
I am reading and will comment shortly, but you can see that the output of
ping me on K8s Slack and send me a Zoom session id .. there is much to comment on
ok, as per the Zoom session, you have 2 controller instances installed in the same namespace (same cluster) but only one ingress class. That is not going to work because both controllers will process all the ingresses. You got the docs link for how to install multiple instances of the controller in the same cluster, so please follow that, and also please update the issue description as per our discussion regarding the questions in the new issue template. thnx
Thanks for the help @longwuyuan, the issue indeed was not there anymore when we were using a single controller. https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
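For readers landing here with the same setup: the fix discussed above is to give each controller instance its own IngressClass, as described in the multiple-ingress guide linked above. A minimal sketch using the ingress-nginx Helm chart, where the release name, namespace and class names are placeholders chosen for illustration:

```bash
# Install a second controller instance with its own IngressClass, so the two
# instances do not both reconcile (and rewrite the status of) every Ingress.
helm install ingress-internal ingress-nginx/ingress-nginx \
  --namespace ingress-internal --create-namespace \
  --set controller.ingressClassResource.name=internal-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/internal-nginx \
  --set controller.ingressClass=internal-nginx
```

Each Ingress then selects exactly one instance via spec.ingressClassName, and only that instance processes it and publishes its status.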
What happened:
After upgrading ingress-nginx from 1.4.0 to 1.8.5 we noticed a lot of those "updating Ingress status" messages in the logs (no debug). We have dedicated ingress nodes. The map newValue contains the IPs of the ingress nodes, and the map currentValue the IPs of the worker nodes (see the sketch after this report for how to inspect the published addresses).

What you expected to happen:
I do not expect to see this in the logs since nothing is changing. This is very likely triggering a constant reload of ingress-nginx since new configuration is detected.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):

Kubernetes version (use kubectl version):

Environment:
uname -a:
We use terraform and this module to deploy the cluster.
How to reproduce this issue:
updating Ingress status
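As referenced in the report above, one way to see what the controller is actually publishing into each Ingress (the addresses behind the newValue/currentValue maps in the log message) is to read the status field directly; a minimal sketch, assuming the default release name and namespace ingress-nginx:

```bash
# Addresses the controller has written into one Ingress' status.
kubectl get ingress <name> -n <namespace> \
  -o jsonpath='{.status.loadBalancer.ingress}{"\n"}'

# Follow the status-update messages live to see how often they fire.
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller -f \
  | grep "updating Ingress status"
```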