Ingress controller processing ingresses with different ingressClassName when multiple ingress controllers are deployed in the same namespace - AWS EKS 1.27 #10907
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
/triage needs-information
Sorry, I posted by mistake before completing the form. Please let me know if there's anything else I need to add.
Hi, I can see the
Thanks for your reply.
I'm sure the rules aren't used for routing, but I have a large number of ingresses that get pointlessly evaluated, causing an increase in load for one of the three pods in the deployment, which leads to restarts (every time there is a batch of "Ignoring ingress" errors in the logs, one of the pods restarts).
I have followed that guide and double-checked the configuration multiple times.
That's confirmed: the two ingress controllers only process their own ingress rules. The issue is the "Ignoring ingress" errors.
Thanks for your effort. I believe the restarts are due to the number of ingress resources being evaluated. I have a similar setup in 3 separate regions:
region 1:
region 2:
region 3:
Could you please re-run your test with a higher number of ingresses? I'm not sure why there is no correlation between the number of ingress resources and the number of restarts; I will try and look at the traffic in region 2 vs region 1.
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
@mdellavedova were you able to resolve this issue? Can you provide logs from the controller pod that restarts? Are there resource issues or errors from the kubelet/API server that show why it would be restarting? Do you have limits on the pods that are being hit?
What happened:
I have 2 ingress controllers deployed in the same namespace, set up following the instructions in these documents:
https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-cant-use-multiple-namespaces-what-should-i-do and https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-controllers
The ingresses work as expected, but when I look at the logs for one ingress controller I can see multiple "Ignoring ingress" errors suggesting that the ingress controller is considering ingresses that belong to the other ingress controller, and vice versa. This creates a high load on (one of) the ingress controller's pods, causing it to restart.
What you expected to happen:
I would expect both ingress controllers to ignore ingresses which don't have their associated ingressClassName.
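For context, a minimal sketch of the kind of setup described, using hypothetical class and controller names (the actual manifests are not included in this issue): each release owns its own IngressClass with a distinct spec.controller value, and every Ingress selects exactly one class via spec.ingressClassName.

```yaml
# Hypothetical names for illustration; the real names come from the Helm values.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-public
spec:
  controller: k8s.io/ingress-nginx-public   # claimed by the public release
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx-internal # claimed by the internal release
```

With this layout each controller reconciles only the Ingresses whose class points at it; the "Ignoring ingress" lines described above come from objects of the other class that are still watched and then filtered out.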
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
NGINX Ingress controller
Release: v1.8.1
Build: dc88dce
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
I have also tried the latest available helm chart, which didn't help
NGINX Ingress controller
Release: v1.9.5
Build: f503c4b
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
Kubernetes version (use kubectl version):
Environment:
uname -a: Linux ip-10-229-145-39.eu-west-1.compute.internal 5.10.198-187.748.amzn2.x86_64 #1 SMP Tue Oct 24 19:49:54 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
How/where the cluster was created (kubeadm/kops/minikube/kind etc.): AWS EKS 1.27.
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
The ingress controller(s) were installed using ArgoCD (which in turn uses helm). Helm values below:
nginx-public-nlb-tls
ingress-controller-internal-nginx:
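The values themselves did not survive in this copy of the issue. As a hedged sketch only (the names and settings below are assumptions, not the reporter's actual values), two coexisting releases of the ingress-nginx chart are typically differentiated like this:

```yaml
# nginx-public-nlb-tls -- assumed sketch, not the actual values from the issue
controller:
  electionID: nginx-public-leader            # leader-election ID; must be unique per release
  ingressClass: nginx-public                 # class name this controller claims
  ingressClassResource:
    enabled: true                            # let the chart create the IngressClass
    name: nginx-public
    controllerValue: "k8s.io/ingress-nginx-public"
---
# ingress-controller-internal-nginx -- assumed sketch
controller:
  electionID: nginx-internal-leader
  ingressClass: nginx-internal
  ingressClassResource:
    enabled: true
    name: nginx-internal
    controllerValue: "k8s.io/ingress-nginx-internal"
```

These are the settings (controller.electionID, controller.ingressClass, controller.ingressClassResource.*) that the multiple-ingress-controllers guide linked above revolves around.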
If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used.
If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances.
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag.
Others:
kubectl describe ... of any custom configmap(s) created and in use
How to reproduce this issue:
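The reproduction steps were left blank. Based on the report, a plausible minimal reproduction (hypothetical names matching the sketches above) is to install the two releases and create one Ingress per class, then watch each controller's logs for "Ignoring ingress" messages about the other class's objects:

```yaml
# Hypothetical Ingresses, one per class; hosts and backend Services are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-app
spec:
  ingressClassName: nginx-public       # handled by the public controller
  rules:
    - host: public.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: public-app
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
spec:
  ingressClassName: nginx-internal     # handled by the internal controller
  rules:
    - host: internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80
```

Per the reporter's comments, the effect only became visible at a much larger ingress count, so replicating the load may require creating many such objects.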
Anything else we need to know: