Ingress nginx controller changed load balancer when updating managed nodegroups AWS EKS #11116
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
/triage needs-information
/assign @strongjz
This is stale, but we won't close it automatically. Just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
/close
@longwuyuan: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
Hi, firstly I'm not sure if this is a bug or not, but I would very much appreciate help in understanding the behaviour.
In the process of updating node images we replaced the existing managed nodegroup with a new one running the updated image.
On deleting the old managed nodegroup, the target groups changed to a different load balancer. This caused an outage of our services until we could diagnose the DNS change.
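For reference, the replacement was along these lines; this is a minimal sketch, and the cluster and nodegroup names are placeholders rather than our real ones:

```shell
# Create a replacement managed nodegroup on the updated node image
eksctl create nodegroup --cluster my-cluster --name workers-v2

# Drain the old nodes so workloads (including the ingress-nginx pods)
# reschedule onto the new nodegroup
kubectl drain --selector eks.amazonaws.com/nodegroup=workers-v1 \
  --ignore-daemonsets --delete-emptydir-data

# Delete the old managed nodegroup - the target groups moved to the
# other load balancer after this step
eksctl delete nodegroup --cluster my-cluster --name workers-v1
```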
What you expected to happen:
We would not expect a change of nodes, or of node image version, to trigger a change in the ingress Service's load balancer.
We have performed similar operations many times and have not experienced this behaviour.
We opened a ticket with AWS support and spoke to the EKS team; they were able to confirm that the EKS controller had switched the targets.
The best explanation they could come up with was that we have 2 load balancers with the same name (different full DNS names).
Is it possible that this could be a reason for the change?
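For what it's worth, this is roughly how we check which load balancer the controller Service is pointing at; the namespace and Service name are the chart defaults and may differ in other installs, and the hostname in the dig command is a placeholder:

```shell
# Hostname of the load balancer currently reported by the controller Service
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Compare with what our ingress DNS records actually resolve to
dig +short <our-ingress-hostname>
```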
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.): v1.9.4 (app version from the Helm release ingress-nginx-4.8.3 shown below)
Kubernetes version (use kubectl version): Server Version: v1.27.9-eks-5e0fdde
Environment:
Cloud provider or hardware configuration: AWS
OS (e.g. from /etc/os-release): n/a
Kernel (e.g. uname -a): n/a
Install tools: eksctl
Basic cluster related info:
kubectl version: 1.27
kubectl get nodes -o wide
How was the ingress-nginx-controller installed: Helm
NAME           NAMESPACE      REVISION  UPDATED                                STATUS    CHART                APP VERSION
ingress-nginx  ingress-nginx  2         2023-10-31 09:03:19.593721 +0000 UTC  deployed  ingress-nginx-4.8.3  1.9.4
USER-SUPPLIED VALUES: null
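For completeness, the install itself was a plain Helm install along these lines (reconstructed from memory; everything is left at the chart defaults):

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```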
Current State of the controller:
kubectl describe ingressclasses
Anything else we need to know:
As I said earlier, we seem to have 2 load balancers tagged for the ingress-nginx-controller Service with the same name. I'm not sure of the mechanism, but I feel like this is an important part of it.
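A sketch of how the duplicate shows up from the AWS side, assuming these are Classic ELBs created by the in-tree service controller (the kubernetes.io tags are the ones that controller applies; the load balancer name below is a placeholder):

```shell
# List classic load balancer names in the account/region
aws elb describe-load-balancers \
  --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text

# Inspect the Kubernetes tags on a candidate load balancer; two entries
# tagged with the same kubernetes.io/service-name would match what we see
aws elb describe-tags --load-balancer-names <lb-name> \
  --query 'TagDescriptions[].Tags[?starts_with(Key, `kubernetes.io`)]'
```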