Service cluster ip changed but ingress-controller not update the ip #10689
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@mjhalxx We don't test gloo. I am using controller v1.9.4 with a simple pod using the image nginx:alpine, and this problem is not reproduced. You can try to post the answers to the questions asked in a new issue template; it will help to create some actionable comments.
/remove-kind bug
/triage needs-information
@longwuyuan You don't need to use gloo; you can use any service. Instead of accessing the pod directly, you access the service. If you delete the service and then create a service with the same name, problems will arise. We do not restart the pod, we delete the service (see the sketch below).
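A minimal sketch of the scenario being described, assuming a Deployment, a ClusterIP Service and an Ingress all named demo already exist (the names are placeholders, not taken from this issue):

# Delete the Service and recreate it under the same name;
# it receives a new ClusterIP and new Endpoints.
kubectl delete service demo
kubectl expose deployment demo --port=80 --name=demo

# Requests through the existing Ingress for "demo" now fail (504 in this report)
# until the ingress-nginx controller pod is restarted.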
@longwuyuan The ingress-nginx usage scenario is as follows: when the service IP changes, ingress-nginx does not update it:
Deleting a backend service and then recreating a replacement for it, while it is already referenced in an Ingress object, is not a use-case that is tested in the CI. It is better to use a supported, well-planned workflow and use-case: for example, you create an app and then you create a Service of type ClusterIP for the app. Then you expose the app's Service with an Ingress. Later, for any future maintenance of the app, just change the image in the app pod (see the sketch below).
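A sketch of that workflow, with placeholder names (demo, demo.example.com and the image tags are illustrative, not from this thread):

# Create the app, a ClusterIP Service for it, and an Ingress exposing that Service.
kubectl create deployment demo --image=nginx:alpine
kubectl expose deployment demo --port=80
kubectl create ingress demo --class=nginx --rule="demo.example.com/*=demo:80"

# For later maintenance, change only the pod image; the Service and its
# ClusterIP stay untouched, so the Ingress keeps working.
kubectl set image deployment/demo nginx=nginx:1.25-alpine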
@longwuyuan Yes, but sometimes the backend service has a problem; we fix it and reinstall it, and then we find it cannot be reached again, and we have to restart ingress-nginx to resolve it.
That is not normal. Most people change the image in the app. If you need to keep changing the Service of type ClusterIP or other Kubernetes objects to fix an app, then the problem is outside the ingress controller. What do you mean by reinstall? Do you delete the deployment/statefulset/daemonset and recreate it?
@longwuyuan We use gloo; it is installed and uninstalled with helm, and when we upgrade gloo it gets reinstalled.
I don't know what gloo is, but do you know if it deletes the Service of type ClusterIP? If it does, then this is bad design. You should find a way to only change the image in the pod (and any other pod specs) instead of deleting the Service of type ClusterIP.
@mjhalxx Regarding your location /apis block: internally ingress-nginx uses Lua, so if we create an Ingress resource, we typically see something like the below rendered in nginx.conf.
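(A representative reconstruction, not the reporter's actual configuration; the namespace, ingress and service names are placeholders.)

# Backend selection happens in Lua at request time, not via a static upstream IP.
upstream upstream_balancer {
    server 0.0.0.1;                    # placeholder address, never used
    balancer_by_lua_block {
        balancer.balance()
    }
}

location /apis {
    set $namespace      "default";     # placeholder values
    set $ingress_name   "my-ingress";
    set $service_name   "my-service";
    set $service_port   "80";
    proxy_pass http://upstream_balancer;
}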
So more information on how you are creating the Ingress resource will help.
@bmv126 The actual configuration is as you said; some parts are omitted and the main configuration is retained. What really confuses me is that the upstream here uses the service domain name instead of the IP directly. Why can't the service IP be updated in real time after it changes, and why does the controller need to be restarted before it is updated?
What happened:
k -n network logs -f ingress-nginx-controoler-75dbd7bdd7-tr76b
[xxxx] "GET /apis/xxxx" 504 xxxx 10.43.239.132:80 0 60.001 504
What you expected to happen:

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

Kubernetes version (use kubectl version):

Environment:

Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
  Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
Basic cluster related info:
  kubectl version
  kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
  helm ls -A | grep -i ingress
  helm -n <ingresscontrollernamespace> get values <helmreleasename>
Current State of the controller:
  kubectl describe ingressclasses
  kubectl -n <ingresscontrollernamespace> get all -A -o wide
  kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
  kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
  kubectl -n <appnamespace> get all,ing -o wide
  kubectl -n <appnamespace> describe ing <ingressname>
Others:
  kubectl describe ... of any custom configmap(s) created and in use

How to reproduce this issue:

Anything else we need to know: