Admission webhook endpoint is called from other ingressClass #10985
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
/triage needs-information
/label bug

It's a non-custom vanilla installation as a chart dependency, and the bug is obvious.
/assign @Gacko
Is the ingress-nginx controller chart a sub-chart of your chart release?
Yes, it is.
Thanks!
Also, highly significant in this kind of use case: the questions asked in a new bug report for this project cover several critical aspects that a reader needs in order to make useful, data-driven comments. So if you read the questions in the new-bug-report template and then edit this issue description with the answers to them, it will be very useful to readers.
Thanks for the guidance. Our use case occurs in our test environment: our product is tested on an Azure AKS cluster with an agent pool that can scale from 0 to 40 nodes, and multiple tests can run in parallel on each of those nodes. In front of our product we use an Ingress to terminate SSL. external-dns recognizes new Ingress rules and creates private DNS entries; once those exist, a test can be started. The Ingress talks HTTP to our application.

I also tried using just one Ingress, but the problem there is that an Ingress rule change forces an nginx restart, so while one test is running, one of the next tests starting up destroys it. Creating all needed Ingress rules before any test starts doesn't feel right, but would be possible. On the other hand, each test should be as isolated as possible, so one test with one ingress controller should be better.

My workaround for now is to disable the admission webhooks. During my test, 4 of 8 tests started well in the beginning; then some tests began to fail due to the wrong endpoint being used. Multiple vanilla installations would be possible, but not different from having multiple subchart deployments.

Here is also our Ingress rule template file, which I missed adding initially:

That's all I have for now. Please tell me if you need more.
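For reference, the workaround mentioned above can be expressed as chart values. This is a sketch: the key path assumes the ingress-nginx chart is declared as a dependency named `ingress-nginx`, and uses the chart's `controller.admissionWebhooks.enabled` value:

```yaml
# values.yaml of the parent chart (sketch; the top-level key assumes
# the ingress-nginx chart is a dependency named "ingress-nginx")
ingress-nginx:
  controller:
    admissionWebhooks:
      # Skips creation of the ValidatingWebhookConfiguration for this
      # release, so the API server never calls this release's webhook.
      enabled: false
```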
/retitle no endpoints available error on subchart use-case
On a different note, your use of the words "other ingressClass" implies you may have multiple instances of the ingress-nginx controller in one cluster. That is yet another, wholly different ballgame. But because the questions asked in the new-bug-report template have not been answered, we cannot be certain (one of the questions in the template asks for related information).
Marco is going to look deeper into this. That said, if I'm reading this correctly, it should validate the ingress class. The admission control code is simple: the HandleAdmission function should validate that the ingress class matches in this function. The way it looks, all ingress controllers would need to be available; that may be the bug.
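A minimal sketch of the class check described above (names are illustrative, not the actual ingress-nginx internals): an admission handler would only validate objects whose ingress class matches the class this controller instance watches, and skip everything else.

```go
package main

import "fmt"

// shouldValidate is a hypothetical helper illustrating the idea: an
// admission handler skips Ingress objects that belong to a different
// ingress class than the one this controller instance watches.
func shouldValidate(watchedClass, objectClass string) bool {
	return objectClass == watchedClass
}

func main() {
	fmt.Println(shouldValidate("class-a", "class-a")) // matching class: validate
	fmt.Println(shouldValidate("class-a", "class-b")) // foreign class: skip
}
```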
Hey!

I've already seen this issue in the past and I sadly have to say: everything is working as designed here. It might not be perfect, but it is the way Kubernetes, and especially webhooks, work.

Our chart comes with a validating webhook configuration. This configuration tells the Kubernetes API server to reach out to a service whenever something happens to the resource types specified in the configuration. One can add multiple webhook configurations dealing with the same resources. This is what happens here: you have installed Ingress NGINX at least twice, and each of the installations comes with a webhook configuration. So the API server is told to reach out to both of them whenever something happens with Ingress resources.

The webhook of Ingress NGINX is handled by the controller itself, so each Ingress NGINX controller pod is an eligible endpoint for the webhook service the API server is calling.

Unfortunately, the Kubernetes webhook configuration API is not aware of resource-specific information, so it cannot be limited based on something inside the resource, such as the ingress class. The only ways to limit whether a webhook is called for a specific resource are the label-based namespaceSelector and objectSelector fields. Luckily, Ingress NGINX itself checks whether the ingress class of a resource matches its own before actually validating it.

As you might tell by now, this either requires running at least one endpoint per webhook service all the time, or removing the webhook configuration if there are no endpoints available.

As this is nothing we can change or improve in the Ingress NGINX project, and things work as designed, I'm closing this issue for now. I already came across this with one of the customers of the company I'm working for, back when I was actively supporting OSS Ingress NGINX for that company, and we sadly weren't able to find a better solution there.

Feel free to ask further questions!

Kind regards
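To illustrate the point about webhook configurations being scoped only by resource type and labels, here is a sketch of roughly what each release's webhook configuration looks like. Field names follow the Kubernetes admissionregistration.k8s.io/v1 API; the concrete resource names, namespace, and selector labels are illustrative, not taken from the actual chart:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: my-release-ingress-nginx-admission   # one per chart release (illustrative)
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]              # matches ALL Ingresses, regardless of class
    clientConfig:
      service:
        name: my-release-ingress-nginx-controller-admission
        namespace: my-namespace
        path: /networking/v1/ingresses
    # The only available filters are label-based; there is no field
    # that can match on an Ingress's spec.ingressClassName:
    # objectSelector:
    #   matchLabels:
    #     app.kubernetes.io/instance: my-release
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```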
/close |
@Gacko: Closing this issue. In response to this:
What happened:
I have some failing helm chart installations like:
So I try to install iste-run-tp221-ts4366 with ingressClass "ingress-class-iste-run-tp221-ts4366" and it fails due to another ingress with class "ingress-class-iste-run-tp221-ts4368".
Here is the configuration:
HELM_JOB_NAME could be e.g. "iste-run-tp221-ts4366" or "iste-run-tp221-ts4368", ...
Our Ingress backend values.yaml:
What you expected to happen:
One ingress controller's admission webhook shouldn't be called for Ingresses of another controller when the two are differentiated by ingressClass.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):

Chart dependency entry:
version: 4.9.0
repository: https://kubernetes.github.io/ingress-nginx
condition: ingress-nginx.enabled
Pod image: registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e
Kubernetes version (use kubectl version):
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.3
Environment:
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.18.5
PRETTY_NAME="Alpine Linux v3.18"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
uname -a:
Linux -8567f45f9c-lnxjb 5.15.0-1049-azure #56-Ubuntu SMP Wed Sep 20 12:34:34 UTC 2023 x86_64 Linux