Incorrect handling of long URLs [draft] #11243
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
/kind support
Hello @longwuyuan!
ok. Can you also describe why your URL is so long?
@longwuyuan well, it's a very bad choice by the end app developers: they are passing an OpenID token via a URL parameter -.-
ok, thank you for the info. That explains the use case.
i am checking the nginx specs and the HTTP specs. Maybe you can do the same. This project's code will not set that limit, for sure. cc @tao12345666333 @rikatz in case you already know the spec limit for an HTTP URL length.
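For what it's worth, the HTTP spec does not mandate a hard URL-length limit; in practice nginx bounds the request line via its header buffer settings. A minimal sketch for checking the rendered config inside the controller pod (the namespace and pod name below are placeholders, adjust for your install):

```shell
# Placeholder pod name: substitute a real controller pod from your cluster.
# Dumps the buffer-related directives from the nginx.conf the controller rendered.
kubectl -n ingress-nginx exec <ingress-controller-pod> -- \
  cat /etc/nginx/nginx.conf | grep -i buffer
```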
If you already have the complete error message from the controller logs, please copy/paste it here.
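In case it helps, a minimal sketch for pulling candidate lines from the controller logs, assuming the Helm chart's default namespace and labels (adjust both for your install):

```shell
# Assumes the chart's default namespace and label selector; adjust as needed.
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=500 \
  | grep -iE '414|uri too large|505'
```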
If it is this https://www.slingacademy.com/article/nginx-error-414-request-uri-too-large-causes-and-solutions/ then the recommended solution is to increase nginx's header buffer sizes.
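That fix maps onto this controller's large-client-header-buffers ConfigMap option. A hedged sketch, assuming the Helm chart's default ConfigMap name and namespace (the "4 16k" value is an illustrative bump over nginx's default of "4 8k"):

```shell
# Assumes the ConfigMap is named ingress-nginx-controller in the
# ingress-nginx namespace (Helm chart defaults); adjust for your install.
kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"large-client-header-buffers":"4 16k"}}'
```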
Also, controller v1.1.1 is not supported anymore. Is that the real version of the controller in use?
Hey @longwuyuan |
Also, still waiting on the exact and complete error message lines.
Hello! So, sometimes the packet would arrive in full, and sometimes it would arrive in two fragments. The problem was with the server: when the packet was fragmented, it received the first fragment and (because of a bug in their code) started to interpret that half of the packet as the whole thing. The HTTP headers were actually located at the end of the packet, so they were in the second fragment. The server failed to process the first half correctly and just replied with 505 instead of waiting for the rest to arrive and processing the request in full.
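For future readers, one way to confirm this kind of split is a packet capture on the backend side, watching whether the request line and the headers land in different TCP segments. A minimal sketch (the interface and port are placeholders):

```shell
# Placeholders: swap eth0 and 8080 for the backend's interface and port.
# -A prints each segment's payload, so you can see where the request line
# ends and whether the headers arrive in a later segment.
tcpdump -i eth0 -nn -A 'tcp port 8080'
```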
Thanks for the update. Your solution of putting an nginx webserver on the node sounds like an anti-pattern, but since I am looking at this from the outside, I would not know any better.
Hey @longwuyuan |
thanks for updating. It helps future readers.
What happened:
I am getting intermittent HTTP errors when querying services via ingress if the URL is longer than ~2000 characters.
What you expected to happen:
Idempotency (no intermittent errors when querying the same stateless endpoint)
NGINX Ingress controller version:
v1.1.1
Kubernetes version (use kubectl version):
Environment:
Cloud provider or hardware configuration: Baremetal
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Please mention how/where the cluster was created (kubeadm/kops/minikube/kind, etc.).
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
Others:
kubectl describe ...
of any custom configmap(s) created and in use
How to reproduce this issue:
Make the same curl GET request to a web service in k8s 30 times in a row. The URL has to be long (over 2000 characters). Observe the 200/error rate (in my case the error is 505, but I suspect it might differ by application). A reproduction sketch follows below.
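A minimal reproduction sketch, with a placeholder hostname and query parameter (swap in a real ingress-backed endpoint):

```shell
# Placeholders: replace the URL with a real endpoint behind the ingress.
# Builds a >2000-character query value, fires 30 identical GETs,
# and tallies the returned status codes.
LONG_VALUE=$(printf 'a%.0s' $(seq 1 2100))
for i in $(seq 1 30); do
  curl -s -o /dev/null -w '%{http_code}\n' \
    "https://example.internal/app?token=${LONG_VALUE}"
done | sort | uniq -c
```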
Anything else we need to know:
When I make the same curl GET request 30 times in a row, ~10 times (about 30%) I get an HTTP 505 error, and the other ~20 requests return 200 OK.
Some relevant info: