Unable to maintain client IP with SSL Passthrough in Ingress NGINX Controller #10706
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
What cloud loadbalancer are you using? That determines which method to use for getting the client IP, e.g. L4 proxy protocol.
/triage needs-information
We have a very similar configuration and the same issue; in the end, the logs show the ingress IP and not the client IP. The issue is that in the Apache logs we see only the Ingress IP as the client's, and the same IP is visible in the application that runs behind Apache. Here are some of the configs we have. Ingress-nginx values.yaml for Helm:
but I also tested the following entries:

None of those works. In the Ingress we have added the following annotations:
We need to set it up like this because the SSL certificate needs to be applied at the Apache level; one of the Apache modules we use can't work without SSL. I tested multiple configurations, and in the end I can't see the client IP in any logs. I even ran tcpdump on Apache to see what arrives in the headers, but there is no IP other than the Ingress one.

Another issue is that with the above configuration the ingress logs show no information about access to our application's web page :-) probably because we use that ssl-passthrough, but that's something for a different bug report.

Any help would be welcome. Does anyone have a similar configuration and managed to pass the client IP through? I'll just add that if we bypass the ingress and connect the Apache service directly to the LoadBalancer, we see the client IP in the logs and everything works fine, so the issue is somewhere in nginx: either a configuration detail we're missing or a bug preventing nginx from passing that IP properly.
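For context, a minimal sketch of the Ingress annotation typically used for SSL passthrough with ingress-nginx; the hostname, resource names, and port below are placeholders, not taken from the reporter's setup, and the controller itself must also be started with the `--enable-ssl-passthrough` flag for the annotation to have any effect:

```yaml
# Hypothetical Ingress using SSL passthrough; all names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apache-passthrough
  annotations:
    # Forward the raw TLS stream to the backend instead of terminating it
    # at the controller (requires --enable-ssl-passthrough on the controller).
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache
                port:
                  number: 443
```

Because passthrough operates below HTTP, annotations that manipulate HTTP headers (like backend-protocol) have no effect on this traffic.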
@mesiu84 remove the backend-protocol annotation, as I am not sure it helps when there is no backend but a direct passthrough. Also, your post does not say which cloud this is on. Which cloud is it?
And also add the |
@longwuyuan I can remove backend-protocol, but it doesn't change anything, so it's not a big change for me; it's there mostly to be sure that https is used in the end. And adding the annotation you pointed out doesn't change anything either; I still see the ingress IP in the Apache logs :-(
@mesiu84 my test works, so to test your environment:
The question about which cloud has been asked twice, but the answer has not been posted even once, so it is not possible to know if you have proxy-protocol enabled on the cloud/infra LB. For retaining the client IP, enabling proxy-protocol is required on both the controller and the LB: https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
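On the controller side, that boils down to a couple of documented ingress-nginx ConfigMap keys; the Helm values fragment below is a sketch under the assumption that the cloud LB is actually sending the PROXY protocol header (the LB side is configured separately, per cloud), not the reporter's actual config:

```yaml
# Hypothetical ingress-nginx Helm values fragment enabling proxy protocol.
controller:
  config:
    # Tell nginx to expect the PROXY protocol header from the cloud LB.
    # Must only be enabled when the LB really sends it, or TLS/HTTP breaks.
    use-proxy-protocol: "true"
    # Derive the client address from the forwarded connection data.
    enable-real-ip: "true"
  service:
    # Avoid SNAT by kube-proxy so the LB's traffic keeps its source address.
    externalTrafficPolicy: Local
```

Both sides must agree: proxy protocol enabled on the LB but not the controller (or vice versa) results in garbage bytes at the start of every connection.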
[DONE]
[DONE]
UpCloud
If you need it, here is a config for the UpCloud LoadBalancer; it's from their official documentation and works perfectly, but there is nothing special here.
Thanks.
I'm not using curl; I'm opening a page in a browser and checking the header, but here is an output from curl:
My actual IP is 91.150.174.50, but it's not static and will change after a restart/reconnection. I don't have any logs from nginx, as I already pointed out. It's probably related to the passthrough option; I think I saw a bug about it but can't find it now.
thanks for the info.
Post this information so it becomes simple to debug. Come talk on Slack if you cannot provide all the requested info properly in one post. There are not many resources here, so we cannot keep going back and forth just to get the basic debug info from you.
How can I contact you on Slack?
@longwuyuan
no IP :-(
but that will change in a moment since I need to switch to a different place and have to disconnect :-)
Describe the bug:
I'm currently hosting an ASP.NET application in a Kubernetes environment, using the Ingress NGINX Controller (deployed via the official Helm chart) for external access. My application authorizes requests via client-provided certificates. When these certificates fail validation on the server side, the application should return custom error codes and messages, and if the client IP is not in the allowed pool, a 403 HTTP error should be returned.
However, when a request with an expired certificate is sent, the expected 403 HTTP error is not returned. Instead, NGINX validates the certificate and returns a 400 HTTP error due to issues #8229 and openssl/openssl#14036.
To address this, I enabled ssl-passthrough on the controller and on the Ingress rule for the service, and added extra validation in the application itself. Unfortunately, when ssl-passthrough is enabled on the Ingress, the client IP is rewritten as ::ffff:<INTERNAL_IPV4_OF_INGRESS_POD>.

A similar issue was reported in #8052, where the client IP is always 127.0.0.1, but the solution provided there (setting enable-real-ip: "true" and forwarded-for-header: proxy_protocol in the Nginx Ingress Controller ConfigMap) did not seem to solve my issue. I know that ssl-passthrough works at L4, but is there another workaround to provide the client IP to the application (in HttpContext.Request.RemoteIpAddress or the X-Forwarded-For or X-Real-IP headers)?

What you expected to happen:
I am hoping for a workaround to preserve the client IP in HttpContext.Request.RemoteIpAddress or the X-Forwarded-For or X-Real-IP headers even when ssl-passthrough mode is enabled.

NGINX Ingress controller version:
NGINX Ingress controller
Release: v1.6.4
Build: 69e8833
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.14", GitCommit:"3321ffc07d2f046afdf613796f9032f4460de093", GitTreeState:"clean", BuildDate:"2022-11-09T13:32:47Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
public-ingress-nginx   ingress-nginx-test   4   2023-11-30 13:35:05.882801305 +0000 UTC   deployed   ingress-nginx-4.5.2   1.6.4
Helm values:
I am including relevant sections of the helm chart values, the start of the Ingress rule file, and the application HttpMiddleware logs for more context.
Helm chart values config section:
Start of ingress rule:
Application logs of headers and Client IP:
How to reproduce this issue:
Anything else we need to know:
Kindly help me figure out what I might be missing here. How can I ensure that the client IP is preserved as expected?
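For anyone hitting the same wall: since ssl-passthrough proxies at L4, the controller never parses HTTP and cannot inject X-Forwarded-For or X-Real-IP into the encrypted stream, so header-based ConfigMap options cannot apply to that traffic. A speculative sketch (not a confirmed fix) of the one mechanism that works purely at L4 is setting externalTrafficPolicy: Local on the controller's Service, which stops kube-proxy from SNAT-ing external traffic; the resource names and ports below are placeholders:

```yaml
# Hypothetical ingress-nginx controller Service fragment. With
# externalTrafficPolicy: Local, external traffic is only routed to
# controller pods on the receiving node and is not SNAT-ed, so the
# original client source IP survives at the TCP level.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```

Note the assumption: this preserves the client IP up to the controller only; with passthrough, the backend still sees the controller pod as the TCP peer unless the backend itself accepts the PROXY protocol from the controller.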