
Unable to maintain client IP with SSL Passthrough in Ingress NGINX Controller #10706

Open
yoyrandao opened this issue Nov 30, 2023 · 21 comments
Assignees
Labels
needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
needs-priority
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments


yoyrandao commented Nov 30, 2023

Describe the bug:

I'm currently hosting an ASP.NET application in a Kubernetes environment, using the Ingress NGINX Controller (deployed via the official Helm chart) for external access. My application authorizes requests via client-provided certificates. When server-side validation of these certificates fails, the application should return custom error codes and messages, and if the client IP is not in the allowed pool, a 403 HTTP error should be returned.

However, when a request with an expired certificate is sent, the expected 403 HTTP error is not returned. Instead, NGINX validates the certificate itself and returns a 400 HTTP error, due to the behavior described in #8229 and openssl/openssl#14036.

To address this, I enabled ssl-passthrough on the controller and on the Ingress rule for the service, and added extra validation in the application itself. Unfortunately, when ssl-passthrough is enabled on the Ingress, the client IP is rewritten to ::ffff:<INTERNAL_IPV4_OF_INGRESS_POD>.

A similar issue was reported in #8052, where the client IP was always 127.0.0.1, but the solution provided there (setting enable-real-ip: "true" and forwarded-for-header: proxy_protocol in the controller ConfigMap) did not solve my issue.

I know that ssl-passthrough works at L4, but is there another workaround to provide the client IP to the application (via HttpContext.Request.RemoteIpAddress, or the X-Forwarded-For or X-Real-IP headers)?

What you expected to happen:

I am hoping for a workaround to preserve the client IP in the HttpContext.Request.RemoteIpAddress or X-Forwarded-For or X-Real-IP headers even when ssl-passthrough mode is enabled.
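Since ssl-passthrough operates purely at L4, the standard way to convey the client address on that path is the PROXY protocol, enabled on both the cloud load balancer and the controller. A minimal, hedged sketch of the controller ConfigMap keys usually involved (key names as documented for ingress-nginx; whether the address survives passthrough to the backend depends on the specific setup and chart version):

```yaml
# Sketch: ingress-nginx controller ConfigMap. This only helps if the
# cloud LB in front actually sends the PROXY protocol preamble on its
# TCP frontend.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name depends on the Helm release
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"   # expect PROXY preamble on inbound connections
  enable-real-ip: "true"
```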

NGINX Ingress controller version:
NGINX Ingress controller
Release: v1.6.4
Build: 69e8833
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.14", GitCommit:"3321ffc07d2f046afdf613796f9032f4460de093", GitTreeState:"clean", BuildDate:"2022-11-09T13:32:47Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}

  • How was the ingress-nginx-controller installed:
    public-ingress-nginx ingress-nginx-test 4 2023-11-30 13:35:05.882801305 +0000 UTC deployed ingress-nginx-4.5.2 1.6.4

Helm values:

USER-SUPPLIED VALUES:
commonLabels: {}
controller:
  admissionWebhooks:
    annotations: {}
    certManager:
      enabled: false
    certificate: /usr/local/certificates/cert
    createSecretJob:
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
      securityContext:
        allowPrivilegeEscalation: false
    enabled: true
    existingPsp: ""
    failurePolicy: Fail
    key: /usr/local/certificates/key
    namespaceSelector: {}
    networkPolicyEnabled: false
    objectSelector: {}
    patch:
      enabled: true
      image:
        digest: ""
        image: ingress-nginx/kube-webhook-certgen
        pullPolicy: IfNotPresent
        registry: registry.k8s.io
        tag: v20220916-gd32f8c343
      labels: {}
      nodeSelector:
        kubernetes.io/os: linux
      podAnnotations: {}
      priorityClassName: ""
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      tolerations: []
    patchWebhookJob:
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
    port: 8443
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
  allowSnippetAnnotations: true
  autoscaling:
    apiVersion: autoscaling/v2
    behavior: {}
    enabled: true
    maxReplicas: 5
    minReplicas: 1
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
  autoscalingTemplate: []
  config:
    allow-snippet-annotations: "true"
    enable-real-ip: "true"
    forwarded-for-header: proxy_protocol
    gzip-min-length: "1"
    gzip-types: text/plain application/json application/xml
    use-gzip: "true"
  configMapNamespace: ""
  containerName: controller
  containerPort:
    http: 80
    https: 443
  customTemplate:
    configMapKey: ""
    configMapName: ""
  dnsPolicy: ClusterFirst
  electionID: ""
  enableTopologyAwareRouting: false
  existingPsp: ""
  extraArgs:
    enable-ssl-passthrough: ""
  extraEnvs: []
  healthCheckPath: /healthz
  hostNetwork: false
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443
  image:
    allowPrivilegeEscalation: true
    chroot: false
    digest: ""
    image: ingress-nginx/controller
    pullPolicy: IfNotPresent
    registry: registry.k8s.io
    runAsUser: 101
    tag: v1.6.4
  ingressClass: nginx
  ingressClassByName: false
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: false
    enabled: true
    name: public-ingress-nginx
    parameters: {}
  keda:
    enabled: false
  kind: DaemonSet
  livenessProbe:
    failureThreshold: 5
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  maxmindLicenseKey: ""
  metrics:
    enabled: true
    port: 10254
    prometheusRule:
      additionalLabels: {}
      enabled: false
      rules: []
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
    serviceMonitor:
      enabled: false
  minAvailable: 1
  name: controller
  nodeSelector:
    kubernetes.io/os: linux
  opentelemetry:
    containerSecurityContext:
      allowPrivilegeEscalation: false
    enabled: false
    image: registry.k8s.io/ingress-nginx/opentelemetry:v20230107-helm-chart-4.4.2-2-g96b3d2165@sha256:331b9bebd6acfcd2d3048abbdd86555f5be76b7e3d0b5af4300b04235c6056c9
  publishService:
    enabled: true
    pathOverride: ""
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  replicaCount: 1
  reportNodeInternalIp: false
  resources:
    requests:
      cpu: 300m
      memory: 220Mi
  scope:
    enabled: false
  service:
    annotations: {}
    appProtocol: true
    enableHttp: false
    enableHttps: true
    enabled: true
    external:
      enabled: true
    externalIPs: []
    externalTrafficPolicy: Local
    internal:
      enabled: false
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    labels: {}
    loadBalancerIP: ""
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    ports:
      https: 443
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
  tcp:
    configMapNamespace: ""
  tolerations: []
  udp:
    annotations: {}
    configMapNamespace: ""
  updateStrategy: {}
  watchIngressWithoutClass: false
defaultBackend:
  enabled: false
podSecurityPolicy:
  enabled: false
rbac:
  create: true
  scope: false
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  create: true
  name: ""
  • Current State of the controller:
Name:         public-ingress-nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=public-ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.6.4
              helm.sh/chart=ingress-nginx-4.5.2
Annotations:  meta.helm.sh/release-name: public-ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx-test
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • Additional context:

I am including relevant sections of the helm chart values, the start of the Ingress rule file, and the application HttpMiddleware logs for more context.

Helm chart values config section:

config:
    use-gzip: "true"
    gzip-types:
        text/plain
        application/json
        application/xml
    gzip-min-length: "1"
    allow-snippet-annotations: "true"
    enable-real-ip: "true"
    forwarded-for-header: proxy_protocol

Start of ingress rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"

Application logs of headers and Client IP:

[
  {
    "@t": "2023-11-30T15:07:57.6371371Z",
    "@m": "X-Forwarded-For: ''",
    "@i": "e6231517",
    "@l": "Warning",
    "SourceContext": "Program",
    "RequestId": "0HMVHMEUBTND3:00000001",
    "RequestPath": "/api",
    "ConnectionId": "0HMVHMEUBTND3",
    "Scope": [
      "Start processing the request with trace: 0HMVHMEUBTND3:00000001"
    ]
  },
  {
    "@t": "2023-11-30T15:07:57.6371881Z",
    "@m": "Connection:RemoteIpAddress: '::ffff:172.17.4.148'",
    "@i": "05dda92f",
    "@l": "Warning",
    "SourceContext": "Program",
    "RequestId": "0HMVHMEUBTND3:00000001",
    "RequestPath": "/api",
    "ConnectionId": "0HMVHMEUBTND3",
    "Scope": [
      "Start processing the request with trace: 0HMVHMEUBTND3:00000001"
    ]
  }
]
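As an aside, the `::ffff:` prefix in the RemoteIpAddress log above is just the IPv4-mapped IPv6 form; the dotted-quad address can be recovered mechanically. A throwaway shell sketch (the log line is abbreviated from the output above):

```shell
# A log line similar to the application output above (abbreviated)
log="Connection:RemoteIpAddress: '::ffff:172.17.4.148'"

# Strip the IPv4-mapped IPv6 prefix to get the plain IPv4 address
ip=$(printf '%s\n' "$log" | sed -n "s/.*::ffff:\([0-9.]*\).*/\1/p")
echo "$ip"
```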

How to reproduce this issue:

  1. Enable ssl-passthrough on the ingress-nginx controller.
  2. Enable ssl-passthrough on the Ingress rule.
  3. Perform a request from an external IP.
  4. Log the request headers / client IP.

Anything else we need to know:

Kindly help me figure out what I might be missing here. How can I ensure that the client IP is preserved as expected?

@yoyrandao yoyrandao added the kind/bug Categorizes issue or PR as related to a bug. label Nov 30, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Nov 30, 2023
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan (Contributor)

/remove-kind bug

  • Please test with only these 2 annotations, because other SSL-related annotations likely make no sense when you are not terminating SSL on the controller:
     nginx.ingress.kubernetes.io/ssl-passthrough: "true"
     nginx.ingress.kubernetes.io/force-ssl-redirect: "true"

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Dec 1, 2023
@strongjz (Member)

Which cloud load balancer are you using? That determines which method to use to get the client IP: L4 uses proxy protocol, L7 uses the X-Forwarded-For header.

@strongjz strongjz self-assigned this Mar 28, 2024
@strongjz (Member)

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Mar 28, 2024

mesiu84 commented Jul 5, 2024

We have a very similar configuration and the same issue; in the end, the logs show the ingress IP and not the client IP.

Our configuration looks like this:
Internet -> LoadBalancer set to L4 proxy protocol -> nginx-ingress with ssl passthrough -> Apache (we need to use it in here because of some dependency that we have) -> App

The issue is that in the Apache logs we see only the ingress IP instead of the client's, and the same IP is visible in the application that runs at the end of the chain.

Here are some configs we have:

Ingress-nginx values.yaml for helm:

metadata:
  namespace: ingress-nginx
controller:
  extraArgs:
    enable-ssl-passthrough: ""
  replicaCount: 1
  allowSnippetAnnotations: true
  service:
    type: LoadBalancer
    externalTrafficPolicy: "Local"
    annotations:
      << HERE GOES CUSTOM LOADBALANCER CONFIG TO SET IT TO WORK WITH L4 LAYER >>
  config:
    use-proxy-protocol: "true"
    enable-real-ip: "true"
    use-forwarded-headers: "true"

but I also tested the following entries:

    real-ip-header: "proxy_protocol"
    forwarded-for-header: "proxy_protocol"
    compute-full-forwarded-for: "true"

none of these worked

In the Ingress we have added the following annotations:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: ingress
  namespace: app
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: apache-service
            port:
              number: 443
        path: /
        pathType: Prefix

We need to set it up like this because, in the end, the SSL certificate needs to be applied at the Apache level: one of the Apache modules we use can't work without SSL.

I tested multiple configurations and in the end I can't see the client IP in any logs. I even ran tcpdump on Apache to see what comes in, but there is no IP other than the one from the ingress.

Another issue is that with the above configuration the ingress logs don't show any information about requests to our application web page :-) probably because we use ssl-passthrough, but that's something for a different bug.

Any help would be welcome. Does anyone have a similar configuration and was able to pass the client IP through?

I'll just add that if we bypass the ingress and connect the Apache service directly to the LoadBalancer, we see the client IP in the logs and everything works fine. So the issue is somewhere in nginx: either a configuration detail that we're missing, or a bug where nginx can't pass that IP properly.
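For what it's worth, when Apache sits directly behind a TCP/PROXY-protocol frontend (as in the working LB-to-Apache test above), mod_remoteip can decode the preamble itself so Apache logs the real client address. A hedged sketch (the RemoteIPProxyProtocol directive exists since Apache 2.4.31; the module path is an assumption that varies per distribution):

```apacheconf
# Load mod_remoteip (path varies per distribution; assumption here)
LoadModule remoteip_module modules/mod_remoteip.so

# Decode the PROXY protocol preamble so %a / REMOTE_ADDR reflect the
# real client address instead of the load balancer's
RemoteIPProxyProtocol On
```

This only applies on connections where the upstream actually sends the PROXY preamble; enabling it on a plain connection will break requests.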

@longwuyuan (Contributor)

@mesiu84 remove the backend-protocol annotation, as I am not sure it helps when there is no backend proxying but a direct passthrough. Also, your post does not say which cloud. Which cloud is it?

@longwuyuan (Contributor)

Also add the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation, as that is needed.


mesiu84 commented Jul 5, 2024

@longwuyuan I can remove backend-protocol, but it doesn't change anything, so that's not a big deal for me; it's there mostly to make sure HTTPS is used in the end. Adding the annotation you pointed out doesn't change anything either; I still see the ingress IP in the Apache logs :-(

@longwuyuan (Contributor)

@mesiu84 my test works, so to test your environment:

  • remove backend-protocol annotation
  • add the force-ssl-redirect annotation
  • write back which cloud

@longwuyuan (Contributor)

The question about which cloud has been asked twice, but an answer has not been posted even once, so it's not possible to know whether you have proxy-protocol enabled on the cloud/infra LB. For retaining the client IP, enabling proxy-protocol is required on both the controller and the LB: https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
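To make that requirement concrete: PROXY protocol v1 is a single human-readable line that the LB prepends before the TLS bytes, which is how the client address survives an L4 hop without any HTTP headers being involved. A toy sketch parsing such a preamble (all addresses fabricated):

```shell
# PROXY protocol v1 preamble fields: "PROXY" proto client-ip proxy-ip
# client-port dest-port (values below are made up)
line='PROXY TCP4 203.0.113.7 10.0.0.5 51234 443'

# The original client IP is the third whitespace-separated field
client_ip=$(printf '%s\n' "$line" | awk '{print $3}')
echo "$client_ip"
```

If the two sides disagree (the LB sends the preamble but the controller doesn't expect it, or vice versa), connections typically fail or the wrong peer address gets logged, which matches the symptom in this thread.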


mesiu84 commented Jul 5, 2024

@mesiu84 my test works so to test your environ ;

* remove backend-protocol annotation

[DONE]

* add the force-ssl-redirect annotation

[DONE]

* write back which cloud

UpCloud

items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    name: ingress
    namespace: app
  spec:
    ingressClassName: nginx
    rules:
    - host: example.com
      http:
        paths:
        - backend:
            service:
              name: apache-service
              port:
                number: 443
          path: /
          pathType: Prefix


mesiu84 commented Jul 5, 2024

If you need it, here is the config for the UpCloud LoadBalancer; it's from their official documentation and works perfectly:

    annotations:
      service.beta.kubernetes.io/upcloud-load-balancer-config: |
        {
          "frontends": [
            {
              "name": "http",
              "mode": "http",
              "port": 80,
              "default_backend": "http"
            },
            {
              "name": "https",
              "mode": "tcp",
              "port": 443,
              "default_backend": "https",
              "tls_configs": []
            }
          ],
          "backends": [
            {
              "name": "http"
            },
            {
              "name": "https",
              "properties": {
                "outbound_proxy_protocol": "v1"
              }
            }
          ]
        }

but there is nothing special here

@longwuyuan (Contributor)

Thanks. Can you show:

  • kubectl -n ingress-nginx describe cm ingress-nginx-controller
  • screenshot of UpCloud LB configuration that is proof that proxy-protocol is enabled on the UpCloud LoadBalancer
  • kubectl -n ingress-nginx get all -o wide
  • Your exact curl command as you execute from your laptop with -v and its response
  • The output of curl ifconfig.me from your laptop
  • kubectl -n ingress-nginx logs $CONTROLLER_POD_NAME


mesiu84 commented Jul 5, 2024

@longwuyuan

kubectl -n ingress-nginx describe cm ingress-nginx-controller
Name:         ingress-nginx-controller
Namespace:    ingress-nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.10.1
              argocd.argoproj.io/instance=01-ingress-nginx
              helm.sh/chart=ingress-nginx-4.10.1
Annotations:  <none>

Data
====
use-forwarded-headers:
----
true
use-proxy-protocol:
----
true
allow-snippet-annotations:
----
true
enable-real-ip:
----
true

BinaryData
====

Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  UPDATE  22m (x2 over 22h)  nginx-ingress-controller  ConfigMap ingress-nginx/ingress-nginx-controller
  Normal  CREATE  22m                nginx-ingress-controller  ConfigMap ingress-nginx/ingress-nginx-controller
  Normal  CREATE  5m35s              nginx-ingress-controller  ConfigMap ingress-nginx/ingress-nginx-controller

[screenshot]

kubectl -n ingress-nginx get all -o wide
NAME                                            READY   STATUS      RESTARTS   AGE     IP              NODE               NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-djmp4        0/1     Completed   0          22h     192.168.1.214   node-d62vw   <none>           <none>
pod/ingress-nginx-controller-5647f457dc-b2bd7   1/1     Running     0          8m12s   192.168.1.199   node-d62vw   <none>           <none>

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                           PORT(S)                      AGE   SELECTOR
service/ingress-nginx-controller             LoadBalancer   XXX.XXX.XXX.XXX   lb-SOMETHING_SOMETHING-1.upcloudlb.com   80:31112/TCP,443:32442/TCP   71d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP      10.132.1.8       <none>                                                443/TCP                      71d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics     ClusterIP      10.131.217.194   <none>                                                10254/TCP                    71d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                                                                                     SELECTOR
deployment.apps/ingress-nginx-controller   1/1     1            1           71d   controller   registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                  DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/ingress-nginx-controller-5647f457dc   1         1         1       58d   controller   registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5647f457dc
replicaset.apps/ingress-nginx-controller-6bdd97f57c   0         0         0       71d   controller   registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6bdd97f57c
replicaset.apps/ingress-nginx-controller-7c65df447b   0         0         0       8d    controller   registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c65df447b

NAME                                       COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES                                                                                                                              SELECTOR
job.batch/ingress-nginx-admission-create   1/1           4s         22h   create       registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   batch.kubernetes.io/controller-uid=e104bc4b-38fa-4301-9ae5-281065fda21d

I'm not using curl; I'm opening the page in a browser and checking the headers. But here is output from curl:

* Host example.com:443 was resolved.
* IPv6: (none)
* IPv4: XXX.XXX.XXX.XXX
*   Trying XXX.XXX.XXX.XXX:443...
* Connected to example.com (XXX.XXX.XXX.XX) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS
* ALPN: server accepted http/1.1
* Server certificate:
*  subject: CN=*.example.com
*  start date: May 29 07:17:33 2024 GMT
*  expire date: Aug 27 07:17:32 2024 GMT
*  subjectAltName: host "example.com" matched cert's "*.example.com"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
*   Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
*   Certificate level 1: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
*   Certificate level 2: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/1.x
> GET /itsm HTTP/1.1
> Host: example.com
> User-Agent: curl/8.5.0
> Accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
< HTTP/1.1 302 Found
< Date: Fri, 05 Jul 2024 11:40:15 GMT
< Server: Apache
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=15768000; includeSubdomains;
< X-Content-Type-Options: nosniff
< Set-Cookie: _opensaml_req_ss%3Amem%3A3a8aebaec2e37b5268e6962dc29f75952d668f21b90b2701bb5208822f0bc450=_19b1a2e3f5c615474e8f666c544f62ff; path=/; secure; HttpOnly; SameSite=None; SameSite=None
< Expires: Wed, 01 Jan 1997 12:00:00 GMT
< Cache-Control: private,no-store,no-cache,max-age=0
< Location: https://example.com:443/app/profile/SAML2/Redirect/SSO?SAMLRequest=jZLRToMwFIZfhfQeCgyYNmPJ3C5cMh0Z6IU3hpWDaywt9pSpby8b08ybZdf9%2B%2F3nfO0Ey0a2bNbZndrARwdona9GKmTHg5R0RjFdokCmygaQWc7y2cOKhZ7PWqOt5loSZ4YIxgqt5lph14DJwewFh6fNKiU7a1tklHbv2GruQQ3cApe6q9wK9h7XDc13YrvVEuzOQ9T00BDSbJ0XxFn0IwlVHuDXoFgUjaioGtoPVwsJJ9YGKmH6MM3zNXGWi5S8BrfboAxhVMc8CeJoHMFNnSQJj6OoTsK67mOIHSwV2lLZlIR%2BGLn%2B2PXjIghY5LMgfiFOdnJwJ1Ql1NtlYdshhOy%2BKDJ32O8ZDB536wNkOjloZ8dic%2FYQl7Hlr30yvcI1%2Frme0LO2obpljz1%2Buci0FPzbmUmpP%2BcGSgspCQidDlf%2Bf5jpDw%3D%3D&RelayState=ss%3Amem%3A3a8aebaec2e37b5268e6962dc29f75952d668f21b90b2701bb5208822f0bc450
< Content-Length: 817
< Content-Type: text/html; charset=iso-8859-1
< 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="https://example.com:443/app/profile/SAML2/Redirect/SSO?SAMLRequest=jZLRToMwFIZfhfQeCgyYNmPJ3C5cMh0Z6IU3hpWDaywt9pSpby8b08ybZdf9%2B%2F3nfO0Ey0a2bNbZndrARwdona9GKmTHg5R0RjFdokCmygaQWc7y2cOKhZ7PWqOt5loSZ4YIxgqt5lph14DJwewFh6fNKiU7a1tklHbv2GruQQ3cApe6q9wK9h7XDc13YrvVEuzOQ9T00BDSbJ0XxFn0IwlVHuDXoFgUjaioGtoPVwsJJ9YGKmH6MM3zNXGWi5S8BrfboAxhVMc8CeJoHMFNnSQJj6OoTsK67mOIHSwV2lLZlIR%2BGLn%2B2PXjIghY5LMgfiFOdnJwJ1Ql1NtlYdshhOy%2BKDJ32O8ZDB536wNkOjloZ8dic%2FYQl7Hlr30yvcI1%2Frme0LO2obpljz1%2Buci0FPzbmUmpP%2BcGSgspCQidDlf%2Bf5jpDw%3D%3D&amp;RelayState=ss%3Amem%3A3a8aebaec2e37b5268e6962dc29f75952d668f21b90b2701bb5208822f0bc450">here</a>.</p>
</body></html>
* Connection #0 to host example.com left intact

My actual IP is 91.150.174.50, but it's not static and will change after a restart/reconnection. I don't see it anywhere in the logs; I only see the 192.168... address from the ingress-nginx pod.

I don't have any access logs from nginx, as I already pointed out. It's probably related to the passthrough option; I think I saw a bug about it, but I can't find it now.


mesiu84 commented Jul 5, 2024

Here are also the headers I see in the browser:
[screenshot]
As you can see, there is nothing like X-Forwarded-For or anything similar.

@longwuyuan (Contributor)

thanks for the info.

  • You showed some JSON about the LB that cannot be analyzed.
  • The JSON you showed says v1. Are there other options for that field?
  • You showed a screenshot of the LB, but no proxy-protocol config is visible there, so it's hard to trust that the LB has proxy-protocol capability and that it's enabled.
  • You showed a screenshot of browser headers, but it's not useful for confirming that your LB has proxy-protocol capability and that proxy-protocol is enabled.
  • You can run the simple test below, without ssl-passthrough, to confirm whether proxy-protocol is enabled on the LB:
    • kubectl create deploy test0 --image nginx:alpine --port 80
    • kubectl expose deploy test0 --port 80
    • kubectl create ing test0 --class nginx --rule test0.dev.mycompany.com/"*"=test0:80
    • curl -v test0.dev.mycompany.com --resolve test0.dev.mycompany.com:80:$IPADDRESS-OF-UPCLOUD-LB
    • kubectl -n ingress-nginx logs $CONTROLLER-POD-NAME
    • curl ifconfig.me

Post this information so it becomes simple to debug. Come talk on Slack if you cannot provide all the requested info properly in one post. There are not many resources here, so we can't keep going back and forth just to get the basic debug info.


mesiu84 commented Jul 5, 2024

> Come talk on slack if you can not provide all the requested info properly in one post.

How can I contact you on Slack?


mesiu84 commented Jul 5, 2024

Here is the screenshot from UpCloud where you can select other versions of the proxy protocol:
[screenshot]
In our case only v1 (and disabled) work; v2 doesn't work at all, but honestly right now I don't remember the difference between them.

The screenshot I added is not useless: it shows that for HTTPS we use TCP mode, which is proxy mode according to the UpCloud documentation; for comparison, you can see that HTTP is still set to HTTP mode.


mesiu84 commented Jul 5, 2024

@longwuyuan
Two last things.

  1. Log from nginx:
I0705 12:18:27.519158       7 status.go:304] "updating Ingress status" namespace="default" ingress="test0" currentValue=null newValue=[{"hostname":"lb-SOMETHING_SOMMETHING-1.upcloudlb.com"}]
I0705 12:18:27.524309       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test0", UID:"d779c9d4-071e-46cf-b619-4a249c51d88d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"85488683", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync

no IP :-(

  2. My current IP:
curl ifconfig.me
91.150.174.50

but that will change in a moment, since I need to move to a different place and have to disconnect :-)

@longwuyuan (Contributor)

  • thank you for the info
  • Slack is the ingress-nginx channel on kubernetes.slack.com; register at slack.kubernetes.io
  • The info you are giving is coming in pieces and uncorrelated, so it does not help in knowing the status of proxy-protocol on the LB. For example, you showed 2 log messages with no IP address info in them. I typed the exact commands for you, but you are not following them, so currently I have no comment based on the uncorrelated info you have provided. It delays triaging this issue :-)

@longwuyuan (Contributor)

  • thank you for the info
  • Slack is the ingress-nginx channel on kubernetes.slack.com; register at slack.kubernetes.io
  • Provide the info in one single post, as per the steps I typed; it will save your time & effort.
