
TLS certificate lookup fails for server aliases unless specified host is loaded at least once #11067

Open
captainswain opened this issue Mar 5, 2024 · 7 comments
Labels
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
  • needs-priority
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@captainswain

What happened:

Reopening an issue that is the same as #4832.

I defined an Ingress resource with a server alias on a separate domain, using the nginx.ingress.kubernetes.io/server-alias annotation, and two certificates: one wildcard that matches the primary domain and one wildcard that matches the alias. When a request matches the alias but not the primary host, the fake self-signed certificate is served. When a request matches the primary host, the configured certificate is served. If I manually add a different subdomain of the server-alias domain as a host in the Ingress, the certificate is loaded as intended for the server-alias subdomain.

What you expected to happen:
I expected to receive the configured certificate for the server-alias used in the ingress, without having to define it as a host.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):


NGINX Ingress controller
Release: v1.10.0
Build: 71f78d4
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.3


Kubernetes version (use kubectl version):
Client Version: v1.28.7+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.7+k3s1

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release): Fedora 39

  • Kernel (e.g. uname -a): Linux fedora 6.6.13-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Jan 20 18:03:28 UTC 2024 x86_64 GNU/Linux

  • Install tools:

    • k3s
  • Basic cluster related info:

    • v1.28.7+k3s1
    • fedora Ready control-plane,master 158m v1.28.7+k3s1 192.168.1.109 Fedora Linux 39 (Workstation Edition) 6.6.13-200.fc39.x86_64 containerd://1.7.11-k3s2
  • How was the ingress-nginx-controller installed:

    • If helm was used then please show output of helm ls -A | grep -i ingress
    ingress-nginx   ingress-nginx   1               2024-03-04 13:59:07.9611125 -0800 PST   deployed        ingress-nginx-4.10.0    1.10.0     
    
    • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
USER-SUPPLIED VALUES:
null
  • If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used

  • if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances

  • Current State of the controller:

    • kubectl describe ingressclasses
  Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.10.0
              helm.sh/chart=ingress-nginx-4.10.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAMESPACE       NAME                                                READY   STATUS    RESTARTS       AGE    IP          NODE     NOMINATED NODE   READINESS GATES
kube-system     pod/coredns-6799fbcd5-l5f94                         1/1     Running   1 (109m ago)   170m   10.42.0.5   fedora   <none>           <none>
kube-system     pod/local-path-provisioner-6c86858495-9vsg8         1/1     Running   1 (109m ago)   170m   10.42.0.6   fedora   <none>           <none>
kube-system     pod/svclb-ingress-nginx-controller-c11be5c2-zpz96   2/2     Running   2 (109m ago)   141m   10.42.0.4   fedora   <none>           <none>
default         pod/demo-5f7bb54887-qkhmf                           1/1     Running   1 (109m ago)   140m   10.42.0.2   fedora   <none>           <none>
ingress-nginx   pod/ingress-nginx-controller-6dc8c8fdf4-jwwbp       1/1     Running   1 (109m ago)   141m   10.42.0.3   fedora   <none>           <none>
kube-system     pod/metrics-server-67c658944b-6gjkg                 1/1     Running   1 (109m ago)   170m   10.42.0.7   fedora   <none>           <none>
default         pod/echo-test-864d879bcf-xl2qf                      1/1     Running   0              64m    10.42.0.8   fedora   <none>           <none>
default         pod/echo-test-864d879bcf-96bfj                      1/1     Running   0              64m    10.42.0.9   fedora   <none>           <none>

NAMESPACE       NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE    SELECTOR
default         service/kubernetes                           ClusterIP      10.43.0.1       <none>          443/TCP                      170m   <none>
kube-system     service/kube-dns                             ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP       170m   k8s-app=kube-dns
kube-system     service/metrics-server                       ClusterIP      10.43.85.57     <none>          443/TCP                      170m   k8s-app=metrics-server
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.43.51.35     <none>          443/TCP                      141m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
default         service/demo                                 ClusterIP      10.43.129.220   <none>          80/TCP                       140m   app=demo
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.43.28.240    192.168.1.109   80:32206/TCP,443:31521/TCP   141m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
default         service/echo-test                            ClusterIP      10.43.24.96     <none>          80/TCP                       64m    app=echo-test

NAMESPACE     NAME                                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE    CONTAINERS             IMAGES                                                SELECTOR
kube-system   daemonset.apps/svclb-ingress-nginx-controller-c11be5c2   1         1         1       1            1           <none>          141m   lb-tcp-80,lb-tcp-443   rancher/klipper-lb:v0.4.5,rancher/klipper-lb:v0.4.5   app=svclb-ingress-nginx-controller-c11be5c2

NAMESPACE       NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS               IMAGES                                                                                                                     SELECTOR
kube-system     deployment.apps/local-path-provisioner     1/1     1            1           170m   local-path-provisioner   rancher/local-path-provisioner:v0.0.26                                                                                     app=local-path-provisioner
kube-system     deployment.apps/coredns                    1/1     1            1           170m   coredns                  rancher/mirrored-coredns-coredns:1.10.1                                                                                    k8s-app=kube-dns
default         deployment.apps/demo                       1/1     1            1           140m   nginx                    nginx                                                                                                                      app=demo
ingress-nginx   deployment.apps/ingress-nginx-controller   1/1     1            1           141m   controller               registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system     deployment.apps/metrics-server             1/1     1            1           170m   metrics-server           rancher/mirrored-metrics-server:v0.6.3                                                                                     k8s-app=metrics-server
default         deployment.apps/echo-test                  2/2     2            2           64m    echo-test                nginxdemos/hello                                                                                                           app=echo-test

NAMESPACE       NAME                                                  DESIRED   CURRENT   READY   AGE    CONTAINERS               IMAGES                                                                                                                     SELECTOR
kube-system     replicaset.apps/local-path-provisioner-6c86858495     1         1         1       170m   local-path-provisioner   rancher/local-path-provisioner:v0.0.26                                                                                     app=local-path-provisioner,pod-template-hash=6c86858495
kube-system     replicaset.apps/coredns-6799fbcd5                     1         1         1       170m   coredns                  rancher/mirrored-coredns-coredns:1.10.1                                                                                    k8s-app=kube-dns,pod-template-hash=6799fbcd5
default         replicaset.apps/demo-5f7bb54887                       1         1         1       140m   nginx                    nginx                                                                                                                      app=demo,pod-template-hash=5f7bb54887
ingress-nginx   replicaset.apps/ingress-nginx-controller-6dc8c8fdf4   1         1         1       141m   controller               registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6dc8c8fdf4
kube-system     replicaset.apps/metrics-server-67c658944b             1         1         1       170m   metrics-server           rancher/mirrored-metrics-server:v0.6.3                                                                                     k8s-app=metrics-server,pod-template-hash=67c658944b
default         replicaset.apps/echo-test-864d879bcf                  2         2         2       64m    echo-test                nginxdemos/hello                                                                                                           app=echo-test,pod-template-hash=864d879bcf

  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name:             ingress-nginx-controller-6dc8c8fdf4-jwwbp
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             fedora/192.168.1.109
Start Time:       Mon, 04 Mar 2024 13:59:18 -0800
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.10.0
                  helm.sh/chart=ingress-nginx-4.10.0
                  pod-template-hash=6dc8c8fdf4
Annotations:      <none>
Status:           Running
IP:               10.42.0.3
IPs:
  IP:           10.42.0.3
Controlled By:  ReplicaSet/ingress-nginx-controller-6dc8c8fdf4
Containers:
  controller:
    Container ID:    containerd://7bb9af2efeeef87b0e9afa6a3cb57a508159bef066a8f034eed5ad056126bc4c
    Image:           registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
    Image ID:        registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
    Ports:           80/TCP, 443/TCP, 8443/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --enable-metrics=false
    State:          Running
      Started:      Mon, 04 Mar 2024 14:31:28 -0800
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Mon, 04 Mar 2024 13:59:30 -0800
      Finished:     Mon, 04 Mar 2024 14:31:26 -0800
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-6dc8c8fdf4-jwwbp (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hv6n6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-hv6n6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  RELOAD  30m (x8 over 110m)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration

  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.10.0
                          helm.sh/chart=ingress-nginx-4.10.0
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.28.240
IPs:                      10.43.28.240
LoadBalancer Ingress:     192.168.1.109
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32206/TCP
Endpoints:                10.42.0.3:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31521/TCP
Endpoints:                10.42.0.3:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

  • Current state of ingress object, if applicable:

    • kubectl -n <appnamespace> get all,ing -o wide
    • kubectl -n <appnamespace> describe ing <ingressname>
    • If applicable, then your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
  • Others:

    • Any other related information like ;
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help

How to reproduce this issue:

Install K3s (any distro; also seen on EKS)

Install the ingress controller using helm

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

Create Dummy certificates

# Create a root CA 
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout rootCA.key -out rootCA.pem -subj "/C=US/ST=New York/L=New York/O=Example Inc./OU=Root CA/CN=example.com"

cat > cluster_foo_example.conf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = v3_req
distinguished_name = dn

[dn]
C = US
ST = New York
L = New York
O = "Example, Inc."
OU = "Cluster Foo Example"
CN = cluster.foo.example

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = cluster.foo.example
DNS.2 = *.cluster.foo.example
EOF

openssl req -new -nodes -x509 -newkey rsa:2048 -keyout cluster_foo_example.key -out cluster_foo_example.crt -config cluster_foo_example.conf -days 365

cat > random_bar_example.conf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
x509_extensions = v3_req
distinguished_name = dn

[dn]
C = US
ST = California
L = San Francisco
O = "Bar Inc."
OU = "Random Bar Example"
CN = random.bar.example

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = random.bar.example
DNS.2 = *.random.bar.example
EOF


openssl req -new -nodes -x509 -newkey rsa:2048 -keyout random_bar_example.key -out random_bar_example.crt -config random_bar_example.conf -days 365

# Create the Kubernetes secret for cluster.foo.example
kubectl create secret tls cluster-foo-tls \
  --key cluster_foo_example.key \
  --cert cluster_foo_example.crt \
  --namespace ingress-nginx

# Create the Kubernetes secret for random.bar.example
kubectl create secret tls random-bar-tls \
  --key random_bar_example.key \
  --cert random_bar_example.crt \
  --namespace ingress-nginx

Install an application that will act as the default backend (it is just an echo app)

kubectl apply -n ingress-nginx -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml

Create an ingress (please add any additional annotation required)

echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations: 
    nginx.ingress.kubernetes.io/server-alias: 'test.cluster.foo.example'
  name: test-ingress
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cluster.foo.example
    - '*.cluster.foo.example'
    secretName: cluster-foo-tls
  - hosts:
    - random.bar.example
    - '*.random.bar.example'
    secretName: random-bar-tls
  rules:
  - host: test.random.bar.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
" | kubectl apply -f -


Make a request

Update /etc/hosts so that test.cluster.foo.example and test.random.bar.example resolve to 127.0.0.1.
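For example, the following emits the needed entries (a sketch assuming the controller is reachable on 127.0.0.1, as in this setup; append them with sudo tee):

```shell
# Emit /etc/hosts entries mapping both test hostnames to the loopback
# address used in this repro; pipe into `sudo tee -a /etc/hosts` to apply.
printf '127.0.0.1 %s\n' test.cluster.foo.example test.random.bar.example
```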

➜ curl --insecure -vvI https://test.random.bar.example 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }'  

* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server accepted h2
* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Bar Inc.; OU=Random Bar Example; CN=random.bar.example


➜ curl --insecure -vvI https://test.cluster.foo.example/ 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }'

* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server accepted h2
* Server certificate:
*  subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
*  start date: Mar  5 06:54:48 2024 GMT
*  expire date: Mar  5 06:54:48 2025 GMT
*  issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
*  SSL certificate verify result: self-signed certificate (18), continuing anyway.


Expect to see test.cluster.foo.example using the correct certificate

Anything else we need to know: if you add a valid host entry, the cert is loaded (see comment in manifest above).
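A workaround sketch (not the desired behavior): listing the alias as an explicit rule host makes the controller emit a server block for it, so the certificate lookup succeeds. A hypothetical extra rule for the ingress above:

```yaml
# Workaround: add the server-alias hostname as a real rule host so a
# dedicated server block (with working certificate lookup) is generated.
- host: test.cluster.foo.example
  http:
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: http-svc
          port:
            number: 80
```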

@captainswain captainswain added the kind/bug Categorizes issue or PR as related to a bug. label Mar 5, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 5, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan
Contributor

/remove-kind bug

  • Hoping to get some conclusive proof as data before applying the bug label.
  • Can you post links to docs/references about the host not being required? Asking because I think there is text out there saying that hosts in tls and hosts in http fields must match.
  • Also, I read that the server-alias implementation just copies the config of the host into a new server block and just sets the server block's name to the value of the alias.

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Mar 5, 2024
@longwuyuan
Contributor

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Mar 5, 2024
@captainswain
Author

captainswain commented Mar 5, 2024

Hi there @longwuyuan,

First off, thanks for the reply!

Can you post links to docs/references about the host not being required? Asking because I think there is text out there saying that hosts in tls and hosts in http fields must match.

I could not find specific documentation regarding the host not being required. In this case, wouldn't the hosts and the tls hosts match on the wildcard?
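One way to check the wildcard-match question locally is OpenSSL's hostname checker. A minimal sketch (assumes OpenSSL 1.1.1+ for -addext; the throwaway cert mirrors the SANs of cluster_foo_example.crt from the repro steps):

```shell
set -e
dir=$(mktemp -d)
# Throwaway self-signed cert with the same SANs as the repro's foo cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/tls.key" -out "$dir/tls.crt" \
  -subj "/CN=cluster.foo.example" \
  -addext "subjectAltName=DNS:cluster.foo.example,DNS:*.cluster.foo.example" \
  2>/dev/null
# Ask OpenSSL whether the alias hostname is covered by this certificate;
# it reports whether test.cluster.foo.example matches the wildcard SAN.
openssl x509 -in "$dir/tls.crt" -noout -checkhost test.cluster.foo.example
```

So at the certificate level the wildcard does cover the alias; the open question is only whether the controller's certificate lookup consults the wildcard for server-alias names.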

Also, I read that the server-alias implementation just copies the config of the host into a new server block, and just sets the server block's name to the value of the alias.

Here is the server block created with the alias; it looks identical apart from the addition of the alias domain in server_name.

	## start server test.random.bar.example
	server {
		server_name test.random.bar.example test.cluster.foo.example ;
		
		http2 on;
		
		listen 80  ;
		listen [::]:80  ;
		listen 443  ssl;
		listen [::]:443  ssl;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "ingress-nginx";
			set $ingress_name   "test-ingress";
			set $service_name   "http-svc";
			set $service_port   "80";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					preserve_trailing_slash = false,
					use_port_in_redirects = false,
					global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "ingress-nginx-http-svc-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
	}
	## end server test.random.bar.example

If you add another host to the ingress rules, I do see a new server block created, and the certificate works as intended.


github-actions bot commented Apr 5, 2024

This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.

@github-actions github-actions bot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Apr 5, 2024
@ftw-soft

ftw-soft commented Oct 10, 2024

We're experiencing this bug in our system right now, and we're unable to use the NGINX ingress with aliases that use regular expressions together with wildcard certificates. As a workaround we had to build something like a proxy on top of it until this is fixed.

@longwuyuan
Contributor

I am not sure what the bug is, even though I acknowledge the info provided here is a helpful effort.

In particular, the controller does not support wildcard SNI.

Also, the values in the tls hosts field do not match the names used in the TLS secret. The secret does not contain a subject or alternative name for test.cluster.foo.example; I welcome being corrected, but currently I believe that is needed.
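For what it's worth, what names the secret's certificate actually carries can be checked directly from its subject alternative name extension. A sketch assuming OpenSSL 1.1.1+ (the -addext line stands in for the conf file used in the repro steps; against a live cluster one would instead decode tls.crt from the cluster-foo-tls secret):

```shell
set -e
tmp=$(mktemp -d)
# Recreate a cert with the SANs the repro config requests for cluster.foo.example.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/tls.key" -out "$tmp/tls.crt" \
  -subj "/CN=cluster.foo.example" \
  -addext "subjectAltName=DNS:cluster.foo.example,DNS:*.cluster.foo.example" \
  2>/dev/null
# Print the SAN extension; the wildcard entry *.cluster.foo.example
# (which covers test.cluster.foo.example) should be listed.
openssl x509 -in "$tmp/tls.crt" -noout -ext subjectAltName
```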
