Can't use session-cookie-* annotations in place of nginx's proxy_cookie_flags directive #11163
Labels:
- kind/feature: Categorizes issue or PR as related to a new feature.
- lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
- needs-priority
- needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
- triage/needs-information: Indicates an issue needs more information in order to work on it.
What happened:

We're trying to allow cross-site requests from a local frontend app to a remote backend running on K8s, using ingress-nginx as the Ingress Controller. We've been using the `proxy_cookie_flags` directive within our on-prem setup to achieve this, by setting it to:

```
proxy_cookie_flags ~ secure samesite=none;
```

However, looking at #6368 it seemed like we'll have to use the Configuration Snippet annotation, which is not something we're excited about, as it can be a potential security problem that we'd like to avoid. For reference, enabling the Configuration Snippet annotation and using it to add that directive does work for our use case, so we know it's possible.
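For reference, the working (but undesired) Configuration Snippet variant looks roughly like this; the Ingress name, host, and backend service here are placeholders, not our actual setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-backend  # placeholder name
  annotations:
    # Requires snippet annotations to be enabled on the controller
    # (allowSnippetAnnotations: true in the Helm values).
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cookie_flags ~ secure samesite=none;
spec:
  ingressClassName: nginx
  rules:
  - host: backend.example.com  # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend  # placeholder service
            port:
              number: 80
```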
I tried to play around with some of the other cookie-related annotations, such as `session-cookie-samesite` and `session-cookie-secure`, but none of them produced the behavior we're after.
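Concretely, the annotation combination we experimented with looked something like this (a sketch; the cookie name is a placeholder, and `affinity: cookie` is needed for the session-cookie annotations to take effect):

```yaml
# Sketch of the annotation combination tried; cookie name is illustrative.
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: EXAMPLE_COOKIE
nginx.ingress.kubernetes.io/session-cookie-samesite: None
nginx.ingress.kubernetes.io/session-cookie-secure: "true"
```

Note that these annotations configure the controller-generated session-affinity cookie rather than cookies set by the backend, which may be why they can't replicate `proxy_cookie_flags`.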
What you expected to happen:

I was expecting that using the combination of the `session-cookie-name` annotation together with `session-cookie-samesite` and `session-cookie-secure` could lead me to the same results as using the Configuration Snippet and adding the `proxy_cookie_flags ~ secure samesite=none;` directive, but it did not.

If my expectation is incorrect and the `session-cookie-*` annotations can't be used to get the same result as `proxy_cookie_flags ~ secure samesite=none;`, is there any reason why the `proxy_cookie_flags` directive is not currently implemented as a separate annotation? And would you consider accepting a PR for it?

NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):
v1.9.6
Kubernetes version (use `kubectl version`):

Environment:
- Cloud provider or hardware configuration: AWS/EKS
- OS (e.g. from /etc/os-release): n/a
- Kernel (e.g. `uname -a`): n/a
- Install tools: Helm
Please mention how/where the cluster was created, e.g. kubeadm/kops/minikube/kind etc.
Basic cluster related info:
- `kubectl version`: EKS, v1.29

How was the ingress-nginx-controller installed:
- `helm ls -A | grep -i ingress`
- `helm -n <ingresscontrollernamespace> get values <helmreleasename>`
Output:

```
$ helm -n ingress get values ingress-nginx
USER-SUPPLIED VALUES:
commonLabels: {}
controller:
  addHeaders: {}
  admissionWebhooks:
    annotations: {}
    certManager:
      admissionCert:
        duration: ""
      enabled: false
      rootCert:
        duration: ""
    certificate: /usr/local/certificates/cert
    createSecretJob:
      name: create
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 65532
        seccompProfile:
          type: RuntimeDefault
    enabled: true
    existingPsp: ""
    extraEnvs: []
    failurePolicy: Fail
    key: /usr/local/certificates/key
    labels: {}
    name: admission
    namespaceSelector: {}
    objectSelector: {}
    patch:
      enabled: true
      image:
        digest: sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084
        image: ingress-nginx/kube-webhook-certgen
        pullPolicy: IfNotPresent
        registry: registry.k8s.io
        tag: v20231226-1a7112e06
      labels: {}
      networkPolicy:
        enabled: false
      nodeSelector:
        kubernetes.io/os: linux
      podAnnotations: {}
      priorityClassName: ""
      securityContext: {}
      tolerations: []
    patchWebhookJob:
      name: patch
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 65532
        seccompProfile:
          type: RuntimeDefault
    port: 8443
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
  affinity: {}
  allowSnippetAnnotations: false
  annotations: {}
  autoscaling:
    annotations: {}
    behavior: {}
    enabled: false
    maxReplicas: 11
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  autoscalingTemplate: []
  config:
    real-ip-header: proxy_protocol
    use-proxy-protocol: true
  configAnnotations: {}
  configMapNamespace: ""
  containerName: controller
  containerPort:
    http: 80
    https: 443
  containerSecurityContext: {}
  customTemplate:
    configMapKey: ""
    configMapName: ""
  dnsConfig: {}
  dnsPolicy: ClusterFirst
  electionID: ""
  enableAnnotationValidations: false
  enableMimalloc: true
  enableTopologyAwareRouting: false
  existingPsp: ""
  extraArgs: {}
  extraContainers: []
  extraEnvs: []
  extraInitContainers: []
  extraModules: []
  extraVolumeMounts: []
  extraVolumes: []
  healthCheckHost: ""
  healthCheckPath: /healthz
  hostAliases: []
  hostNetwork: false
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443
  hostname: {}
  image:
    allowPrivilegeEscalation: false
    chroot: false
    digest: sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
    digestChroot: sha256:7eb46ff733429e0e46892903c7394aff149ac6d284d92b3946f3baf7ff26a096
    image: ingress-nginx/controller
    pullPolicy: IfNotPresent
    readOnlyRootFilesystem: false
    registry: registry.k8s.io
    runAsNonRoot: true
    runAsUser: 101
    seccompProfile:
      type: RuntimeDefault
    tag: v1.9.6
  ingressClass: nginx
  ingressClassByName: false
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    enabled: true
    name: nginx
    parameters: {}
  keda:
    apiVersion: keda.sh/v1alpha1
    behavior: {}
    cooldownPeriod: 300
    enabled: false
    maxReplicas: 11
    minReplicas: 1
    pollingInterval: 30
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
    triggers: []
  kind: Deployment
  labels: {}
  lifecycle:
    preStop:
      exec:
        command:
        - /wait-shutdown
  livenessProbe:
    failureThreshold: 5
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  maxmindLicenseKey: ""
  metrics:
    enabled: true
    port: 10254
    portName: metrics
    prometheusRule:
      additionalLabels: {}
      enabled: false
      rules: []
    service:
      annotations: {}
      externalIPs: []
      labels: {}
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
    serviceMonitor:
      additionalLabels:
        release: prometheus-stack
      annotations: {}
      enabled: true
      metricRelabelings: []
      namespace: ""
      namespaceSelector: {}
      relabelings: []
      scrapeInterval: 30s
      targetLabels: []
  minAvailable: 1
  minReadySeconds: 0
  name: controller
  networkPolicy:
    enabled: false
  nodeSelector:
    kubernetes.io/os: linux
  opentelemetry:
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 65532
      seccompProfile:
        type: RuntimeDefault
    enabled: false
    image:
      digest: sha256:13bee3f5223883d3ca62fee7309ad02d22ec00ff0d7033e3e9aca7a9f60fd472
      distroless: true
      image: ingress-nginx/opentelemetry
      registry: registry.k8s.io
      tag: v20230721-3e2062ee5
    name: opentelemetry
    resources: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  priorityClassName: ""
  proxySetHeaders: {}
  publishService:
    enabled: true
    pathOverride: ""
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  replicaCount: 1
  reportNodeInternalIp: false
  resources:
    requests:
      cpu: 100m
      memory: 90Mi
  scope:
    enabled: false
    namespace: ""
    namespaceSelector: ""
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    appProtocol: true
    clusterIP: ""
    enableHttp: true
    enableHttps: true
    enabled: true
    external:
      enabled: true
    externalIPs: []
    externalTrafficPolicy: ""
    internal:
      annotations: {}
      appProtocol: true
      clusterIP: ""
      enabled: false
      externalIPs: []
      externalTrafficPolicy: ""
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      loadBalancerClass: ""
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      nodePorts:
        http: ""
        https: ""
        tcp: {}
        udp: {}
      ports: {}
      sessionAffinity: ""
      targetPorts: {}
      type: ""
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    labels: {}
    loadBalancerClass: ""
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    ports:
      http: 80
      https: 443
    sessionAffinity: ""
    targetPorts:
      http: http
      https: https
    type: LoadBalancer
  shareProcessNamespace: false
  sysctls: {}
  tcp:
    annotations: {}
    configMapNamespace: ""
  terminationGracePeriodSeconds: 300
  tolerations: []
  topologySpreadConstraints: []
  udp:
    annotations: {}
    configMapNamespace: ""
  updateStrategy: {}
  watchIngressWithoutClass: false
defaultBackend:
  affinity: {}
  autoscaling:
    annotations: {}
    enabled: false
    maxReplicas: 2
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  containerSecurityContext: {}
  enabled: false
  existingPsp: ""
  extraArgs: {}
  extraConfigMaps: []
  extraEnvs: []
  extraVolumeMounts: []
  extraVolumes: []
  image:
    allowPrivilegeEscalation: false
    image: defaultbackend-amd64
    pullPolicy: IfNotPresent
    readOnlyRootFilesystem: true
    registry: registry.k8s.io
    runAsNonRoot: true
    runAsUser: 65534
    seccompProfile:
      type: RuntimeDefault
    tag: "1.5"
  labels: {}
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  minAvailable: 1
  minReadySeconds: 0
  name: defaultbackend
  networkPolicy:
    enabled: false
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  port: 8080
  priorityClassName: ""
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  resources: {}
  service:
    annotations: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  serviceAccount:
    automountServiceAccountToken: true
    create: true
    name: ""
  tolerations: []
  updateStrategy: {}
dhParam: ""
imagePullSecrets: []
namespaceOverride: ""
podSecurityPolicy:
  enabled: false
portNamePrefix: ""
rbac:
  create: true
  scope: false
revisionHistoryLimit: 10
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  create: true
  name: ""
tcp: {}
udp: {}
```

Current State of the controller: N/A
- `kubectl describe ingressclasses`
- `kubectl -n <ingresscontrollernamespace> get all -A -o wide`
- `kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>`
- `kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`
Current state of ingress object, if applicable: N/A
- `kubectl -n <appnamespace> get all,ing -o wide`
- `kubectl -n <appnamespace> describe ing <ingressname>`
Others: N/A
- `kubectl describe ...` of any custom configmap(s) created and in use

How to reproduce this issue: N/A