    ingress-nginx

    ingress-nginx Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer

    To use, add the kubernetes.io/ingress.class: nginx annotation to your Ingress resources.
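    As a minimal sketch, an Ingress resource using this annotation might look like the following (the resource, host, and backend service names are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress        # placeholder name
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - host: example.local      # placeholder host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: example-service   # placeholder backend service
                    port:
                      number: 80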

    This chart bootstraps an ingress-nginx deployment on a Kubernetes cluster using the Helm package manager.

    Prerequisites

    • Kubernetes v1.19+

    Get Repo Info

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update

    Install Chart

    Important: only Helm 3 is supported

    helm install [RELEASE_NAME] ingress-nginx/ingress-nginx

    The command deploys ingress-nginx on the Kubernetes cluster in the default configuration.

    See configuration below.

    See helm install for command documentation.

    Uninstall Chart

    helm uninstall [RELEASE_NAME]

    This removes all the Kubernetes components associated with the chart and deletes the release.

    See helm uninstall for command documentation.

    Upgrading Chart

    helm upgrade [RELEASE_NAME] [CHART] --install

    See helm upgrade for command documentation.

    Upgrading With Zero Downtime in Production

    By default, the ingress-nginx controller has service interruptions whenever its pods are restarted or redeployed. To avoid that, see the excellent blog post by Lindsay Landry from Codecademy: Kubernetes: Nginx and Zero Downtime in Production.

    Migrating from stable/nginx-ingress

    There are two main ways to migrate a release from the stable/nginx-ingress chart to the ingress-nginx/ingress-nginx chart:

    1. For Nginx Ingress controllers used for non-critical services, the easiest method is to uninstall the old release and install the new one
    2. For critical services in production that require zero-downtime, you will want to:
      1. Install a second Ingress controller
      2. Redirect your DNS traffic from the old controller to the new controller
      3. Log traffic from both controllers during this changeover
      4. Uninstall the old controller once traffic has fully drained from it
      5. For details on all of these steps see Upgrading With Zero Downtime in Production

    Note that some configurations differ, or have been upgraded, between the two charts; these are described by Rimas Mocevicius from JFrog in the "Upgrading to ingress-nginx Helm chart" section of Migrating from Helm chart nginx-ingress to ingress-nginx. As the ingress-nginx/ingress-nginx chart continues to evolve, you will want to check the current differences by running helm configuration commands (such as helm show values) on both charts.

    Configuration

    See Customizing the Chart Before Installing. To see all configurable options with detailed comments, visit the chart's values.yaml, or run this configuration command:

    helm show values ingress-nginx/ingress-nginx

    PodDisruptionBudget

    Note that the PodDisruptionBudget resource will only be defined if the replicaCount is greater than one; otherwise it would make it impossible to evacuate a node. See gh issue #7127 for more info.
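    As a minimal sketch, a values override such as the following (the replica count is chosen purely for illustration) results in the PodDisruptionBudget being rendered:

    controller:
      replicaCount: 2   # any value greater than one enables the PodDisruptionBudget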

    Prometheus Metrics

    The Nginx ingress controller can export Prometheus metrics by setting controller.metrics.enabled to true.

    You can add Prometheus annotations to the metrics service using controller.metrics.service.annotations. Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation by setting controller.metrics.serviceMonitor.enabled to true and setting controller.metrics.serviceMonitor.additionalLabels.release="prometheus". The "release=prometheus" label should match the label configured on the existing Prometheus ServiceMonitor (see kubectl get servicemonitor prometheus-kube-prom-prometheus -oyaml -n prometheus).
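    As a sketch, the options above can be combined in values like the following (the scrape annotations and the release label are examples only and must match your Prometheus setup):

    controller:
      metrics:
        enabled: true
        service:
          annotations:
            prometheus.io/scrape: "true"   # example scrape annotations
            prometheus.io/port: "10254"
        serviceMonitor:
          enabled: true
          additionalLabels:
            release: "prometheus"          # must match the label your Prometheus setup expects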

    ingress-nginx nginx_status page/stats server

    Previous versions of this chart had a controller.stats.* configuration block, which is now obsolete due to the following changes in nginx ingress controller:

    • In 0.16.1, the vts (virtual host traffic status) dashboard was removed
    • In 0.23.0, the status page at port 18080 was replaced by a unix socket webserver that is only available on localhost. You can use curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status inside the controller container to access it locally, or use the snippet from the nginx-ingress changelog to re-enable the HTTP server

    ExternalDNS Service Configuration

    Add an ExternalDNS annotation to the LoadBalancer service:

    controller:
      service:
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.

    AWS L7 ELB with SSL Termination

    Annotate the controller as shown in the nginx-ingress l7 patch:

    controller:
      service:
        targetPorts:
          http: http
          https: http
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'

    AWS route53-mapper

    To configure the LoadBalancer service with the route53-mapper addon, add the domainName annotation and dns label:

    controller:
      service:
        labels:
          dns: "route53"
        annotations:
          domainName: "kubernetes-example.com"

    Additional Internal Load Balancer

    This setup is useful when you need both external and internal load balancers but don't want to have multiple ingress controllers and multiple ingress objects per application.

    By default, the ingress object will point to the external load balancer address, but if correctly configured, you can make use of the internal one if the URL you are looking up resolves to the internal load balancer's URL.

    You'll need to set both of the following values:

    • controller.service.internal.enabled
    • controller.service.internal.annotations

    If either one is missing, the internal load balancer will not be deployed. For example, if controller.service.internal.enabled=true is set but no annotations are provided, no action will be taken.

    controller.service.internal.annotations varies with the cloud service you're using.

    Example for AWS:

    controller:
      service:
        internal:
          enabled: true
          annotations:
            # Create internal ELB
            service.beta.kubernetes.io/aws-load-balancer-internal: "true"
            # Any other annotation can be declared here.

    Example for GCE:

    controller:
      service:
        internal:
          enabled: true
          annotations:
            # Create internal LB. More informations: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
            # For GKE versions 1.17 and later
            networking.gke.io/load-balancer-type: "Internal"
            # For earlier versions
            # cloud.google.com/load-balancer-type: "Internal"
            
            # Any other annotation can be declared here. 

    Example for Azure:

    controller:
      service:
        internal:
          enabled: true
          annotations:
            # Create internal LB
            service.beta.kubernetes.io/azure-load-balancer-internal: "true"
            # Any other annotation can be declared here.

    Example for Oracle Cloud Infrastructure:

    controller:
      service:
        internal:
          enabled: true
          annotations:
            # Create internal LB
            service.beta.kubernetes.io/oci-load-balancer-internal: "true"
            # Any other annotation can be declared here.

    A use case for this scenario is a split-view DNS setup where the public zone CNAME records point to the external balancer URL while the private zone CNAME records point to the internal balancer URL. This way, you only need one Kubernetes Ingress object.

    Optionally you can set controller.service.loadBalancerIP if you need a static IP for the resulting LoadBalancer.
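    A minimal sketch (the IP shown is a documentation placeholder; it must be an address available from your cloud provider):

    controller:
      service:
        loadBalancerIP: "203.0.113.10"   # placeholder static IP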

    Ingress Admission Webhooks

    With nginx-ingress-controller version 0.25+, the controller pod exposes an endpoint that integrates with the ValidatingWebhookConfiguration Kubernetes feature to prevent bad Ingress resources from being added to the cluster. This feature is enabled by default since 0.31.0.

    The admission webhook in nginx-ingress-controller 0.25.* works only with Kubernetes 1.14+; version 0.26 fixes this issue.
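    If you need to toggle the webhook explicitly, a values sketch like the following should work, assuming the chart's controller.admissionWebhooks.enabled key:

    controller:
      admissionWebhooks:
        enabled: true   # set to false to skip creating the ValidatingWebhookConfiguration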

    Helm Error When Upgrading: spec.clusterIP: Invalid value: ""

    If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:

    Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable

    Details of how and why are in this issue, but to resolve the problem you can set xxxx.service.omitClusterIP to true, where xxxx is the service referenced in the error.
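    For example, if the error refers to the controller service, a values override like the following (using the omitClusterIP key mentioned above) would apply:

    controller:
      service:
        omitClusterIP: true   # prevents the chart from rendering a clusterIP for this service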

    As of version 1.26.0 of this chart, simply not providing any clusterIP value means that invalid: spec.clusterIP: Invalid value: "": field is immutable will no longer occur, since clusterIP: "" will not be rendered.