Help Center/ Cloud Container Engine/ Best Practices/ Networking/ Deploying NGINX Ingress Controllers Using Charts/ Deploying Multiple NGINX Ingress Controllers in Custom Mode
Updated on 2026-03-10 GMT+08:00

Deploying Multiple NGINX Ingress Controllers in Custom Mode

Background

NGINX Ingress Controller is a popular open-source ingress controller in the industry and is widely used. Large-scale clusters require multiple ingress controllers to distinguish different traffic. For example, if some services in a cluster need to be accessed through an ingress with EIP bound, but some internal services cannot be accessed through the Internet and can only be accessed by other services in the same VPC, you can deploy two separate NGINX Ingress Controllers and associate them with two different load balancers.

Figure 1 Application scenario of multiple Nginx ingresses

Solution

You can use either of the following solutions to deploy multiple NGINX ingress controllers in the same cluster.

  • (Recommended) Install the NGINX Ingress Controller add-on and deploy multiple instances in the same cluster with just a few clicks. For details, see Installing Multiple NGINX Ingress Controllers.

    For clusters v1.23, the add-on of 2.2.52 or later must be installed. For clusters v1.23 or later, the add-on of 2.5.4 or later must be installed.

  • Install the open-source Helm package. The parameters to be configured in this solution are complex. You need to configure the ingress-class parameter (default value: nginx) to declare the listening ranges of different NGINX Ingress Controllers. In this way, when creating an ingress, you can select different NGINX Ingress Controllers to distinguish traffic.

Prerequisites

  • You need to assign an EIP to the node for pulling images from the Internet during chart installation.

Notes and Constraints

  • If multiple NGINX Ingress Controllers are deployed, each controller needs to interconnect with a load balancer. Ensure that the load balancer has at least two listeners and ports 80 and 443 are not occupied by the listeners. If dedicated load balancers are used, specify the network type.
  • When the NGINX Ingress Controller template and image provided by the community are used, CCE does not provide additional maintenance for service loss caused by community software defects. Exercise caution when serving commercial purposes.

Deploying Multiple NGINX Ingress Controllers

You can perform the following operations to deploy multiple NGINX Ingress Controllers in a cluster.

  1. Obtain a chart.

    Go to the chart page, select a proper version, and download the Helm chart in .tgz format. This section uses the chart of version 4.4.2 as an example. This chart applies to CCE clusters of v1.21 or later. The configuration items in the chart may vary according to the version. The configuration in this section takes effect only for the chart of 4.4.2 version.

  2. Upload the chart.

    1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose App Templates and click Upload Chart in the upper right corner.
    2. Click Select File, select the chart to be uploaded, and click Upload.

  3. Customize the value.yaml file.

    You can create a value.yaml configuration file on the local PC to configure workload installation parameters. During workload installation, you only need to import this configuration file for customized installation. Other unspecified parameters will use the default settings.

    The configuration content is as follows:
    controller:
      image:
        repository: registry.k8s.io/ingress-nginx/controller
        registry: ""
        image: ""
        tag: "v1.5.1"  # Controller version
        digest: ""
      ingressClassResource:
        name: ccedemo         # The name of each NGINX Ingress Controller in the same cluster must be unique and cannot be nginx or cce.
        controllerValue: "k8s.io/ingress-nginx-demo"  # The listening identifier of each NGINX Ingress Controller in the same cluster must be unique and cannot be set to k8s.io/ingress-nginx.
      ingressClass: ccedemo   # The name of each NGINX Ingress Controller in the same cluster must be unique and cannot be nginx or cce.
      service: 
        annotations: 
          kubernetes.io/elb.id: 5083f225-9bf8-48fa-9c8b-67bd9693c4c0     # Load balancer ID
          kubernetes.io/elb.class: performance  # This annotation is required only for dedicated load balancers.
      config:
        keep-alive-requests: 100
      extraVolumeMounts: # Mount the /etc/localtime file on the node to synchronize the time zone.
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
      extraVolumes:
        - name: localtime
          type: Hostpath
          hostPath:
            path: /etc/localtime 
      admissionWebhooks: # Disable webhook authentication.
        enabled: false
        patch:
          enabled: false
      resources: # Set the controller's resource limit. You can use other values.
        requests:
          cpu: 200m
          memory: 200Mi
    defaultBackend: # Set defaultBackend.
      enabled: true
      image: 
        repository: registry.k8s.io/defaultbackend-amd64
        registry: ""
        image: ""
        tag: "1.5"
        digest: ""

    For details about the parameters, see Table 1.

  4. Create a release.

    1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose App Templates.
    2. In the list of uploaded charts, click Install.
    3. Configure Release Name, Namespace, and Select Version.
    4. In the Configuration File area, click Add, select the YAML file created locally, and click Install.
    5. On the Releases tab, view the status of the release.

Verification

Deploy a workload and configure the newly deployed NGINX Ingress Controller to provide network access for the workload.

  1. Create an Nginx workload.

    1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Workloads. In the right pane, click Create from YAML in the upper right corner.
    2. Enter the following content and click Submit.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
        strategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - image: nginx    # If an open-source image is used, enter the image name. If you use an image in My Images, obtain the image address from SWR.
              imagePullPolicy: Always
              name: nginx
            imagePullSecrets:
            - name: default-secret
      ---
      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: nginx
        name: nginx
      spec:
        ports:
        - name: service0
          port: 80                 # Port for accessing a Service.
          protocol: TCP           # Protocol used for accessing a Service. The value can be TCP or UDP.
          targetPort: 80           # Port used by the service to access the target container. In this example, the Nginx image uses port 80 by default.
        selector:                   # Label selector. A Service selects a pod based on the label and forwards the requests for accessing the Service to the pod.
          app: nginx
        type: ClusterIP            # Type of a Service. ClusterIP indicates that a Service is only reachable from within the cluster.

  2. Create an ingress and use the newly deployed NGINX Ingress Controller to provide network access.

    1. In the navigation pane, choose Services & Ingresses. Click the Ingresses tab and click Create from YAML in the upper right corner.

      When interconnecting with the NGINX Ingress Controller that is not deployed using the add-on, you can create an ingress only through YAML.

    2. Enter the following content and click Submit.
      For clusters of v1.23 or later:
      apiVersion: networking.k8s.io/v1 
      kind: Ingress 
      metadata: 
        name: ingress-test
        namespace: default 
      spec: 
        ingressClassName: ccedemo  # Enter the ingressClass of the newly created NGINX Ingress Controller.
        rules: 
        - host: foo.bar.com
          http: 
            paths: 
            - path: / 
              pathType: ImplementationSpecific   # The matching depends on IngressClass.
              backend: 
                service: 
                  name: nginx    # Replace it with the name of the destination Service.
                  port: 
                    number: 80   # Replace it with the port of the destination Service.
              property: 
                ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH  

      For clusters earlier than v1.23:

      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress 
      metadata: 
        name: tomcat-t1 
        namespace: test 
        annotations: 
          kubernetes.io/ingress.class: ccedemo  # Enter the ingressClass of the newly created NGINX Ingress Controller.
      spec: 
        rules: 
          - host: foo.bar.com
            http: 
              paths: 
                - path: / 
                  pathType: ImplementationSpecific 
                  backend: 
                    serviceName: nginx  # Replace it with the name of the destination Service.
                    servicePort: 80     # Replace it with the port of the destination Service.
                  property: 
                    ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH

  3. Log in to the node and access the application using the NGINX Ingress Controller add-on of the cluster and the newly deployed NGINX Ingress Controller, respectively.

    • Use the newly deployed NGINX Ingress Controller to access the application (the Nginx page is expected to be displayed). 192.168.114.60 is the load balancer address of the newly deployed NGINX Ingress Controller.
      curl -H "Host: foo.bar.com" http://192.168.114.60

    • Use the NGINX Ingress Controller add-on to access the application (404 is expected to be returned). 192.168.9.226 is the load balancer address of the NGINX Ingress Controller add-on.
      curl -H "Host: foo.bar.com" http://192.168.9.226

Parameters

Table 1 NGINX Ingress Controller parameters

Parameter

Description

controller.image.repository

ingress-nginx image address. It is recommended that this parameter be set to the same as the NGINX Ingress Controller add-on image provided by CCE. You can also customize the parameter.

  • NGINX Ingress Controller add-on image: You can view its image address in the YAML file of the installed add-on.
  • Custom: The custom path must ensure that the image can be pulled.

controller.image.registry

Domain name of an image repository. This parameter must be set together with controller.image.image.

If controller.image.repository has been set, you do not need to set this parameter. You are advised to leave controller.image.registry and controller.image.image empty.

controller.image.image

Image name. This parameter must be set together with controller.image.registry.

If controller.image.repository has been set, you do not need to set this parameter. You are advised to leave controller.image.registry and controller.image.image empty.

controller.image.tag

ingress-nginx image version. It is recommended that this parameter be set to the same as the NGINX Ingress Controller add-on image provided by CCE. You can also customize the image.

The image version of the NGINX Ingress Controller add-on can be viewed in the YAML file of the installed add-on and needs to be replaced based on the add-on version.

controller.ingressClass

Name of the IngressClass of the NGINX Ingress Controller.

NOTE:

The name of each NGINX Ingress Controller in the same cluster must be unique and cannot be set to nginx or cce. nginx is the default listening identifier of NGINX Ingress Controller in a cluster, and cce is the configuration of LoadBalancer Ingress Controller.

Example: ccedemo

controller.image.digest

You are advised to leave this parameter empty. If this parameter is specified, pulling the NGINX Ingress Controller add-on image provided by CCE may fail.

controller.ingressClassResource.name

The parameter value must be the same as that of ingressClass.

Example: ccedemo

controller.ingressClassResource.controllerValue

The listening identifier of each NGINX Ingress Controller in the same cluster must be unique and cannot be k8s.io/ingress-nginx, which is the default listening identifier of NGINX Ingress Controller.

Example: k8s.io/ingress-nginx-demo

controller.config

Nginx configuration parameter. For details, see ConfigMaps. Parameter settings out of the range do not take effect.

You are advised to add the following configurations:

"keep-alive-requests": "100"

controller.extraInitContainers

init container, which is executed before the main container is started and can be used to initialize pod parameters.

For details about parameter configuration examples, see Parameter Optimization in High-Concurrency Scenarios.

controller.admissionWebhooks.enabled

Whether to enable admission webhooks to verify the validity of ingresses. This prevents an NGINX Ingress Controller from continuously reloading resources due to incorrect configurations, which may cause service interruption.

Set this parameter to false. To enable this function, see the example in Admission Webhook Configuration.

controller.admissionWebhooks.patch.enabled

Whether to enable admission webhooks. Set this parameter to false.

controller.service.annotations

A key-value pair. The load balancer ID needs to be added, as shown in the following:

kubernetes.io/elb.id: 5083f225-9bf8-48fa-9c8b-67bd9693c4c0

For dedicated load balancers, add elb.class as follows:

kubernetes.io/elb.class: performance

controller.resources.requests.cpu

The quantity of CPU resources requested by the Nginx controller. You can also configure it as required.

controller.resources.requests.memory

The quantity of memory resources requested by the Nginx controller. You can also configure it as required.

defaultBackend.image.repository

The default-backend image address. It is advised to set it to the same as the NGINX Ingress Controller add-on image provided by CCE. You can also configure it as required.

  • NGINX Ingress Controller add-on image: You can view its image address in the YAML file of the installed add-on.
  • Custom: If you use a custom path, ensure that images can be pulled from it.

defaultBackend.image.tag

The default-backend image version. It is advised to set it to the same as the NGINX Ingress Controller add-on image provided by CCE. You can also configure it as required.

For details about additional parameters, see ingress-nginx.