Updated on 2022-09-24 GMT+08:00

nginx-ingress

Introduction

Kubernetes uses kube-proxy to expose Services and provide load balancing, which works at the transport layer. Internet-facing applications, however, handle large volumes of traffic and require finer-grained forwarding that is precisely and flexibly controlled by policies and load balancers to deliver higher performance.

This is where ingresses come in. Ingresses provide application-layer forwarding functions, such as virtual hosts, load balancing, SSL proxying, and HTTP routing, so that Services can be accessed directly from outside a cluster.
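For example, the following is a minimal sketch of an ingress that routes HTTP requests for one domain name and path to a backing Service at the application layer. The host, Service name, and port are placeholders, and the sketch assumes a cluster that supports the networking.k8s.io/v1 Ingress API:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress                    # hypothetical name
      annotations:
        kubernetes.io/ingress.class: "nginx"   # required so that nginx-ingress handles this ingress
    spec:
      rules:
      - host: www.example.com                  # placeholder domain name
        http:
          paths:
          - path: /api                         # requests under /api are forwarded to the Service below
            pathType: Prefix
            backend:
              service:
                name: my-service               # placeholder Service name
                port:
                  number: 80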

Kubernetes officially provides an Nginx-based ingress controller. The nginx-ingress add-on uses ConfigMaps to store Nginx configurations. The Nginx ingress controller generates the Nginx configuration for an ingress and writes it to the Nginx pod through the Kubernetes API. The configuration can then be modified and applied by reloading Nginx.
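As a rough illustration of the ConfigMap-based configuration, the sketch below sets the global keys that are listed later in Table 1. The ConfigMap name and namespace are assumptions; the add-on manages its own ConfigMap, so treat this only as a sketch of the format:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration          # assumed name; the add-on may use a different ConfigMap name
      namespace: kube-system             # assumed namespace
    data:
      worker-processes: "auto"           # number of Nginx worker processes
      max-worker-connections: "16384"    # maximum concurrent connections per worker process
      keep-alive: "75"                   # keep-alive timeout, in seconds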

The nginx-ingress add-on in CCE is implemented using the open-source community chart and image. CCE does not maintain the add-on. Therefore, it is not recommended that the nginx-ingress add-on be used commercially.

You can visit the open source community for more information.

  • When installing the add-on, you can add custom settings by defining the Nginx configuration. These settings take effect globally: they are written into the nginx.conf file and affect all ingresses managed by the add-on. You can search for the supported parameters in the ConfigMap. Parameters that are not listed among the ConfigMap options do not take effect.
  • Do not manually modify or delete the load balancer and listener that are automatically created by CCE. Otherwise, the workload will be abnormal. If you have modified or deleted them by mistake, you need to uninstall the nginx-ingress add-on and re-install it.

How nginx-ingress Works

nginx-ingress consists of the ingress object, the ingress controller, and Nginx. The ingress controller assembles ingresses into the Nginx configuration file (nginx.conf) and reloads Nginx to make the changes take effect. When it detects that the pods backing a Service have changed, it dynamically updates the upstream server group configuration of Nginx; in that case, the Nginx process does not need to be reloaded. Figure 1 shows how nginx-ingress works.

  • An ingress is a group of access rules that forward requests to specified Services based on domain names or URLs. Ingresses are stored in etcd, the cluster's key-value store, and are added, deleted, modified, and queried through the Kubernetes API.
  • The ingress controller monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time and automatically performs operations on Nginx.
  • Nginx implements load balancing and access control at the application layer.
Figure 1 Working principles of nginx-ingress
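To illustrate the TLS case mentioned above, the sketch below shows an ingress that references a secret holding a certificate and key; the host, secret, and Service names are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: tls-ingress                        # hypothetical name
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      tls:
      - hosts:
        - www.example.com                      # placeholder domain name
        secretName: example-tls                # secret of type kubernetes.io/tls holding the certificate and key
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service               # placeholder Service name
                port:
                  number: 80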

Constraints

  • The annotation kubernetes.io/ingress.class: "nginx" must be added to any ingress created by calling the API, for example:
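    A manifest fragment for such an ingress would look like the following (the ingress name is a placeholder):

      metadata:
        name: my-ingress                         # placeholder name
        annotations:
          kubernetes.io/ingress.class: "nginx"   # identifies the ingress as one handled by nginx-ingress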

Installing the Add-on

  1. Log in to the CCE console. In the navigation pane, choose Add-ons. On the Add-on Marketplace tab page, click Install Add-on under nginx-ingress.
  2. On the Install Add-on page, select the cluster and the add-on version, and click Next: Configuration.
  3. In the Configuration step, set the parameters listed in Table 1. Parameters marked with an asterisk (*) are mandatory.

    Table 1 nginx-ingress add-on parameters

    • Add-on Specifications: Select add-on specifications based on service requirements. You can customize resource specifications.

    • Instances: Number of pods that will be created to match the selected add-on specifications.

    • Container: CPU and memory quotas of the container allowed for the selected add-on specifications.
      NOTE:
      • Ensure that there are sufficient nodes in the cluster. If there are not, the add-on instance cannot be scheduled and you will need to reinstall the add-on.
      • The request value must be less than or equal to the limit value. Otherwise, the creation fails.
      • You are advised to set the request value to be the same as the limit value. If node resources are insufficient, containers whose request value is less than the limit value are evicted first.
      • For details about the performance results of different configurations, see the Nginx performance test report.

    • ConfigMap Config: The configuration takes effect globally. It is written into the nginx.conf file and affects all ingresses managed by the add-on. You can search for the supported parameters in the ConfigMap. Parameters that are not listed among the ConfigMap options do not take effect.
      • worker-processes: number of worker processes Nginx uses to serve external requests. The default value is auto.
      • max-worker-connections: maximum number of concurrent connections of a single worker process. The default value is 16384.
      • keep-alive: timeout interval of a keep-alive connection, in seconds. The default value is 75.

    • Custom Header: By default, Nginx filters out custom headers. This parameter allows you to redefine or add request headers to be sent to backend servers (a sketch of the open-source mechanism is provided after the installation steps).

    • Default backend enabled: The nginx-ingress add-on provides a 404 backend service by default. To customize the 404 backend service, enter a value in the format namespace/serviceName.

    • Elastic Load Balancer: You can select an existing public or private network load balancer. Traffic from the public or private network is then forwarded to the Service backing the add-on.
      Once this parameter is set, do not modify the configuration on the ELB console. Otherwise, the Service will be abnormal. If you have modified the configuration, uninstall the add-on and reinstall it.
      NOTE:
      • Ensure that the load balancer you select or create is in the same VPC as the cluster and routes requests over the Internet.
      • The load balancer must be able to provide at least two listeners, and ports 80 and 443 must not be occupied by existing listeners.

  4. After the configuration is complete, click Install. After the add-on is installed, click Back to Add-on List.
  5. On the Add-on Instance tab page, select the corresponding cluster. If the add-on is displayed in the Running state, it has been installed successfully in the current cluster.
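For the Custom Header parameter in Table 1, the open-source controller implements custom request headers through a separate ConfigMap referenced by the proxy-set-headers key in its main ConfigMap. The following is a hedged sketch of that community mechanism; the ConfigMap names, namespace, and header are assumptions, and on CCE the console parameter may manage this for you:

    # ConfigMap holding the headers to add or redefine (assumed name and namespace)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-headers
      namespace: kube-system
    data:
      X-Request-Source: "nginx-ingress"        # example header name and value
    ---
    # The controller's main ConfigMap points proxy-set-headers at the ConfigMap above
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration                # assumed name of the controller's ConfigMap
      namespace: kube-system
    data:
      proxy-set-headers: "kube-system/custom-headers"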

Upgrading the Add-on

  1. Log in to the CCE console. In the navigation pane, choose Add-ons. On the Add-on Instance tab page, click Upgrade under nginx-ingress.

    • If the Upgrade button is not available, the current add-on is already up-to-date and no upgrade is required.
    • During the upgrade, the nginx-ingress add-on of the original version on cluster nodes will be discarded, and the add-on of the target version will be installed.
    • After the upgrade, nginx-ingress will be restarted. Related services may be interrupted. Therefore, you are advised to upgrade the add-on during off-peak hours.

  2. On the Basic Information page, select the add-on version and click Next.
  3. In the Configuration step, set the parameters listed in Table 1. Parameters marked with an asterisk (*) are mandatory.
  4. After setting the parameters, click Upgrade to upgrade the add-on.

Uninstalling the Add-on

  1. Log in to the CCE console. In the navigation pane, choose Add-ons. On the Add-on Instance tab page, click Uninstall under nginx-ingress.
  2. In the dialog box displayed, click Yes to uninstall the add-on.