Updated on 2024-01-04 GMT+08:00

Overview

Why We Need Ingresses

A Service is generally used to forward access requests over TCP and UDP and to provide layer-4 load balancing for clusters. However, in real-world scenarios, a Service cannot meet the forwarding requirements when there is a large number of HTTP/HTTPS requests at the application layer. Therefore, Kubernetes provides an HTTP-based access mode: the ingress.

An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in Figure 1, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic.

Figure 1 Ingress diagram

The following describes the concepts related to ingresses:

  • Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs.
  • Ingress Controller: an executor for request forwarding. It monitors changes to resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses the rules defined in ingresses, and forwards requests to the corresponding backend Services.
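For example, a minimal Ingress object that forwards requests based on a domain name and URL paths might look like the following sketch (the host, Service names, and ports are placeholders, not values from this document):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # placeholder name
spec:
  rules:
  - host: www.example.com        # requests for this domain name...
    http:
      paths:
      - path: /api               # ...with this URL prefix...
        pathType: Prefix
        backend:
          service:
            name: api-service    # ...are forwarded to this Service
            port:
              number: 8080
      - path: /                  # all other paths under the same domain
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

With these rules, traffic to www.example.com/api is distributed to api-service, while the remaining traffic for the domain goes to web-service.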

Ingress Controllers provided by different vendors are implemented in different ways. Based on the type of load balancer used, Ingress Controllers are classified into ELB Ingress Controller and Nginx Ingress Controller, both of which are supported in CCE. ELB Ingress Controller forwards traffic through ELB. Nginx Ingress Controller uses the templates and images maintained by the Kubernetes community to forward traffic through the Nginx component.

Ingress Feature Comparison

Table 1 Comparison between ingress features

| Feature | ELB Ingress Controller | Nginx Ingress Controller |
| --- | --- | --- |
| O&M | O&M-free | Self-installation, upgrade, and maintenance required |
| Performance | One ingress supports only one load balancer.<br>Enterprise-grade load balancers provide high performance and high availability. Service forwarding is not affected during upgrades or failures. | Multiple ingresses can share one load balancer.<br>Performance varies depending on the resource configuration of the pods.<br>Dynamic loading is supported:<br>• Changes other than backend endpoint changes require a process reload, which interrupts persistent connections.<br>• Lua supports hot updates of endpoint changes.<br>• A Lua modification requires a process reload. |
| Component deployment | Deployed on the master node | Deployed on worker nodes, which incurs O&M costs for the Nginx component |
| Route redirection | Not supported | Supported |
| SSL configuration | Supported | Supported |
| Using ingress as a proxy for backend services | Supported | Supported through the backend-protocol: HTTPS annotation |

The ELB ingress is essentially different from the open source Nginx ingress. Therefore, their supported Service types are different. For details, see Services Supported by Ingresses.

ELB Ingress Controller is deployed on a master node, and all forwarding policies and behaviors are configured on the ELB side. In non-passthrough networking scenarios, a load balancer outside the cluster can reach cluster nodes only through their VPC IP addresses, so the ELB ingress supports only NodePort Services. In the passthrough networking scenario (a CCE Turbo cluster with a dedicated load balancer), ELB can forward traffic directly to pods in the cluster, and in this case the ingress can interconnect only with ClusterIP Services.

Nginx Ingress Controller runs inside the cluster and is exposed as a Service through NodePort. Traffic is forwarded to other Services in the cluster through nginx-ingress, so both the forwarding behavior and the forwarding targets are inside the cluster. Therefore, both ClusterIP and NodePort Services are supported.

In conclusion, ELB Ingress uses enterprise-grade load balancers to forward traffic and delivers high performance and stability. Nginx Ingress Controller is deployed on cluster nodes, which consumes cluster resources but has better configurability.

Working Principle of ELB Ingress Controller

ELB Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs.

ELB Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). Figure 2 shows the working principle of ELB Ingress Controller.

  1. A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port.
  2. When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule.
  3. When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service.
Figure 2 Working principle of ELB Ingress Controller
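As a sketch of step 1, an ELB ingress is typically created by annotating a standard Ingress object with the target load balancer. The annotation keys below (kubernetes.io/elb.id, kubernetes.io/elb.port) and the cce ingress class follow common CCE usage but are assumptions here; check the CCE ingress documentation for the exact keys supported by your cluster version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elb-ingress-example
  annotations:
    kubernetes.io/elb.id: "<existing-load-balancer-id>"  # assumed annotation: ID of the ELB in the same VPC
    kubernetes.io/elb.port: "80"                         # assumed annotation: listener port on the ELB
spec:
  ingressClassName: cce            # assumed class name selecting ELB Ingress Controller
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nodeport-service # placeholder: a NodePort Service in a CCE standard cluster
            port:
              number: 80
```

When the controller detects this object, it configures the listener and forwarding policy on the specified load balancer (step 2), and client traffic then follows the ELB forwarding policy to the Service (step 3).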

Working Principle of Nginx Ingress Controller

An Nginx ingress uses ELB as the traffic ingress. The nginx-ingress add-on is deployed in a cluster to balance traffic and control access.

The nginx-ingress add-on in CCE is implemented using the chart and image from the open-source community, and CCE does not maintain the add-on. Therefore, the nginx-ingress add-on is not recommended for commercial use.

You can visit the open source community for more information.

Nginx Ingress Controller is deployed on worker nodes through pods, which incurs O&M costs and the running overhead of the Nginx component. Figure 3 shows the working principle of Nginx Ingress Controller.

  1. After you update ingress resources, Nginx Ingress Controller writes a forwarding rule defined in the ingress resources into the nginx.conf configuration file of Nginx.
  2. The built-in Nginx component reloads the updated configuration file to modify and update the Nginx forwarding rule.
  3. When traffic accesses a cluster, the traffic is first forwarded by the created load balancer to the Nginx component in the cluster. Then, the Nginx component forwards the traffic to each workload based on the forwarding rule.
Figure 3 Working principle of Nginx Ingress Controller
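As a sketch of step 1, an Ingress handled by Nginx Ingress Controller selects the controller through the nginx ingress class; the community annotation nginx.ingress.kubernetes.io/backend-protocol is the full form of the backend-protocol: HTTPS annotation mentioned in Table 1. Host, Service name, and port are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-example
  annotations:
    # community annotation: Nginx uses HTTPS when talking to the backend Service
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx            # handled by Nginx Ingress Controller
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service    # placeholder: a ClusterIP or NodePort Service
            port:
              number: 443
```

Updating this object causes the controller to rewrite nginx.conf and reload the Nginx configuration (step 2), after which traffic entering through the load balancer is routed by Nginx to the backend Service (step 3).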

Services Supported by Ingresses

Table 2 lists the Services supported by ELB ingresses.
Table 2 Services supported by ELB ingresses

| Cluster Type | ELB Type | ClusterIP | NodePort |
| --- | --- | --- | --- |
| CCE standard cluster | Shared load balancer | Not supported | Supported |
| CCE standard cluster | Dedicated load balancer | Not supported (A dedicated load balancer cannot access the pods backing a ClusterIP Service because no ENI is bound to them.) | Supported |
| CCE Turbo cluster | Shared load balancer | Not supported | Supported |
| CCE Turbo cluster | Dedicated load balancer | Supported | Not supported (A dedicated load balancer cannot access the pods backing a NodePort Service because an ENI is bound to them.) |

Table 3 lists the Services supported by Nginx ingresses.
Table 3 Services supported by Nginx ingresses

| Cluster Type | ELB Type | ClusterIP | NodePort |
| --- | --- | --- | --- |
| CCE standard cluster | Shared load balancer | Supported | Supported |
| CCE standard cluster | Dedicated load balancer | Supported | Supported |
| CCE Turbo cluster | Shared load balancer | Supported | Supported |
| CCE Turbo cluster | Dedicated load balancer | Supported | Supported |