
Overview

Updated on 2025-02-18 GMT+08:00

Why We Need Ingresses

A Service forwards access requests over TCP and UDP and provides layer-4 load balancing for a cluster. However, when an application receives a large number of HTTP/HTTPS requests at the application layer, a Service cannot meet the forwarding requirements. For this reason, Kubernetes provides an HTTP-based access mode: the ingress.

An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in Figure 1, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic.
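The domain- and URL-based forwarding rules described above are expressed declaratively in an ingress resource. The following is a minimal sketch; the host, paths, and Service names are illustrative, not taken from this document:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # illustrative name
spec:
  rules:
    - host: www.example.com    # forward by domain name
      http:
        paths:
          - path: /api         # forward by URL path
            pathType: Prefix
            backend:
              service:
                name: api-svc  # hypothetical backend Service
                port:
                  number: 8080
          - path: /static
            pathType: Prefix
            backend:
              service:
                name: static-svc  # hypothetical backend Service
                port:
                  number: 80
```

With these rules, requests for www.example.com/api and www.example.com/static arriving at the same entry point are distributed to different backend Services.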

Figure 1 Ingress diagram

Ingress Overview

Kubernetes uses ingress resources to define how incoming traffic should be handled, while the Ingress Controller is responsible for processing the actual traffic.

  • Ingress object: a set of access rules that forward requests to specified Services based on domain names or paths. It can be added, deleted, modified, and queried by calling APIs.
  • Ingress Controller: an executor for forwarding requests. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the target backend Services.
    The implementation of an Ingress Controller varies by vendor. CCE supports LoadBalancer Ingress Controllers and NGINX Ingress Controllers.
    • LoadBalancer Ingress Controllers are deployed on master nodes and forward traffic through an ELB load balancer. All policy configurations and forwarding behaviors are handled by the ELB.
    • NGINX Ingress Controllers are deployed in clusters using the charts and images maintained by the Kubernetes community. They provide external access through a NodePort and forward external traffic to Services in the cluster through Nginx. All traffic forwarding behaviors and forwarding objects stay within the cluster.

Ingress Feature Comparison

Table 1 Comparison between ingress features

| Feature | ELB Ingress Controller | Nginx Ingress Controller |
| --- | --- | --- |
| O&M | O&M-free | Self-installation, upgrade, and maintenance required |
| Performance | One ingress supports only one load balancer. Enterprise-grade load balancers provide high performance and high availability; service forwarding is not affected during upgrades or failures. | Multiple ingresses can share one load balancer. Performance varies with the resource configuration of the pods. Dynamic loading is supported: processes must be reloaded for non-endpoint changes, which interrupts persistent connections; Lua supports hot updates of endpoint changes; processes must be reloaded when Lua is modified. |
| Component deployment | Deployed on master nodes | Deployed on worker nodes, which incurs operations costs for the Nginx component |
| Route redirection | Supported | Supported |
| SSL configuration | Supported | Supported |
| Using ingress as a proxy for backend services | Supported | Supported through the backend-protocol: HTTPS annotation |
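As noted above, an Nginx ingress can proxy backend services that expose HTTPS through an annotation on the ingress. A brief sketch; the Service name and port are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-backend-ingress
  annotations:
    # Tell the Nginx controller to use HTTPS when connecting to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-svc   # hypothetical Service serving HTTPS
                port:
                  number: 443
```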

LoadBalancer ingresses and the open-source Nginx ingress are implemented in fundamentally different ways, so they support different Service types. For details, see Services Supported by Ingresses.

LoadBalancer Ingress Controllers are deployed on master nodes, and all policy configurations and forwarding behaviors are handled by the ELB. In non-passthrough networking scenarios, a load balancer outside the cluster can reach cluster nodes only through their VPC IP addresses, so LoadBalancer ingresses support only NodePort Services. In the passthrough networking scenario, where a dedicated load balancer is used in a CCE Turbo cluster, the ELB can forward traffic directly to pods in the cluster, and the ingress can interconnect only with ClusterIP Services.

NGINX Ingress Controller runs in the cluster and is exposed as a Service through a NodePort. Traffic is forwarded to other Services in the cluster through Nginx ingresses. Because both the forwarding behavior and the forwarding targets are inside the cluster, both ClusterIP and NodePort Services are supported.

In conclusion, LoadBalancer ingresses forward traffic through enterprise-grade load balancers and deliver high performance and stability. NGINX Ingress Controller is deployed on cluster nodes, which consumes cluster resources but offers better configurability.

Working Rules of LoadBalancer Ingress Controller

LoadBalancer Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the target Services using different paths.

LoadBalancer Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). The working rules of LoadBalancer Ingress Controller are as follows:

  1. A user creates an ingress and configures a traffic access rule in the ingress, including the load balancer, access path, SSL, and backend Service port.
  2. When Ingress Controller detects that the ingress changes, it reconfigures the listener and backend server route on the ELB according to the traffic access rule.
  3. When a user attempts to access a workload, the ELB forwards the traffic to the target workload according to the configured forwarding rule.
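The steps above can be sketched as an ingress bound to an existing load balancer. The annotation keys below follow CCE's ELB ingress conventions but should be treated as assumptions to verify against the CCE reference; the Service name, port, and load balancer ID placeholder are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elb-ingress
  annotations:
    kubernetes.io/elb.id: "<load-balancer-id>"  # ID of an existing ELB (assumed key)
    kubernetes.io/elb.port: "80"                # ELB listener port (assumed key)
spec:
  ingressClassName: cce          # selects the LoadBalancer Ingress Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: nodeport-svc   # hypothetical NodePort Service
                port:
                  number: 8080
```

When this resource is created, the controller configures the listener and forwarding policy on the ELB (step 2), after which the ELB routes matching traffic to the workload (step 3).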

The way LoadBalancer Ingress Controller works depends on the type of cluster and ELB being used. The following section describes the configuration process and network flow in various scenarios.

Figure 2 Working flow of a LoadBalancer ingress in a CCE standard cluster
Figure 3 Working flow of a LoadBalancer ingress in a CCE Turbo cluster where a shared load balancer is used

When a CCE Turbo cluster is used, pod IP addresses are directly allocated from the VPC. Dedicated load balancers enable passthrough networking to pods. When creating an ingress for external cluster access, you can use ELB to access a ClusterIP Service and use pods as the backend server of the ELB listener. In this way, external traffic can directly access the pods in the cluster without being forwarded by node ports.

Figure 4 Working flow of a LoadBalancer ingress in a CCE Turbo cluster where a dedicated load balancer is used
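In this passthrough scenario, the ingress backend is an ordinary ClusterIP Service, and the ELB forwards traffic directly to the pods behind it. A minimal sketch; the names, labels, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-clusterip            # hypothetical Service name
spec:
  type: ClusterIP                # ELB passthrough targets the pods behind this Service
  selector:
    app: demo                    # matches the target workload's pod labels
  ports:
    - port: 8080                 # Service port referenced by the ingress backend
      targetPort: 8080           # container port
```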

Working Rules of NGINX Ingress Controller

Nginx Ingress uses ELB as the traffic ingress. The NGINX Ingress Controller add-on is deployed in a cluster to balance traffic and control access.

NOTE:

NGINX Ingress Controller uses the charts and images provided by the open-source community, and issues may occur during usage. CCE periodically synchronizes the community version to fix known vulnerabilities. Check whether your service requirements can be met.

NGINX Ingress Controller is deployed on worker nodes through pods, which incurs O&M costs and overhead for running the Nginx component. Figure 5 shows the working rules of NGINX Ingress Controller.

  1. After you update ingress resources, NGINX Ingress Controller writes a forwarding rule defined in the ingress resources into the nginx.conf configuration file of Nginx.
  2. The built-in Nginx component reloads the updated configuration file to modify and update the Nginx forwarding rule.
  3. When traffic accesses a cluster, the traffic is first forwarded by the created load balancer to the Nginx component in the cluster. Then, the Nginx component forwards the traffic to each workload based on the forwarding rule.
Figure 5 Working rules of NGINX Ingress Controller
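The flow above starts from an ordinary ingress resource handled by the in-cluster Nginx component; the controller translates it into nginx.conf entries, which Nginx then reloads. A brief sketch with illustrative names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-demo
spec:
  ingressClassName: nginx        # handled by NGINX Ingress Controller
  rules:
    - host: app.example.com      # becomes a server block in nginx.conf
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc    # hypothetical ClusterIP Service
                port:
                  number: 80
```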

Services Supported by Ingresses

LoadBalancer and the open-source Nginx ingresses support different Services due to their implementation principles.

Table 2 Services supported by LoadBalancer ingresses

| Cluster Type | ELB Type | ClusterIP | NodePort |
| --- | --- | --- | --- |
| CCE standard cluster | Shared load balancer | Not supported | Supported |
| CCE standard cluster | Dedicated load balancer | Not supported | Supported |
| CCE Turbo cluster | Shared load balancer | Not supported | Supported |
| CCE Turbo cluster | Dedicated load balancer | Supported | Not supported |

NOTE:

In a CCE Turbo cluster, ENIs are bound directly to pods and the ELB connects to the pods directly. Therefore, NodePort access is not available.

Table 3 Services supported by Nginx ingresses

| Cluster Type | ELB Type | ClusterIP | NodePort |
| --- | --- | --- | --- |
| CCE standard cluster | Shared load balancer | Supported | Supported |
| CCE standard cluster | Dedicated load balancer | Supported | Supported |
| CCE Turbo cluster | Shared load balancer | Supported | Supported |
| CCE Turbo cluster | Dedicated load balancer | Supported | Supported |

