Updated on 2025-11-14 GMT+08:00

ELB and Cloud-native Applications

A load balancer can work as an entry point for incoming traffic to CCE instances. It can distribute service requests from clients across CCE pods or containers.

A load balancer can be used as:

  1. A LoadBalancer Service, which processes TCP and UDP traffic at Layer 4 and HTTP and HTTPS traffic at Layer 7. It also supports some Layer 7 functions, such as certificate offloading, Layer 7 access logs, and Layer 7 Cloud Eye monitoring metrics.
  2. An ELB ingress, which handles HTTP and HTTPS traffic at Layer 7 and supports more advanced application-layer functions, such as Layer 7 routing, certificate offloading, Layer 7 access logs, and Layer 7 monitoring metrics.

CCE provides powerful elasticity and automation capabilities that can quickly start backend workloads. ELB and CCE can be used together in elastic, highly available scenarios such as dark launches.

Using a LoadBalancer Service to Distribute External Requests Across Pods in a CCE Cluster

A LoadBalancer Service adds an external load balancer on top of a NodePort Service and distributes external traffic to multiple pods within a cluster. It provides higher reliability than a NodePort Service and automatically assigns an external IP address that clients can access. LoadBalancer Services process TCP and UDP traffic at Layer 4 (the transport layer) of the OSI model and can be extended with Layer 7 (application-layer) capabilities to manage HTTP and HTTPS traffic.

If cloud applications require a stable, easy-to-manage entry for external access, you can create a LoadBalancer Service. For example, in a production environment, you can use LoadBalancer Services to expose public-facing services such as web applications and API services to the Internet. These services often need to handle heavy traffic while maintaining high availability. The access address of a LoadBalancer Service is in the format of {EIP-of-load-balancer}:{access-port}, for example, 10.117.117.117:80.
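For illustration, a minimal LoadBalancer Service manifest might look like the following sketch (the Service name, the selector label `app: web`, and the port numbers are assumptions, not values from this document):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical Service name
spec:
  type: LoadBalancer       # requests an external load balancer as the traffic entry
  selector:
    app: web               # traffic is distributed across pods labeled app: web
  ports:
    - name: http
      protocol: TCP
      port: 80             # access port exposed at {EIP-of-load-balancer}:80
      targetPort: 8080     # container port on the backend pods
```

After the Service is created, clients reach the pods through the load balancer's EIP and the access port, for example 10.117.117.117:80.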

Figure 1 LoadBalancer Service

If you need to configure a LoadBalancer Service, see .

Configuring Advanced Load Balancing Functions Using Annotations

LoadBalancer Services provide Layer 4 network access. You can add annotations to a YAML file to use some advanced CCE functions.
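As a sketch, annotations are added under `metadata.annotations` of the Service. The annotation keys below (`kubernetes.io/elb.id` and `kubernetes.io/elb.class`) follow common CCE usage, but the exact keys and values supported depend on your cluster version, so treat them as assumptions and check the CCE documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    # Bind the Service to an existing load balancer by ID (placeholder value).
    kubernetes.io/elb.id: "3c7caa5a-0000-0000-0000-000000000000"
    # Select the load balancer type, for example a dedicated load balancer.
    kubernetes.io/elb.class: performance
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```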

If you need to configure more advanced functions when creating a LoadBalancer Service, see .

Using ELB Ingresses in CCE

A Service is generally used to forward TCP- and UDP-based access requests and provide Layer 4 load balancing for clusters. In practice, however, a Service cannot meet the forwarding requirements of a large number of application-layer HTTP/HTTPS requests. To address this, Kubernetes clusters provide an HTTP-based access mode: the ingress.

An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in Figure 2, you can define forwarding rules based on domain names and paths to implement fine-grained distribution of access traffic.
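As an illustrative sketch, the domain- and path-based forwarding rules described above map to the `rules` field of a standard Kubernetes Ingress (the host names and backend Service names below are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: shop.example.com          # domain-based rule
      http:
        paths:
          - path: /cart               # path-based rule: shop.example.com/cart
            pathType: Prefix
            backend:
              service:
                name: cart-service    # hypothetical backend Service
                port:
                  number: 80
          - path: /order              # shop.example.com/order goes to a different backend
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
```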

Figure 2 Ingress diagram

For details about how to configure an ELB ingress in a CCE cluster to control traffic, see .

Configuring Advanced LoadBalancer Ingress Functions Using Annotations

You can add annotations to a YAML file for more advanced ingress functions. For details, see .
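For example, an ingress carries its annotations in the same way as a Service (the keys shown below, such as `kubernetes.io/ingress.class` and `kubernetes.io/elb.port`, follow common CCE conventions but are assumptions here; verify them against the CCE documentation for your cluster version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Illustrative CCE annotations; confirm the supported keys for your version.
    kubernetes.io/ingress.class: cce      # use the ELB ingress controller
    kubernetes.io/elb.id: "3c7caa5a-0000-0000-0000-000000000000"  # existing load balancer (placeholder)
    kubernetes.io/elb.port: "80"          # listener port on the load balancer
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service         # hypothetical backend Service
                port:
                  number: 80
```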

Migrating Data from a Bring-Your-Own Nginx Ingress to a LoadBalancer Ingress

LoadBalancer ingresses are implemented based on Huawei Cloud ELB and offer better traffic management than bring-your-own Nginx ingresses because of the following advantages:

  • Fully managed and O&M-free: ELB is a fully managed cloud service that requires no worker node, making it completely O&M-free.
  • High availability: ELB enables active-active disaster recovery within a city across AZs. This ensures seamless traffic switchover in the event of a failure. Dedicated load balancers provide a comprehensive health check system to ensure that incoming traffic is only routed to healthy backend servers, improving the availability of your applications.
  • Auto scaling: ELB can automatically scale to handle traffic spikes.
  • Ultra-high performance: A single load balancer supports up to 1 million queries per second and tens of millions of concurrent connections.
  • Integration with cloud services: ELB can run with various cloud services, such as WAF.
  • Hot updates of configurations: Configuration changes take effect in real time without requiring a process reload, preventing disruptions to persistent connections.

You can .

Deploying Nginx Ingress Controller in Custom Mode

NGINX Ingress Controller is a popular, widely used open-source ingress controller. Large-scale clusters often require multiple ingress controllers to separate different types of traffic. For example, if some services in a cluster need to be accessible from the public network while other, internal services must only be reachable by services in the same VPC, you can deploy two independent NGINX Ingress Controllers and bind them to two different load balancers.
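A minimal sketch of how the two controllers are kept apart: each controller deployment watches its own IngressClass, and each workload selects a controller through `ingressClassName`. The class names and `controller` strings below follow the community NGINX Ingress Controller convention and are illustrative:

```yaml
# IngressClass watched by the public-facing controller deployment
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-public
spec:
  controller: k8s.io/ingress-nginx            # must match the controller's configured class
---
# IngressClass watched by the VPC-internal controller deployment
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx-internal
---
# An internal workload selects the internal controller (and its private load balancer)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
spec:
  ingressClassName: nginx-internal
  rules:
    - host: api.vpc.example.com               # hypothetical internal domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api-service    # hypothetical backend Service
                port:
                  number: 8080
```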

Figure 3 Application scenario of multiple Nginx ingresses

You can deploy multiple NGINX Ingress Controllers in the same cluster by referring to .