
Comparison Between ELB Ingress and Nginx Ingress

CCE clusters can use Nginx Ingress or ELB Ingress to provide Layer 7 load balancing. Nginx Ingress is an open-source community add-on adopted by CCE; CCE regularly updates its features and fixes bugs. ELB Ingress is a proprietary add-on that runs in fully hosted mode and works with both shared and dedicated load balancers. This section describes the differences between Nginx Ingress and ELB Ingress.

Introduction

  • Nginx Ingress is an open-source community add-on built on and optimized from the NGINX Ingress Controller. It provides a wide range of ingress configuration options and is the best choice if you need to extensively customize your gateway.
  • ELB Ingress is fully hosted and backed by ELB, making it O&M-free. It can handle tens of millions of concurrent connections and millions of new connections per second, and it works with both shared and dedicated load balancers. The sketch after this list shows how an Ingress manifest selects either controller.
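
The two controllers are selected through the Ingress class of a standard Kubernetes Ingress object. The following is a minimal sketch, assuming a Service named web-svc on port 80 already exists; the "cce" class name and the kubernetes.io/elb.* annotations shown for ELB Ingress are illustrative and may differ across cluster versions, so verify them against the CCE documentation for your cluster.

# Route /api traffic through the community NGINX Ingress Controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
spec:
  ingressClassName: nginx          # class registered by the NGINX Ingress Controller add-on
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: web-svc          # assumed existing Service
            port:
              number: 80
---
# The same route handled by ELB Ingress instead (class name and annotations are assumptions).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elb-demo
  annotations:
    kubernetes.io/elb.id: <your-elb-instance-id>   # placeholder: ID of an existing load balancer
    kubernetes.io/elb.port: "80"                   # listener port on the load balancer
spec:
  ingressClassName: cce
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80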

Typical Application Scenarios

Nginx Ingress:
  • Standard configurations
  • Extensive gateway customization
  • Canary release and blue-green deployment of cloud native applications (see the canary sketch after these lists)

ELB Ingress:
  • Hosted gateway that is highly available and O&M-free
  • High-performance Layer 7 load balancing with auto scaling for cloud native applications
  • Canary release and blue-green deployment of cloud native applications
  • Isolated resources for dedicated use. A load balancer deployed in a single AZ can handle up to 20 million concurrent connections, making it ideal for handling a large volume of requests.
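
For reference, the community NGINX Ingress Controller implements canary release through canary annotations on a second Ingress that points to the new service version. The following is a minimal sketch, assuming Services web-stable and web-canary already exist and a primary (non-canary) Ingress for the same host and path already routes to web-stable; ELB Ingress offers comparable canary capabilities through its own annotations, which are not shown here.

# Canary Ingress: sends roughly 20% of matching traffic to web-canary.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"   # percentage of traffic sent to the canary backend
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-canary       # assumed Service running the new version
            port:
              number: 80

Increasing canary-weight step by step and finally deleting the canary Ingress (after updating the primary one) completes the rollout.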

Functions

Positioning

Nginx Ingress:
  • Layer 7 traffic governance with a variety of advanced routing functions.

ELB Ingress:
  • Layer 7 traffic governance with a variety of advanced routing functions, seamlessly combined with cloud native technologies to deliver fully managed load balancing that is O&M-free, highly available, high-performance, secure, and multi-protocol.
  • Computing resources can be scaled out to handle traffic surges.
  • ELB can handle tens of millions of concurrent connections and millions of new connections per second.

Basic routing

Nginx Ingress:
  • Routing can be based on content and source IP addresses.
  • HTTP header modification, redirection, rewriting, rate limiting, cross-region routing, and sticky sessions are available.
  • Forwarding rules can be configured for both requests and responses; rules for responses can be extended through Snippet configurations.
  • Forwarding rules are matched by the longest path: if multiple paths match a request, the longest one takes precedence (see the example after this item).

ELB Ingress:
  • Routing can be based on content and source IP addresses.
  • HTTP header modification, redirection, rewriting, rate limiting, and sticky sessions are available.
  • Forwarding rules can be configured for both requests and responses.
  • Forwarding rules are matched by priority: if multiple paths match a request, the rule with the smaller priority value takes precedence.
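
To illustrate the longest-path behavior of Nginx Ingress described above: with the two prefixes below, a request to /api/v2/orders matches both rules, and the longer /api/v2 prefix wins. This is a minimal sketch; the Services api-v1 and api-v2 are assumptions.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-priority-demo
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /api                 # also matches /api/v2/orders, but is shorter
        pathType: Prefix
        backend:
          service:
            name: api-v1
            port:
              number: 80
      - path: /api/v2              # longest matching prefix, so it takes precedence
        pathType: Prefix
        backend:
          service:
            name: api-v2
            port:
              number: 80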

Protocol

Nginx Ingress:
  • HTTP and HTTPS
  • WebSocket, WSS, and gRPC (see the gRPC example after this item)

ELB Ingress:
  • HTTP and HTTPS
  • gRPC
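
As an example of the gRPC support listed above, the NGINX Ingress Controller selects the backend protocol through an annotation. This is a sketch, assuming a gRPC Service named grpc-svc on port 50051; the TLS Secret name is also an assumption (gRPC through the controller is generally served over an HTTPS listener).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-demo
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"   # proxy to the backend over gRPC
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grpc.example.com
    secretName: grpc-demo-tls      # assumed TLS certificate Secret
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc-svc         # assumed gRPC Service
            port:
              number: 50051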

Configuration modification

Nginx Ingress:
  • Configuration changes other than backend endpoint updates require a process reload, which interrupts persistent connections.
  • Backend endpoint changes are hot-updated through Lua, without a reload.
  • Modifying the Lua logic itself still requires a process reload.

ELB Ingress:
  • Declarative APIs between cloud services allow modified configurations to be dynamically loaded to ELB.

Authentication

Nginx Ingress:
  • Basic authentication (see the sketch after this item)
  • OAuth

ELB Ingress:
  • TLS authentication
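
For the basic authentication listed above, the NGINX Ingress Controller reads credentials from a Secret referenced by annotations. This is a minimal sketch; the Secret basic-auth is assumed to already exist and contain an htpasswd-format key named auth (for example, created with kubectl create secret generic basic-auth --from-file=auth).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-demo
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth         # assumed Secret with an htpasswd "auth" key
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /admin
        pathType: Prefix
        backend:
          service:
            name: web-svc          # assumed Service
            port:
              number: 80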

Performance

Nginx Ingress:
  • Both system and Nginx parameters must be tuned manually for performance.
  • To keep the controller running properly, you must configure an appropriate number of replicas and resource limits. For details, see Creating an Nginx Ingress on the Console.

ELB Ingress:
  • ELB can handle tens of millions of concurrent connections and millions of new connections per second.

Observability

Nginx Ingress:
  • Log collection through access logs
  • Monitoring and alarms through Prometheus

ELB Ingress:
  • Log collection through interconnection with LTS
  • Auditing of key operations
  • Metrics-based monitoring through interconnection with Cloud Eye
  • Alarm rules configurable through Cloud Eye

O&M

  • Bring-your-own component maintenance and periodic version synchronization from the community
  • Scaling through HPA
  • Proactive configuration for optimization
  • Fully managed and O&M-free
  • Configuration-free automatic scaling for ultra-large capacity
  • Auto scaling based on service traffic
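
For the HPA-based scaling mentioned above, the Nginx Ingress controller Deployment can be scaled on CPU usage like any other workload. This is a sketch: the Deployment name and namespace vary by installation, so replace the placeholders with the values created by your add-on.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
  namespace: kube-system           # assumed namespace of the controller Deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller # placeholder: name of the controller Deployment in your cluster
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale out when average CPU usage exceeds 70%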

Security

Nginx Ingress:
  • HTTPS
  • Blocklists and trustlists (see the allowlist sketch after this item)

ELB Ingress:
  • Full-link HTTPS with SSL certificate integration, SNI with multiple certificates, RSA/ECC dual certificates, TLS 1.3, and configurable TLS cipher suites
  • WAF
  • Anti-DDoS
  • Blocklists and trustlists
  • Custom security policies
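
For the blocklist/trustlist item on the Nginx Ingress side, source-IP allowlisting can be configured per Ingress through an annotation. This is a minimal sketch; the CIDR ranges and Service name are examples.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: allowlist-demo
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16,192.168.0.0/24"  # only these CIDRs may access
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc          # assumed Service
            port:
              number: 80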

Service governance

Nginx Ingress:
  • Kubernetes-backed service discovery
  • Canary release
  • Rate limiting for service high availability

ELB Ingress:
  • Kubernetes-backed service discovery
  • Canary release