Updated on 2025-09-05 GMT+08:00

Overview

A container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select the following network models for your cluster:

  • Cloud Native Network 2.0 is a proprietary, next-generation model that combines the network interfaces and supplementary network interfaces of a VPC. Network interfaces or supplementary network interfaces can be bound directly to pods, giving each pod a unique IP address within the VPC. This model also supports ELB passthrough networking and lets you associate security groups and EIPs with pods. It is suitable for scenarios with high requirements on node scale, network performance, and security, such as high-performance computing and gaming.
  • The VPC network model seamlessly combines VPC routing with the underlying network, making it ideal for high-performance scenarios. However, the maximum number of nodes allowed in a cluster is determined by the VPC route quota. This model is suitable for small- and medium-scale networking.

    In the VPC network model, container CIDR blocks are separate from node CIDR blocks. To provide IP addresses for the pods running on it, each node in the cluster is allocated a pod IP range containing a fixed number of IP addresses. This model outperforms the container tunnel network model because it has no tunnel encapsulation overhead. When the VPC network model is used in a cluster, routes between the container CIDR blocks and VPC CIDR blocks are automatically added to the VPC route table, so pods in the cluster can be accessed directly from cloud servers in the same VPC, even those outside the cluster.

  • The container tunnel network creates a separate network plane for containers by using tunnel encapsulation on top of the host network plane. This network model uses VXLAN for tunnel encapsulation and Open vSwitch as the virtual switch backend. VXLAN is a protocol that encapsulates Ethernet packets into UDP packets so that they can be transmitted through tunnels. Open vSwitch is an open-source virtual switch that provides functions such as network isolation and data forwarding.

    While packet encapsulation and tunnel transmission incur some performance cost, they provide greater interoperability and compatibility with advanced features, such as network policy-based isolation, in most common scenarios. A minimal example of such a network policy is sketched after this list.
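
Network policy-based isolation in the container tunnel network model uses the standard Kubernetes NetworkPolicy API. The following is a minimal sketch, using the official Kubernetes Python client, that allows only pods labeled app=frontend to reach pods labeled app=backend in the default namespace. The namespace and labels are illustrative assumptions, not values from this document.

    # Minimal sketch: Kubernetes-native NetworkPolicy, as supported by the
    # container tunnel network model. Namespace and labels are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig for the target cluster

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-backend",
                                     namespace="default"),
        spec=client.V1NetworkPolicySpec(
            # Pods the policy applies to: everything labeled app=backend.
            pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    # Only pods labeled app=frontend may connect.
                    _from=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}))],
                )
            ],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="default", body=policy)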

Network Model Comparison

Table 1 describes the differences between the network models supported by CCE.

After a cluster is created, the network model cannot be changed.

Table 1 Network model comparison

Application scenarios
  • Tunnel network:
      • Low requirements on performance: Because the container tunnel network requires additional VXLAN tunnel encapsulation, it incurs a performance loss of about 5% to 15% compared with the other two container network models. It therefore applies to scenarios without high performance requirements, such as web applications and middle-end and back-end services with a small number of access requests.
      • Large-scale networking: Unlike the VPC network, which is limited by the VPC route quota, the container tunnel network has no restrictions on the underlying infrastructure and confines the broadcast domain to the node level. It supports a maximum of 2,000 nodes.
  • VPC network:
      • High performance requirements: Because no tunnel encapsulation is required, the VPC network model delivers performance close to that of the underlying VPC network, unlike the container tunnel network model. It therefore applies to scenarios with high performance requirements, such as AI computing and big data computing.
      • Small- and medium-scale networks: Due to the limitation on VPC route tables, it is recommended that the number of nodes in a cluster be 1,000 or fewer.
  • Cloud Native Network 2.0:
      • High performance requirements: Cloud Native Network 2.0 uses VPC networks to construct container networks, eliminating the tunnel encapsulation and NAT required for container communication in the other models. This makes it ideal for scenarios that demand high bandwidth and low latency, such as live streaming and e-commerce flash sales.
      • Large-scale networking: Cloud Native Network 2.0 supports a maximum of 2,000 ECS nodes and 100,000 pods.

Core technology
  • Tunnel network: OVS
  • VPC network: IPVLAN and VPC routes
  • Cloud Native Network 2.0: VPC network interfaces and supplementary network interfaces

Applicable clusters
  • Tunnel network: CCE standard clusters
  • VPC network: CCE standard clusters
  • Cloud Native Network 2.0: CCE Turbo clusters

Container network isolation
  • Tunnel network: Kubernetes-native NetworkPolicy for pods
  • VPC network: Not supported
  • Cloud Native Network 2.0: Security group isolation for pods

Interconnecting pods to a load balancer
  • Tunnel network: Interconnected through a NodePort
  • VPC network: Interconnected through a NodePort
  • Cloud Native Network 2.0: Directly interconnected using a dedicated load balancer, or interconnected using a shared load balancer through a NodePort

Managing pod IP addresses
  • Tunnel network:
      • Separate container CIDR blocks are needed. Container CIDR blocks cannot overlap with VPC CIDR blocks.
      • After a cluster is created, the container CIDR block cannot be expanded. To avoid insufficient IP addresses, you are advised to set the subnet mask of the container CIDR block to a maximum of 19 bits.
  • VPC network:
      • Separate container CIDR blocks are needed. Container CIDR blocks cannot overlap with VPC CIDR blocks.
      • You can add multiple container CIDR blocks, including after a cluster is created. For details, see Expanding the Container CIDR Block of a Cluster That Uses a VPC Network.
      • When pod IP addresses are allocated, each node is assigned a fixed IP address range from the container CIDR block, and the IP addresses of all pods on that node are allocated from this range (see the sketch after this table for a worked example).
  • Cloud Native Network 2.0:
      • You can specify a VPC subnet as the container CIDR block.
      • You can add container CIDR blocks after a cluster is created. For details, see Adding or Deleting the Default Pod Subnet of a CCE Turbo Cluster.
      • Pod IP addresses are allocated directly from the VPC, which consumes many IP addresses. You are advised to plan a sufficiently large VPC CIDR block in advance.

Maximum number of pods on a node
  • Tunnel network: The value of the kubelet configuration parameter maxPods is used. For details, see Maximum Number of Pods on a Node.
  • VPC network: The smaller of the following two values is used: the number of IP addresses in the pod IP range allocated to the node, or the kubelet maxPods parameter.
  • Cloud Native Network 2.0: The smaller of the following two values is used: the number of network interfaces or supplementary network interfaces that can be bound to the node, or the kubelet maxPods parameter.

Network performance
  • Tunnel network: Performance loss due to VXLAN encapsulation.
  • VPC network: No tunnel encapsulation; cross-node traffic is forwarded through VPC routes. Performance is comparable to that of the host network, with some loss caused by NAT.
  • Cloud Native Network 2.0: The container network is integrated with the VPC network, eliminating performance loss.

Networking scale
  • Tunnel network: A maximum of 2,000 nodes are supported.
  • VPC network: Suitable for small- and medium-scale networks due to the limitation on VPC route tables. It is recommended that the number of nodes be 1,000 or fewer. Each time a node is added to the cluster, a route is added to the VPC route tables, so evaluate the cluster scale allowed by the VPC route quota before creating the cluster.
  • Cloud Native Network 2.0: A maximum of 2,000 nodes are supported. Container IP addresses are assigned from the VPC CIDR block, so the number of containers is also restricted by this block. Evaluate the cluster's scale limitations before creating it.
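
The node and pod limits in the VPC network model follow from the container CIDR block and the fixed per-node pod IP range described in the table. The following is a minimal sketch of that arithmetic using Python's standard ipaddress module; the 10.0.0.0/16 container CIDR block and the /25 per-node pod IP range are illustrative assumptions, not CCE defaults.

    # Minimal sketch of VPC network model capacity arithmetic.
    # The container CIDR block and per-node prefix length below are
    # illustrative assumptions, not CCE defaults.
    import ipaddress

    container_cidr = ipaddress.ip_network("10.0.0.0/16")  # container CIDR block
    per_node_prefix = 25  # each node gets a /25 pod IP range

    # IP addresses available to pods on one node.
    pods_per_node = 2 ** (32 - per_node_prefix)                     # 128
    # Number of /25 pod IP ranges the container CIDR block can carve out.
    max_nodes = 2 ** (per_node_prefix - container_cidr.prefixlen)   # 512

    print(f"Pod IP addresses per node: {pods_per_node}")
    print(f"Nodes the container CIDR block can serve: {max_nodes}")
    # The effective pod limit per node is min(pods_per_node, kubelet maxPods),
    # and the node count is further bounded by the VPC route table quota.

In this example, a /16 container CIDR block split into /25 per-node ranges serves at most 512 nodes with up to 128 pod IP addresses each; the actual limits also depend on the kubelet maxPods setting and the VPC route quota.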

Helpful Links

  • When planning the VPC CIDR block, container CIDR block, and Service CIDR block of a cluster, consider both current and future needs to avoid service interruptions or expansion limits caused by IP address exhaustion. You are advised to plan the CIDR blocks before creating a cluster. For details, see Planning CIDR Blocks for a Cluster. A quick overlap check is sketched below.
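
As a quick sanity check during planning, you can confirm that the VPC CIDR block, container CIDR block, and Service CIDR block you intend to use do not overlap before creating the cluster. The following is a minimal sketch using Python's standard ipaddress module; the three CIDR blocks are illustrative assumptions, not recommended values.

    # Minimal sketch: verify that planned cluster CIDR blocks do not overlap.
    # All CIDR values below are illustrative assumptions.
    import ipaddress
    from itertools import combinations

    planned = {
        "VPC CIDR block": ipaddress.ip_network("192.168.0.0/16"),
        "Container CIDR block": ipaddress.ip_network("172.16.0.0/16"),
        "Service CIDR block": ipaddress.ip_network("10.247.0.0/16"),
    }

    # Compare every pair of planned blocks and fail fast on any overlap.
    for (name_a, net_a), (name_b, net_b) in combinations(planned.items(), 2):
        if net_a.overlaps(net_b):
            raise ValueError(f"{name_a} ({net_a}) overlaps with {name_b} ({net_b})")

    print("No overlaps: the planned CIDR blocks can coexist.")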