Updated on 2026-03-26 GMT+08:00

Selecting a Network

CCE uses proprietary, high-performance pod networking add-ons to support three network models: tunnel network, VPC network, and Cloud Native 2.0 network.

After a cluster is created, its network model cannot be changed.

  • A tunnel network is an independent pod network constructed over the underlying VPC network using VXLAN tunnels. This model applies to common scenarios. VXLAN encapsulates Ethernet frames in UDP packets for tunnel transmission. Although encapsulation and tunneling add some performance overhead, they provide greater interoperability and compatibility with advanced features, such as network policy-based isolation, in most typical scenarios.
    Figure 1 Container tunnel network
  • A VPC network seamlessly combines VPC routing with the underlying network, making it ideal for high-performance scenarios; however, the maximum number of nodes in a cluster is determined by the VPC route quota. Each node in a cluster that uses a VPC network is assigned a CIDR block with a fixed number of IP addresses. Because there is no tunnel encapsulation overhead, a VPC network outperforms a tunnel network. In addition, because routes destined for nodes and containers are added to a VPC route table, containers can be accessed directly from outside the cluster.
    Figure 2 VPC network
  • A Cloud Native 2.0 network deeply integrates VPC network interfaces, allocates container IP addresses from the VPC CIDR block, and supports passthrough networking from load balancers to pods, delivering high performance.
    Figure 3 Cloud Native network 2.0
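The performance cost of the tunnel model comes from the fixed headers VXLAN wraps around every cross-node packet. The sketch below illustrates this arithmetic; the 1500-byte and 9000-byte MTUs are common illustrative values, not CCE-specific settings.

```python
# VXLAN adds fixed-size headers around every pod-to-pod packet that
# crosses nodes. Relative to the underlay IP MTU, the overhead is the
# inner Ethernet header plus the outer IPv4/UDP/VXLAN headers.
INNER_ETHERNET = 14  # inner Ethernet header carried inside the tunnel
OUTER_IPV4 = 20      # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
VXLAN_HDR = 8        # VXLAN header (flags + VNI)

VXLAN_OVERHEAD = INNER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HDR  # 50 bytes

def effective_pod_mtu(underlay_mtu: int = 1500) -> int:
    """IP MTU left for pod traffic after VXLAN encapsulation."""
    return underlay_mtu - VXLAN_OVERHEAD

print(effective_pod_mtu())      # 1450 with a standard 1500-byte MTU
print(effective_pod_mtu(9000))  # 8950 with jumbo frames
```

The 50-byte overhead matters most for small packets, where headers make up a larger share of each frame; this is one source of the performance gap described in the comparison below.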

The table below lists the differences between these types of networks.

Table 1 Comparison of different types of networks

Application scenarios

  Tunnel network:
  • Low performance requirements: Because a tunnel network requires additional VXLAN encapsulation, it incurs roughly 5% to 15% performance loss compared with the other two network models. It therefore suits scenarios that do not demand high performance, such as web applications and middle-end and backend services with light traffic.
  • Large-scale networks: Unlike a VPC network, which is limited by the VPC route quota, a tunnel network places no restrictions on the underlying infrastructure and confines the broadcast domain to the node level. A cluster using a tunnel network supports a maximum of 2,000 nodes.

  VPC network:
  • High performance requirements: With no tunnel encapsulation, a VPC network delivers performance close to that of the underlying VPC. It suits scenarios that demand high performance, such as AI and big data computing.
  • Small- and medium-scale networks: Due to the limitation on VPC route tables, it is recommended that a cluster contain no more than 1,000 nodes.

  Cloud Native 2.0 network:
  • High performance requirements: A Cloud Native 2.0 network builds the pod network directly on the VPC, so pod-to-pod communication needs neither tunnel encapsulation nor NAT. This makes it ideal for high-bandwidth, low-latency scenarios such as live streaming and e-commerce flash sales.
  • Large-scale networks: A cluster using a Cloud Native 2.0 network supports a maximum of 2,000 ECS nodes and 100,000 pods.

Core technology

  • Tunnel network: OVS
  • VPC network: IPvlan and VPC routes
  • Cloud Native 2.0 network: VPC network interfaces and supplementary network interfaces

Applicable cluster

  • Tunnel network: CCE standard cluster
  • VPC network: CCE standard cluster
  • Cloud Native 2.0 network: CCE Turbo cluster

Pod network isolation

  • Tunnel network: Kubernetes-native network policies for pods
  • VPC network: Not supported
  • Cloud Native 2.0 network: Security group-based isolation for pods

Interconnecting pods to a load balancer

  • Tunnel network: Interconnected through a NodePort
  • VPC network: Interconnected through a NodePort
  • Cloud Native 2.0 network: Directly interconnected using a dedicated load balancer, or interconnected using a shared load balancer through a NodePort

Pod IP address management

  Tunnel network:
  • A separate pod CIDR block is needed, and it cannot overlap with any VPC CIDR block.
  • After a cluster is created, the pod CIDR block cannot be expanded. To avoid running out of IP addresses, you are advised to set the subnet mask of the pod CIDR block to a maximum of 19 bits.

  VPC network:
  • A separate pod CIDR block is needed, and it cannot overlap with any VPC CIDR block. Multiple pod CIDR blocks can be added.
  • After a cluster is created, you can add more pod CIDR blocks. For details, see Adding a Container CIDR Block for a Cluster That Uses a VPC Network.
  • A fixed subnet is first assigned to each node from the pod CIDR block. All pod IP addresses on that node are then allocated from this dedicated subnet.

  Cloud Native 2.0 network:
  • You can specify a VPC subnet as the pod CIDR block.
  • After a cluster is created, you can add more pod CIDR blocks. For details, see Adding or Deleting the Default Pod Subnet of a CCE Turbo Cluster.
  • Pod IP addresses are allocated directly from the VPC and consume many IP addresses. You are advised to plan a sufficiently large VPC CIDR block in advance.

Maximum number of pods on a node

  • Tunnel network: Determined by the maximum number of pods that can be created on a node (the kubelet parameter maxPods). For details, see Maximum Number of Pods on a Node.
  • VPC network: Determined by the smaller of the number of IP addresses in the CIDR block allocated to the node and the maximum number of pods that can be created on a node (the kubelet parameter maxPods).
  • Cloud Native 2.0 network: Determined by the smaller of the number of network interfaces (including supplementary network interfaces) that can be bound to the node and the maximum number of pods that can be created on a node (the kubelet parameter maxPods).

Network performance

  • Tunnel network: There is some performance loss due to VXLAN encapsulation.
  • VPC network: Performance is comparable to that of the host network because there is no tunnel encapsulation and cross-node traffic is forwarded through VPC routes; however, NAT introduces some loss.
  • Cloud Native 2.0 network: There is no performance loss because the pod network is integrated with the VPC network.

Network scale

  • Tunnel network: A maximum of 2,000 nodes is supported.
  • VPC network: Suitable for small- and medium-scale networks due to the limitation on VPC route tables. It is recommended that a cluster contain no more than 1,000 nodes. Each time a node is added to the cluster, a route is added to the VPC route tables (both the default and custom ones), so the cluster scale is limited by the VPC route tables. Evaluate the cluster scale before creating the cluster. For details about route tables, see Notes and Constraints.
  • Cloud Native 2.0 network: A maximum of 2,000 nodes is supported. Because pod IP addresses are assigned from the VPC CIDR block, the number of pods is restricted by the size of this block. Evaluate the cluster scale limitations before creating the cluster.

Support for IPv4/IPv6

  • Tunnel network: Supported
  • VPC network: Not supported
  • Cloud Native 2.0 network: Supported

  1. The scale of a cluster that uses the VPC network model is limited by the custom route quota of the VPC. Estimate the number of required nodes before creating the cluster.
  2. By default, a VPC network supports direct communication between containers and hosts in the same VPC. If a VPC peering connection is configured between this VPC and another VPC, containers can directly communicate with hosts in the peer VPC. Similarly, in hybrid networking scenarios such as Direct Connect and VPN, communication between containers and hosts on the peer end can be achieved with proper planning.
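Because each node added to a VPC-network cluster consumes one route in every associated VPC route table, estimating the node count (note 1 above) can be reduced to comparing the planned cluster size with the remaining route quota. A minimal sketch; the quota and usage numbers are placeholders, so check the actual quota of your VPC.

```python
def fits_route_quota(planned_nodes: int, route_quota: int,
                     routes_in_use: int = 0) -> bool:
    """Check whether a planned VPC-network cluster fits the route quota.

    Each node added to the cluster consumes one route per associated
    VPC route table, so the cluster scale is capped by the remaining
    route quota (quota values here are placeholders).
    """
    return planned_nodes <= route_quota - routes_in_use

# Placeholder numbers: a 200-route quota with 20 routes already in use
# leaves room for at most 180 nodes.
print(fits_route_quota(150, 200, 20))  # True
print(fits_route_quota(181, 200, 20))  # False
```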