Selecting a Network
CCE uses proprietary, high-performance pod networking add-ons to support three network models: tunnel, VPC, and Cloud Native 2.0.

After a cluster is created, its network model cannot be changed.
- A tunnel network is an independent pod network built on top of the underlying VPC network using VXLAN tunnels, which encapsulate Ethernet frames in UDP packets for transmission. Although encapsulation adds some performance overhead, it provides broad interoperability and compatibility with advanced features such as network policy-based isolation, making this model suitable for common scenarios.
  (Figure 1: Container tunnel network)

- A VPC network combines VPC routing with the underlying network, making it ideal for high-performance scenarios. Because there is no tunnel encapsulation overhead, its performance exceeds that of a tunnel network. Each node in the cluster is assigned a CIDR block with a fixed number of pod IP addresses, and the maximum number of nodes is determined by the VPC route quota. Because routes to nodes and containers are added to a VPC route table, containers can be accessed directly from outside the cluster.
  (Figure 2: VPC network)

- A Cloud Native 2.0 network deeply integrates VPC network interfaces, allocates container IP addresses from the VPC CIDR block, and supports passthrough networking from load balancers to pods, delivering the highest performance.
  (Figure 3: Cloud Native 2.0 network)
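The network policy-based isolation available on tunnel networks uses standard Kubernetes NetworkPolicy objects. The following sketch (namespace and label names are illustrative, not from this page) allows ingress to pods labeled `app: backend` only from pods labeled `app: frontend`:

```yaml
# Illustrative NetworkPolicy; names and labels are hypothetical.
# On a tunnel network, Kubernetes-native network policies isolate pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend          # policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods may connect
```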

The table below lists the differences between these network models.

| Category | Tunnel Network | VPC Network | Cloud Native Network 2.0 |
|---|---|---|---|
| Application scenarios | Common scenarios | High-performance scenarios; cluster size is limited by the VPC route quota | High-performance scenarios |
| Core technology | OVS | IPvlan and VPC routes | VPC network interfaces and supplementary network interfaces |
| Applicable clusters | CCE standard clusters | CCE standard clusters | CCE Turbo clusters |
| Pod network isolation | Kubernetes-native network policies for pods | Not supported | Security group-based isolation for pods |
| Interconnecting pods with a load balancer | Interconnected through a NodePort | Interconnected through a NodePort | Directly interconnected using a dedicated load balancer; interconnected using a shared load balancer through a NodePort |
| Pod IP address management | Pod IP addresses are allocated from a container CIDR block independent of the VPC | Each node is assigned a CIDR block with a fixed number of pod IP addresses | Pod IP addresses are allocated from the VPC CIDR block |
| Maximum number of pods on a node | Determined by the maximum number of pods that can be created on a node (the kubelet parameter maxPods). For details, see Maximum Number of Pods on a Node. | The smaller of the number of IP addresses in the node's pod CIDR block and the kubelet parameter maxPods | The smaller of the number of network interfaces available to the node and the kubelet parameter maxPods |
| Network performance | Some performance loss due to VXLAN encapsulation | Comparable to the host network, because there is no tunnel encapsulation and cross-node traffic is forwarded through VPC routes; NAT introduces some loss | No performance loss, because the pod network is integrated with the VPC network |
| Network scale | Up to 2,000 nodes | Suited to small- and medium-scale clusters (1,000 nodes or fewer recommended). Each node added to the cluster adds a route to every VPC route table (default and custom), so the cluster scale is limited by the VPC route table quota. Evaluate the cluster scale before creating the cluster. For details about route tables, see Notes and Constraints. | Up to 2,000 nodes. Pod IP addresses are assigned from the VPC CIDR block, so the number of pods is limited by the size of that block. Evaluate the cluster scale limitations before creating the cluster. |
| Support for IPv4/IPv6 | Supported | Not supported | Supported |
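The IP-planning constraints above can be sketched numerically. Below is a minimal sketch using Python's `ipaddress` module; the CIDR values are examples only, not recommendations from this page:

```python
import ipaddress

def pod_ips_per_node(node_cidr: str) -> int:
    """Number of IP addresses in the fixed CIDR block assigned to
    each node in a cluster that uses the VPC network model."""
    return ipaddress.ip_network(node_cidr).num_addresses

def max_nodes_for_container_cidr(container_cidr: str, node_prefix_len: int) -> int:
    """Rough number of nodes a container CIDR block can serve when
    every node receives a block of the given prefix length."""
    block = ipaddress.ip_network(container_cidr)
    return 2 ** (node_prefix_len - block.prefixlen)

# Example: a /24 per node provides 256 addresses; a /16 container
# CIDR split into /24 blocks serves 256 nodes.
print(pod_ips_per_node("10.0.1.0/24"))                # 256
print(max_nodes_for_container_cidr("10.0.0.0/16", 24))  # 256
```

The same arithmetic applies to a Cloud Native 2.0 cluster, where the pod count is bounded by the VPC CIDR block itself rather than by per-node blocks.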

- The scale of a cluster that uses the VPC network model is limited by the custom routes of the VPC, so estimate the number of required nodes before creating the cluster.
- By default, a VPC network supports direct communication between containers and hosts in the same VPC. If a peering connection is configured between the VPC and another VPC, containers can directly communicate with hosts in the peer VPC. Similarly, in hybrid networking scenarios such as Direct Connect and VPN, containers can communicate with hosts on the peer end if the networks are properly planned.
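Since each node in a VPC-network cluster consumes one route in each VPC route table, the remaining route quota caps the cluster size. A minimal sketch of that estimate (the quota and route counts are hypothetical examples; check your actual VPC quota):

```python
def max_cluster_nodes(route_quota_per_table: int, existing_routes: int) -> int:
    """Each node added to a VPC-network cluster adds one route to every
    VPC route table, so the headroom left by existing routes caps the
    cluster size. Inputs here are illustrative, not real quota values."""
    return max(route_quota_per_table - existing_routes, 0)

# With a hypothetical quota of 200 routes per table and 20 routes
# already in use, the cluster can grow to at most 180 nodes.
print(max_cluster_nodes(200, 20))  # 180
```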
