Overview
A container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select the following network models for your cluster:
- The VPC network model seamlessly combines VPC routing with the underlying network, making it ideal for high-performance scenarios. However, the maximum number of nodes allowed in a cluster is determined by the VPC route quota. This model is suitable for small- and medium-scale networking.
In the VPC network model, container CIDR blocks are separate from node CIDR blocks. To provide IP addresses for the pods running on it, each node in the cluster is allocated a pod IP range with a fixed number of IP addresses. This model outperforms the container tunnel network model because it has no tunnel encapsulation overhead. When the VPC network model is used, routes between the container CIDR blocks and the VPC CIDR blocks are automatically configured in the VPC route table. As a result, pods in the cluster can be accessed directly from cloud servers in the same VPC, including servers outside the cluster.
- The container tunnel network creates a separate network plane for containers by using tunnel encapsulation on the host network plane. This network model uses VXLAN for tunnel encapsulation and Open vSwitch as the virtual switch backend. VXLAN is a protocol that encapsulates Ethernet packets into UDP packets to transmit them through tunnels. Open vSwitch is an open-source virtual switch that provides functions such as network isolation and data forwarding.
Although packet encapsulation and tunnel transmission incur some performance cost, they provide greater interoperability and support for advanced features, such as network policy-based isolation, in most common scenarios. The encapsulation overhead is illustrated in the sketch after this list.
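As a rough illustration of the encapsulation cost mentioned above, the following Python sketch computes the per-packet overhead that a VXLAN tunnel adds (outer IPv4 and UDP headers, the VXLAN header, and the encapsulated Ethernet header) and the MTU left for pod traffic. The 1,500-byte underlay MTU is an assumed example value, not a CCE-specific setting.

```python
# Rough illustration of why VXLAN tunnelling costs some performance:
# every pod-to-pod packet that crosses nodes is wrapped in extra headers,
# so less of each underlay packet is available for the pod's own payload.

UNDERLAY_MTU = 1500     # assumed MTU of the node (VPC) network
OUTER_IPV4 = 20         # outer IPv4 header
OUTER_UDP = 8           # outer UDP header
VXLAN_HEADER = 8        # VXLAN header
INNER_ETHERNET = 14     # encapsulated Ethernet frame header

overhead = OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET
pod_mtu = UNDERLAY_MTU - overhead

print(f"Per-packet VXLAN overhead: {overhead} bytes")   # 50 bytes
print(f"Effective MTU inside the tunnel: {pod_mtu}")    # 1450 bytes
```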
Network Model Comparison
Table 1 describes the differences between the network models supported by CCE.
After a cluster is created, the network model cannot be changed.
Table 1 Network model comparison

| Dimension | Tunnel Network | VPC Network |
|---|---|---|
| Application scenarios | Scenarios that can tolerate some performance loss but require advanced features such as network policy-based isolation; larger clusters (up to 2,000 nodes) | High-performance scenarios; small- and medium-scale networking |
| Core technology | OVS | IPVLAN and VPC route |
| Applicable clusters | CCE standard cluster | CCE standard cluster |
| Container network isolation | Kubernetes native NetworkPolicy for pods | No |
| Interconnecting pods to a load balancer | Interconnected through a NodePort | Interconnected through a NodePort |
| Managing pod IP addresses | | A separate container CIDR block is used; each node is allocated a pod IP range with a fixed number of IP addresses |
| Maximum number of pods on a node | The value of the kubelet configuration parameter maxPods is used. For details, see Maximum Number of Pods on a Node. | The smaller of the kubelet maxPods value and the number of IP addresses in the pod IP range allocated to the node |
| Network performance | Performance loss due to VXLAN encapsulation | No tunnel encapsulation; cross-node traffic is forwarded through VPC routing. Performance is close to that of the host network, but there is some loss caused by NAT. |
| Networking scale | A maximum of 2,000 nodes is supported. | Suitable for small- and medium-scale networks due to the limit on VPC route table entries. A maximum of 1,000 nodes is recommended. Each node added to the cluster adds a route to the VPC route table, so evaluate this limit against the planned cluster scale before creating the cluster. |
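The "Maximum number of pods on a node" and "Networking scale" rows for the VPC network can be summarized in a small sketch. The following Python snippet is illustrative only, not CCE's implementation; the /25 per-node pod IP range and the maxPods value of 110 are hypothetical example values.

```python
# Illustrative sketch of the two VPC-network limits described in the table:
# how many pods fit on a node, and how many VPC route entries a cluster uses.

import ipaddress

def max_pods_per_node(max_pods_kubelet: int, node_pod_cidr: str) -> int:
    """Smaller of the kubelet maxPods setting and the usable IP addresses
    in the pod IP range allocated to the node (network and broadcast
    addresses excluded here; the exact reservation may differ)."""
    usable_ips = ipaddress.ip_network(node_pod_cidr).num_addresses - 2
    return min(max_pods_kubelet, usable_ips)

def vpc_routes_needed(node_count: int) -> int:
    """Each node added to the cluster adds one route to the VPC route table."""
    return node_count

print(max_pods_per_node(110, "10.0.1.0/25"))  # 110 vs. 126 usable IPs -> 110
print(vpc_routes_needed(500))                 # 500 route entries
```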