
Overview

The container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select any of the following network models for your cluster:

  • Tunnel network
  • VPC network
  • Cloud Native Network 2.0

Network Model Comparison

Table 1 describes the differences between the network models supported by CCE.

After a cluster is created, the network model cannot be changed.

Table 1 Network model comparison

Application scenarios

  • Tunnel network:
      – Low requirements on performance: Because the container tunnel network requires additional VXLAN encapsulation, it suffers a performance loss of about 5% to 15% compared with the other two models. It suits scenarios without high performance requirements, such as web applications and middle-end or back-end services with a small number of access requests.
      – Large-scale networking: Unlike the VPC network, which is limited by the VPC route quota, the tunnel network places no restriction on the infrastructure and confines the broadcast domain to the node level. A maximum of 2000 nodes is supported.
  • VPC network:
      – High performance requirements: With no tunnel encapsulation, the VPC network model delivers performance close to that of the underlying VPC network. It suits performance-sensitive scenarios such as AI computing and big data computing.
      – Small- and medium-scale networks: Because VPC routing tables are limited, it is recommended that a cluster contain no more than 1000 nodes.
  • Cloud Native Network 2.0:
      – High performance requirements: Cloud Native 2.0 networks build container networks directly on VPC networks, so container communication requires neither tunnel encapsulation nor NAT. This makes them ideal for scenarios that demand high bandwidth and low latency, such as live streaming and e-commerce flash sales.
      – Large-scale networking: A maximum of 2000 ECS nodes and 100,000 pods are supported.

Core technology

  • Tunnel network: OVS
  • VPC network: IPvlan and VPC route
  • Cloud Native Network 2.0: VPC ENI/sub-ENI

Applicable clusters

  • Tunnel network: CCE standard cluster
  • VPC network: CCE standard cluster
  • Cloud Native Network 2.0: CCE Turbo cluster

Container network isolation

  • Tunnel network: Kubernetes-native NetworkPolicy for pods
  • VPC network: Not supported
  • Cloud Native Network 2.0: Pods support security group isolation.
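
With the tunnel network model, isolation is expressed through standard Kubernetes NetworkPolicy objects. The following minimal sketch (the namespace, labels, and port are illustrative assumptions, not taken from this document) allows ingress to pods labeled app: backend only from pods labeled app: frontend:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # the policy protects backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080                # assumed application port

In the Cloud Native Network 2.0 model, the equivalent isolation is achieved by associating pods with VPC security groups (configured through CCE-specific resources or the console) rather than through NetworkPolicy.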

Interconnecting pods to a load balancer

  • Tunnel network: Interconnected through a NodePort
  • VPC network: Interconnected through a NodePort
  • Cloud Native Network 2.0: Directly interconnected using a dedicated load balancer, or interconnected using a shared load balancer through a NodePort
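
As a sketch of what this looks like in practice, the Service below exposes pods through a load balancer. The annotation names and values follow CCE's documented ELB conventions, but treat them as assumptions and verify them against your cluster version; the load balancer ID is a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: nginx-elb                          # illustrative name
  annotations:
    kubernetes.io/elb.class: performance   # assumed: dedicated ELB ("union" for a shared one)
    kubernetes.io/elb.id: <your-elb-id>    # placeholder: ID of an existing load balancer
spec:
  type: LoadBalancer
  selector:
    app: nginx                             # assumed pod label
  ports:
    - port: 80          # port exposed on the load balancer
      targetPort: 80    # container port
      protocol: TCP

In a CCE Turbo cluster (Cloud Native Network 2.0), a dedicated load balancer can send traffic straight to pod ENIs; in the other two models, traffic reaches the pods through the NodePort that Kubernetes opens for the Service.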

Managing container IP addresses

  • Tunnel network:
      – A separate container CIDR block is needed.
      – Container CIDR blocks are divided by node and can be dynamically added after being allocated.
  • VPC network:
      – A separate container CIDR block is needed.
      – Container CIDR blocks are divided by node and statically allocated (the CIDR block allocated to a node cannot be changed after the node is created).
  • Cloud Native Network 2.0: Container CIDR blocks are divided from a VPC subnet, so you do not need to configure separate container CIDR blocks.

Network performance

  • Tunnel network: Performance loss due to VXLAN encapsulation
  • VPC network: No tunnel encapsulation; cross-node traffic is forwarded through VPC routes. Performance is comparable to that of the host network, though NAT introduces some loss.
  • Cloud Native Network 2.0: The container network is integrated with the VPC network, eliminating performance loss.

Networking scale

  • Tunnel network: A maximum of 2000 nodes is supported.
  • VPC network: Suitable for small- and medium-scale networks due to the limit on VPC routing tables; it is recommended that a cluster contain no more than 1000 nodes.
    Each time a node is added to the cluster, a route is added to the VPC routing tables (both the default and custom ones). Evaluate the cluster scale allowed by the VPC routing tables before creating the cluster. For details about routing tables, see Constraints.
  • Cloud Native Network 2.0: A maximum of 2000 nodes is supported.
    In a Cloud Native 2.0 cluster, container IP addresses are assigned from VPC CIDR blocks, so the number of containers the cluster can run is restricted by the size of those blocks. Evaluate the cluster's scale limitations before creating it.
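
To make the Cloud Native 2.0 sizing concrete, here is a rough illustrative calculation (the subnet size is an assumption, not from this document): a /18 VPC subnet provides 2^(32-18) = 16,384 IP addresses. After subtracting the small number of addresses the VPC reserves in each subnet and the addresses consumed by nodes and any other resources sharing the subnet, the remainder is the upper bound on the number of pods the cluster can hold.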