
Overview

The container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select one of the following network models for your cluster: tunnel network, VPC network, or Cloud Native Network 2.0.

Network Model Comparison

Table 1 describes the differences between the network models supported by CCE.

After a cluster is created, the network model cannot be changed.

Table 1 Network model comparison

| Dimension | Tunnel Network | VPC Network | Cloud Native Network 2.0 |
|---|---|---|---|
| Application scenarios | Common container service scenarios; scenarios that do not have high requirements on network latency and bandwidth | Scenarios that have high requirements on network latency and bandwidth; containers that communicate with VMs through a microservice registration framework, such as Dubbo or CSE | Scenarios that have high requirements on network latency, bandwidth, and performance; containers that communicate with VMs through a microservice registration framework, such as Dubbo or CSE |
| Core technology | OVS | IPvlan and VPC route | VPC ENI/sub-ENI |
| Applicable clusters | CCE standard cluster | CCE standard cluster | CCE Turbo cluster |
| Network isolation | Kubernetes native NetworkPolicy for pods (see the example following this table) | No | Security group isolation for pods |
| Passthrough networking | No | No | Yes |
| IP address management | The container CIDR block is allocated separately. CIDR blocks are divided by node and allocated dynamically; more blocks can be added to a node after the initial allocation. | The container CIDR block is allocated separately. CIDR blocks are divided by node and allocated statically; a node's CIDR block cannot be changed after the node is created. | Container IP addresses are taken directly from the VPC subnet, so no separate container CIDR block is needed. |
| Network performance | Performance loss due to VXLAN tunnel encapsulation | No tunnel encapsulation. Cross-node packets are forwarded through VPC routes, delivering performance comparable to the host network. | The container network is integrated with the VPC network, eliminating performance loss. |
| Networking scale | A maximum of 2000 nodes are supported. | Suitable for small- and medium-scale networks. Each node added to the cluster adds a route to the VPC routing tables (both the default and custom ones), so the cluster scale is limited by those tables; 1000 nodes or fewer are recommended. For details about routing tables, see Constraints. | A maximum of 2000 nodes are supported. |
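As listed in Table 1, only the tunnel network model supports Kubernetes native NetworkPolicy for pod isolation. The following is a minimal sketch of such a policy; the namespace, object name, labels, and port are illustrative placeholders, not values defined by CCE. It allows pods labeled app: backend to receive TCP traffic on port 8080 only from pods labeled app: frontend and blocks all other ingress to them:

```yaml
# Minimal NetworkPolicy sketch for a tunnel network cluster.
# All names and labels are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default                # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend                  # pods this policy protects
  policyTypes:
    - Ingress                       # restrict incoming traffic only
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080                # hypothetical service port
```

A policy like this has no effect in a VPC network cluster, which does not support NetworkPolicy; in a Cloud Native Network 2.0 cluster, use security groups to achieve the equivalent isolation.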

  1. The scale of a cluster that uses the VPC network model is limited by the custom routes of the VPC. Estimate the number of nodes you will need before creating the cluster.
  2. The scale of a cluster that uses the Cloud Native Network 2.0 model depends on the size of the VPC subnet CIDR block selected for the network attachment definition. Evaluate the expected cluster scale before creating the cluster (see the worked example after this list).
  3. By default, the VPC network model allows containers to communicate directly with hosts in the same VPC. If a peering connection is configured between this VPC and another VPC, containers can also communicate directly with hosts in the peer VPC. The same applies to hybrid networking scenarios such as Direct Connect and VPN, provided the networks are properly planned.
  4. Do not change the mask of the VPC's primary CIDR block after a cluster is created. Otherwise, the network becomes abnormal.
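To make the estimate in item 2 concrete: in a Cloud Native Network 2.0 cluster, every pod consumes an IP address from the selected VPC subnet through its ENI or sub-ENI. As a purely illustrative calculation, a hypothetical /18 subnet provides 2^(32-18) = 16,384 addresses; nodes, pods, and any other resources placed in that subnet each consume one of them, and the VPC service reserves several addresses per subnet, so the practical pod capacity is somewhat lower. Size the subnet with this headroom in mind.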