Updated on 2026-03-26 GMT+08:00

Suggestions on CCE Cluster Selection

When you use CCE to create a Kubernetes cluster, there are multiple configuration options and terms. This section compares the key configurations for CCE clusters and provides recommendations to help you create a cluster that better suits your needs.

Cluster Types

CCE supports multiple cluster types to meet different service requirements. The differences between the cluster types are described below.

Positioning
  • CCE standard: Standard clusters that provide highly reliable, secure containers for commercial use.
  • CCE Turbo: Next-generation clusters designed for Cloud Native 2.0, with accelerated compute, networking, and scheduling.
  • CCE Autopilot: Serverless clusters in which no node deployment, management, or security maintenance is needed. You are billed based on actual CPU and memory usage.

Application Scenario
  • CCE standard: For users who expect to use container clusters to manage applications, obtain elastic compute resources, and simplify the management of compute, network, and storage resources.
  • CCE Turbo: For users who have higher requirements on performance, resource utilization, and full-scenario coverage.
  • CCE Autopilot: For users whose services experience frequent traffic surges, such as those in the online education and e-commerce sectors.

Network
  • CCE standard: For scenarios with a moderate number of containers and no high performance requirements, the following networks are provided (for details, see Overview):
      - Tunnel networks
      - VPC networks
  • CCE Turbo: Cloud Native 2.0 networks, for scenarios with many containers and high performance requirements. A maximum of 2,000 nodes is supported.
  • CCE Autopilot: Cloud Native 2.0 networks, for scenarios with many containers and high performance requirements.

Host Ports (hostPort) for Pods
  • CCE standard: Supported
  • CCE Turbo: Not supported
  • CCE Autopilot: Not supported

Network Performance
  • CCE standard: The container network is overlaid on the VPC network, causing some performance loss.
  • CCE Turbo: The VPC network and container network are flattened into one, so there is no performance loss.
  • CCE Autopilot: The VPC network and container network are flattened into one, so there is no performance loss.

Network Isolation
  • CCE standard:
      - Tunnel networks: network policies for communications within a cluster
      - VPC networks: network policies after DataPlane V2 is enabled. For details, see DataPlane V2.
  • CCE Turbo: Pods can be associated with security groups for isolation, ensuring consistent security isolation both within and outside a cluster.
  • CCE Autopilot: Pods can be associated with security groups for isolation, ensuring consistent security isolation both within and outside a cluster.

Container Resource Isolation
  • CCE standard: cgroups are used to isolate common containers.
  • CCE Turbo:
      - VM-level isolation is supported for secure containers, which run only on physical machines.
      - cgroups are used to isolate common containers.
  • CCE Autopilot: VM-level isolation

Edge Infrastructure Management
  • CCE standard: Not supported
  • CCE Turbo: Supported. Edge cloud resources can be managed. For details, see Using Edge Cloud Resources in a Remote CCE Turbo Cluster.
  • CCE Autopilot: Not supported

Cluster Versions

Kubernetes iterates quickly. New versions fix many bugs and introduce new features, and old versions are gradually phased out. When creating a cluster, select the latest commercial version supported by CCE.

Networks

This section describes the types of networks supported by CCE clusters. You can select one based on your requirements.

After a cluster is created, the type of network cannot be changed.

Table 1 Comparison of different types of networks

Application scenarios
  • Tunnel network:
      - Low performance requirements: A container tunnel network requires additional VXLAN tunnel encapsulation, so it has about 5% to 15% performance loss compared with the other two pod network types. Tunnel networks therefore apply to scenarios where high performance is not needed, such as web applications and mid-end or back-end services with a small number of access requests.
      - Large-scale networks: Unlike a VPC network, which is limited by the VPC route quota, a container tunnel network places no restrictions on the infrastructure, and it controls the broadcast domain at the node level. A cluster using a tunnel network supports a maximum of 2,000 nodes.
  • VPC network:
      - High performance requirements: No tunnel encapsulation is required, so a VPC network delivers performance close to that of the underlying VPC. VPC networks therefore apply to scenarios where high performance is needed, such as AI and big data computing.
      - Small- and medium-scale networks: Due to the limitation on VPC route tables, it is recommended that a cluster contain no more than 1,000 nodes.
  • Cloud Native Network 2.0:
      - High performance requirements: Cloud Native 2.0 networks build pod networks directly on the VPC, so no tunnel encapsulation or NAT is needed for container communication. This makes them ideal for scenarios requiring high bandwidth and low latency, such as live streaming and e-commerce flash sales.
      - Large-scale networks: A cluster using a Cloud Native 2.0 network supports a maximum of 2,000 ECS nodes and 100,000 pods.

Core technology
  • Tunnel network: OVS
  • VPC network: IPvlan and VPC routes
  • Cloud Native Network 2.0: VPC network interfaces and supplementary network interfaces

Applicable cluster
  • Tunnel network: CCE standard clusters
  • VPC network: CCE standard clusters
  • Cloud Native Network 2.0: CCE Turbo clusters

Pod network isolation
  • Tunnel network: Kubernetes-native network policies for pods
  • VPC network: Not supported by default. After DataPlane V2 is enabled, network policies are supported. For details, see DataPlane V2.
  • Cloud Native Network 2.0: Security group-based isolation for pods

Interconnecting pods to a load balancer
  • Tunnel network: Interconnected through a NodePort
  • VPC network: Interconnected through a NodePort
  • Cloud Native Network 2.0: Directly interconnected using a dedicated load balancer, or interconnected using a shared load balancer through a NodePort

Pod IP address management
  • Tunnel network:
      - A separate pod CIDR block is needed, and it cannot overlap with any VPC CIDR block.
      - After a cluster is created, the pod CIDR block cannot be expanded. To avoid insufficient IP addresses, you are advised to set the subnet mask of the pod CIDR block to a maximum of 19 bits.
  • VPC network:
      - A separate pod CIDR block is needed, and it cannot overlap with any VPC CIDR block. Multiple pod CIDR blocks can be added.
      - After a cluster is created, you can add more pod CIDR blocks. For details, see Adding a Container CIDR Block for a Cluster That Uses a VPC Network.
      - A fixed subnet is first assigned to each node from the pod CIDR block. All pod IP addresses on a node are then assigned from that subnet.
  • Cloud Native Network 2.0:
      - You can specify a VPC subnet as the pod CIDR block.
      - After a cluster is created, you can add more pod CIDR blocks. For details, see Adding or Deleting the Default Pod Subnet of a CCE Turbo Cluster.
      - Pod IP addresses are allocated directly from the VPC, consuming many IP addresses. You are advised to plan a large VPC CIDR block in advance.

Maximum number of pods on a node
  • Tunnel network: Determined by the maximum number of pods that can be created on a node (the kubelet parameter maxPods). For details, see Maximum Number of Pods on a Node.
  • VPC network: Determined by the smaller of the kubelet maxPods setting and the number of pod IP addresses in the subnet assigned to the node.
  • Cloud Native Network 2.0: Determined by the smaller of the kubelet maxPods setting and the number of network interfaces available on the node.

Network performance
  • Tunnel network: There is some performance loss due to VXLAN encapsulation.
  • VPC network: Performance is comparable to that of the host network because there is no tunnel encapsulation and cross-node traffic is forwarded through VPC routes. However, NAT causes some loss.
  • Cloud Native Network 2.0: There is no performance loss because the pod network is integrated with the VPC network.

Network scale
  • Tunnel network: A maximum of 2,000 nodes is supported.
  • VPC network: Suitable for small- and medium-scale networks due to the limitation on VPC route tables. It is recommended that a cluster contain no more than 1,000 nodes. Each time a node is added to the cluster, a route is added to the VPC route tables (both the default and custom ones), so the cluster scale is limited by the VPC route tables. Evaluate the cluster scale before creating the cluster. For details about route tables, see Notes and Constraints.
  • Cloud Native Network 2.0: A maximum of 2,000 ECS nodes is supported. Pod IP addresses are assigned from the VPC CIDR block, so the number of pods is restricted by that CIDR block. Evaluate the cluster scale limitations before creating the cluster.

Support for IPv4/IPv6
  • Tunnel network: Supported
  • VPC network: Not supported
  • Cloud Native Network 2.0: Supported. For details, see Overview.

Cluster CIDR Blocks

There are node CIDR blocks, pod CIDR blocks, and Service CIDR blocks in CCE clusters. When planning networks, note that:

  • These CIDR blocks cannot overlap with each other; otherwise, a conflict will occur. In addition, no subnet in the VPC where the cluster resides, including subnets created from a secondary CIDR block, can conflict with the pod CIDR block or the Service CIDR block.
  • Ensure that each CIDR block has sufficient IP addresses.
    • The IP addresses in the node CIDR block must match the cluster scale. Otherwise, nodes cannot be created due to insufficient IP addresses.
    • The IP addresses in the pod CIDR block must match the service scale. Otherwise, pods cannot be created due to insufficient IP addresses.

In complex scenarios, for example, multiple clusters use the same VPC or clusters are interconnected across VPCs, determine the number of VPCs, the number of subnets, the pod CIDR blocks, and the communication modes of Service CIDR blocks. For details, see Planning CIDR Blocks for a Cluster.
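As a quick sanity check, the overlap and capacity rules above can be scripted with Python's standard ipaddress module. The CIDR values below are hypothetical placeholders for illustration, not CCE defaults:

```python
import ipaddress

# Hypothetical CIDR plan; replace with your own blocks.
blocks = {
    "node": ipaddress.ip_network("192.168.0.0/18"),   # node (VPC subnet) CIDR
    "pod": ipaddress.ip_network("10.0.0.0/19"),       # pod CIDR; mask of at most 19 bits recommended
    "service": ipaddress.ip_network("10.247.0.0/16"), # Service CIDR
}

# Rule 1: the three CIDR blocks must not overlap with one another.
names = list(blocks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if blocks[a].overlaps(blocks[b]):
            raise ValueError(f"{a} CIDR overlaps with {b} CIDR")

# Rule 2: capacity check. A /19 pod CIDR provides 2^(32-19) = 8192 addresses.
print(blocks["pod"].num_addresses)  # 8192
```

A /19 mask gives 8,192 pod addresses, which is why the tunnel network guidance recommends a mask of at most 19 bits; a longer mask shrinks the pool quickly (a /24 leaves only 256 addresses).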

Service Forwarding

kube-proxy is a key component in a Kubernetes cluster. It is responsible for load balancing and forwarding between a Service and its backend pods.

CCE supports:

  • IPVS: delivers higher throughput and faster forwarding. It applies to large-scale clusters or clusters with a large number of Services.
  • iptables: the traditional kube-proxy mode. It applies to clusters with a small number of Services or a large number of concurrent short connections on clients.

If high stability is required and there are fewer than 2,000 Services, iptables is recommended. In other scenarios, IPVS is recommended.

For details, see Comparing iptables and IPVS.

Node Flavors

The minimum flavor of a node is 2 vCPUs and 4 GiB of memory. Evaluate node flavors based on service requirements before configuring nodes. However, using a large number of small-flavor ECSs is not recommended, for the following reasons:
  • They have fewer network resources, which may result in a single-point bottleneck.
  • Resources may be wasted. If each pod on a small-flavor node requests a lot of resources, the node cannot run multiple pods, leaving resources idle.
Large-flavor nodes have the following advantages:
  • Higher network bandwidth, which ensures higher resource utilization for high-bandwidth applications.
  • Multiple pods can run on the same node, which keeps network latency between pods low.
  • Images are pulled more efficiently. Once an image is pulled on a node, it can be reused by multiple pods on that node. With many small-flavor ECSs, the same image must be pulled repeatedly, which takes extra time, especially during node scaling.

Additionally, select a proper vCPU-to-memory ratio based on your requirements. For example, if your service containers need a large amount of memory but few vCPUs, choose a node flavor with a vCPU-to-memory ratio of 1:4 to reduce resource waste.
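The resource-waste argument can be illustrated with hypothetical numbers. The node flavors and pod requests below are assumptions for illustration only:

```python
# Hypothetical flavors and pod requests, to illustrate waste on small nodes.
def pods_per_node(node_vcpus: float, node_mem_gib: float,
                  pod_vcpus: float, pod_mem_gib: float) -> int:
    """Number of pods that fit on a node, limited by both CPU and memory."""
    return min(int(node_vcpus // pod_vcpus), int(node_mem_gib // pod_mem_gib))

# A pod requesting 1.5 vCPUs and 3 GiB of memory:
small = pods_per_node(2, 4, 1.5, 3)    # 2 vCPU / 4 GiB node: 1 pod, leaving 0.5 vCPU and 1 GiB idle
large = pods_per_node(8, 16, 1.5, 3)   # 8 vCPU / 16 GiB node: 5 pods on one node
print(small, large)  # 1 5
```

Four small nodes would run only 4 such pods, while one large node with the same total resources runs 5, with less per-node overhead and fewer image pulls.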

In a CCE Turbo cluster, pods use network interfaces or supplementary network interfaces. The maximum number of pods that can be created on a node therefore depends on the number of network interfaces available on that node, so evaluate this number when selecting a flavor. For details, see Maximum Number of Pods That Can Be Created on a Node. For the number of network interfaces supported by different node flavors, see Node Specifications.
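Conceptually, the per-node pod capacity in such a cluster is the smaller of the kubelet maxPods setting and the node's network interface quota. A minimal sketch, using placeholder numbers rather than real flavor limits:

```python
def max_pods_on_node(kubelet_max_pods: int, nic_quota: int) -> int:
    """A node can run at most as many pods as it has usable (supplementary)
    network interfaces, capped by the kubelet maxPods parameter."""
    return min(kubelet_max_pods, nic_quota)

# Placeholder values: a flavor allowing 64 supplementary interfaces, maxPods=110.
print(max_pods_on_node(110, 64))  # 64: the interface quota is the binding limit
```

Check the linked references for the actual interface quotas of each flavor before sizing a node pool.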

Node Container Runtimes

CCE supports the containerd and Docker container runtimes. containerd is recommended for its shorter call chains, fewer components, higher stability, and lower consumption of node resources. As of Kubernetes 1.24, Dockershim has been removed, and Docker is no longer supported as a container runtime. For details, see Kubernetes is Moving on From Dockershim: Commitments and Next Steps. CCE clusters of v1.27 do not support Docker.

Use containerd in typical scenarios. The Docker runtime is needed only in the following scenarios:

  • Docker in Docker (usually in CI scenarios)
  • Running the Docker commands on the nodes
  • Calling Docker APIs

Node OSs

Containers share the kernel and underlying system calls of the node they run on. To ensure compatibility, select a node OS whose Linux distribution version is the same as, or close to, that of the final service container image.