Suggestions on CCE Cluster Selection
When you use CCE to create a Kubernetes cluster, there are multiple configuration options and terms. This section compares the key configurations for CCE clusters and provides recommendations to help you create a cluster that better suits your needs.
Cluster Types
CCE supports multiple cluster types for your service requirements. The differences between the cluster types are listed in the table below.
| Cluster Type | CCE standard | CCE Turbo | CCE Autopilot |
|---|---|---|---|
| Positioning | Standard clusters that provide highly reliable and secure containers for commercial use | Next-generation clusters designed for Cloud Native 2.0, with accelerated compute, networking, and scheduling | Serverless clusters in which you do not need to deploy, manage, or maintain nodes. Billing is based on the actual CPU and memory usage. |
| Application Scenario | For users who expect to use container clusters to manage applications, obtain elastic compute resources, and simplify the management of compute, network, and storage resources | For users who have higher requirements on performance, resource utilization, and full-scenario coverage | For users whose services experience frequent traffic surges, such as those in the online education and e-commerce sectors |
| Network | Container tunnel networks and VPC networks: for scenarios with a moderate number of containers and no extreme performance requirements. For details, see Overview. | Cloud Native 2.0 networks: for scenarios with many containers and high performance requirements. A maximum of 2,000 nodes is supported. | Cloud Native 2.0 networks: for scenarios with many containers and high performance requirements |
| Host Ports (hostPort) for Pods | Supported | Not supported | Not supported |
| Network Performance | The container network is overlaid on the VPC network, causing some performance loss. | The VPC network and container network are flattened into one, so there is no performance loss. | The VPC network and container network are flattened into one, so there is no performance loss. |
| Network Isolation | Tunnel network: Kubernetes-native network policies for pods; VPC network: not supported | Pods can be associated with security groups for isolation, ensuring consistent security isolation both inside and outside the cluster. | Pods can be associated with security groups for isolation, ensuring consistent security isolation both inside and outside the cluster. |
| Container Resource Isolation | cgroups are used to isolate common containers. | cgroups are used to isolate common containers. | VM-level isolation |
| Edge Infrastructure Management | Not supported | Supported. For details, see Using Edge Cloud Resources in a Remote CCE Turbo Cluster. | Not supported |
Cluster Versions
Kubernetes iterates quickly. New versions fix many bugs and add new features, while old versions are gradually phased out. When creating a cluster, select the latest commercial version supported by CCE.
Networks
This section describes the types of networks supported by CCE clusters. You can select one based on your requirements.

After a cluster is created, the type of network cannot be changed.
| Category | Tunnel Network | VPC Network | Cloud Native Network 2.0 |
|---|---|---|---|
| Application scenarios | Common scenarios where extreme network performance is not required but a large cluster scale is needed | Scenarios requiring high network performance with a small- or medium-scale cluster | Scenarios requiring both high network performance and a large cluster scale |
| Core technology | OVS | IPvlan and VPC route | VPC network interfaces and supplementary network interfaces |
| Applicable cluster | CCE standard cluster | CCE standard cluster | CCE Turbo cluster |
| Pod network isolation | Kubernetes-native network policies for pods | Not supported | Security group-based isolation for pods |
| Interconnecting pods to a load balancer | Interconnected through a NodePort | Interconnected through a NodePort | Directly interconnected using a dedicated load balancer; interconnected through a NodePort when using a shared load balancer |
| Pod IP address management | Pod IP addresses are allocated from a container CIDR block that is independent of the node network | The container CIDR block is divided into subnets that are allocated to individual nodes | Pod IP addresses are allocated from the VPC CIDR block |
| Maximum number of pods on a node | Determined by the maximum number of pods that can be created on a node (the kubelet parameter maxPods). For details, see Maximum Number of Pods on a Node. | The smaller of the kubelet maxPods parameter and the number of IP addresses in the container CIDR subnet allocated to the node | The smaller of the kubelet maxPods parameter and the number of network interfaces (including supplementary ones) available on the node |
| Network performance | There is some performance loss due to VXLAN encapsulation. | Performance is close to that of the host network because there is no tunnel encapsulation and cross-node traffic is forwarded through VPC routes, but NAT introduces some loss. | There is no performance loss because the pod network is integrated with the VPC network. |
| Network scale | A maximum of 2,000 nodes is supported. | Suitable for small- and medium-scale clusters. Each node added to the cluster adds a route to the VPC route tables (both default and custom), so the cluster scale is limited by the VPC route tables. A maximum of 1,000 nodes is recommended. Evaluate the cluster scale before creation. For details about route tables, see Notes and Constraints. | A maximum of 2,000 nodes is supported. Pod IP addresses are allocated from the VPC CIDR block, so the number of pods is restricted by that CIDR block. Evaluate the cluster scale limitations before creation. |
| Support for IPv4/IPv6 | Supported | Not supported | Supported |
For details, see Overview.
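The "smaller value" rules in the table above can be sketched as a simple `min()` calculation. The sketch below is illustrative only; the numbers are hypothetical examples, not CCE defaults:

```python
# Sketch of how the per-node pod limit is derived for each network model.
# All numbers below are hypothetical examples, not CCE defaults.

def max_pods_tunnel(kubelet_max_pods: int) -> int:
    # Tunnel network: only the kubelet maxPods parameter applies.
    return kubelet_max_pods

def max_pods_vpc(kubelet_max_pods: int, node_container_subnet_ips: int) -> int:
    # VPC network: also limited by the IP addresses in the container
    # CIDR subnet allocated to the node.
    return min(kubelet_max_pods, node_container_subnet_ips)

def max_pods_cloud_native(kubelet_max_pods: int, usable_nics: int) -> int:
    # Cloud Native 2.0: also limited by the (supplementary) network
    # interfaces the node flavor provides.
    return min(kubelet_max_pods, usable_nics)

print(max_pods_vpc(110, 64))            # -> 64: the subnet, not maxPods, binds
print(max_pods_cloud_native(256, 128))  # -> 128
```

The takeaway: raising maxPods alone does not help if the network-side limit is lower.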
Cluster CIDR Blocks
There are node CIDR blocks, pod CIDR blocks, and Service CIDR blocks in CCE clusters. When planning networks, note that:
- These types of CIDR blocks cannot overlap with each other; otherwise, conflicts will occur. In addition, no subnet in the VPC where the cluster resides, including subnets created from a secondary CIDR block, can overlap with the pod CIDR block or Service CIDR block.
- Each CIDR block must have sufficient IP addresses:
  - The IP addresses in the node CIDR block must match the cluster scale. Otherwise, nodes cannot be created due to insufficient IP addresses.
  - The IP addresses in the pod CIDR block must match the service scale. Otherwise, pods cannot be created due to insufficient IP addresses.
In complex scenarios, for example, multiple clusters use the same VPC or clusters are interconnected across VPCs, determine the number of VPCs, the number of subnets, the pod CIDR blocks, and the communication modes of Service CIDR blocks. For details, see Planning CIDR Blocks for a Cluster.
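A quick way to validate a CIDR plan is with the standard `ipaddress` module. The blocks below are example values, not recommended defaults:

```python
# A minimal overlap and capacity check for the three CIDR block types,
# using only the standard library. Example CIDR values, not defaults.
import ipaddress

blocks = {
    "node": ipaddress.ip_network("192.168.0.0/18"),
    "pod": ipaddress.ip_network("10.0.0.0/16"),
    "service": ipaddress.ip_network("10.247.0.0/16"),
}

# Pairwise overlap check: any overlap means the plan conflicts.
items = list(blocks.items())
for i, (name_a, a) in enumerate(items):
    for name_b, b in items[i + 1:]:
        if a.overlaps(b):
            raise ValueError(f"{name_a} CIDR {a} overlaps {name_b} CIDR {b}")

# Rough capacity check: does the pod CIDR hold the planned number of pods?
planned_pods = 5000
assert blocks["pod"].num_addresses >= planned_pods, "pod CIDR too small"
print("No overlaps; pod CIDR capacity:", blocks["pod"].num_addresses)
```

Run this before creating the cluster, since CIDR blocks cannot be changed afterward without recreating it.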
Service Forwarding
kube-proxy is a key component in a Kubernetes cluster. It is responsible for load balancing and forwarding between a Service and its backend pods.
CCE supports:
- IPVS: allows higher throughput and faster forwarding. It applies to scenarios where the cluster scale is large or there are many Services.
- iptables: the traditional kube-proxy mode. It applies to scenarios where there are few Services or where clients generate many short-lived concurrent connections.
If high stability is required and there are fewer than 2,000 Services, iptables is recommended. In other scenarios, IPVS is recommended.
For details, see Comparing iptables and IPVS.
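The scaling difference can be illustrated with a toy model. This is a deliberate simplification, not kube-proxy's actual implementation: iptables evaluates Service rules as a linear chain, while IPVS looks up the virtual address in a hash table.

```python
# Toy model of Service lookup cost (not actual kube-proxy code):
# iptables walks a rule chain linearly; IPVS hashes on the virtual IP:port.

# 2,000 synthetic Service rules: (virtual address, backend).
rules = [(f"10.247.{i // 256}.{i % 256}:80", f"pod-{i}") for i in range(2000)]

def iptables_lookup(vip: str) -> str:
    # O(n): traverse the chain until a rule matches.
    for rule_vip, backend in rules:
        if rule_vip == vip:
            return backend
    raise KeyError(vip)

ipvs_table = dict(rules)  # O(1) average-case lookup

def ipvs_lookup(vip: str) -> str:
    return ipvs_table[vip]

# Worst case for the linear chain: the last Service.
vip = "10.247.7.207:80"
assert iptables_lookup(vip) == ipvs_lookup(vip) == "pod-1999"
```

With thousands of Services, the per-packet cost of the linear traversal adds up, which is why IPVS is recommended at larger scales.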
Node Flavors
Avoid small-flavor nodes where possible. They have the following drawbacks:
- They have fewer network resources, which may lead to a single-point bottleneck.
- Resources may be wasted. If each pod on a small-flavor node requests a lot of resources, the node can run only a few pods, leaving resources idle.
Large-flavor nodes offer the following advantages:
- They provide higher network bandwidth, which improves resource utilization for high-bandwidth applications.
- Multiple pods can run on the same node, keeping inter-pod network latency low.
- Images are pulled faster, because an image pulled once on a node can be reused by all pods on that node. On small-flavor ECSs, the same image must be pulled repeatedly across more nodes, which takes extra time, especially during node scaling.
Additionally, select a proper vCPU/memory ratio based on your requirements. For example, for service containers that require a large amount of memory but few vCPUs, choose a node flavor with a vCPU/memory ratio of 1:4 to reduce resource waste.
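The effect of the vCPU/memory ratio can be checked with simple arithmetic. The node and pod sizes below are hypothetical examples:

```python
# How many pods fit on a node, and which resource becomes the bottleneck?
# Node capacities and pod requests below are hypothetical examples.

def fit(node_vcpus: float, node_mem_gib: float,
        pod_vcpus: float, pod_mem_gib: float) -> tuple[int, str]:
    by_cpu = int(node_vcpus // pod_vcpus)   # pods that fit by CPU alone
    by_mem = int(node_mem_gib // pod_mem_gib)  # pods that fit by memory alone
    pods = min(by_cpu, by_mem)
    bottleneck = "cpu" if by_cpu <= by_mem else "memory"
    return pods, bottleneck

# Memory-heavy pod (0.5 vCPU, 4 GiB) on a 1:2 node (8 vCPUs, 16 GiB):
print(fit(8, 16, 0.5, 4))   # -> (4, 'memory'): 6 of 8 vCPUs sit idle
# The same pod on a 1:4 node (8 vCPUs, 32 GiB):
print(fit(8, 32, 0.5, 4))   # -> (8, 'memory'): far less CPU is wasted
```

Matching the node's ratio to the aggregate pod requests keeps neither resource stranded.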
In a CCE Turbo cluster, pods use network interfaces or supplementary network interfaces, so the maximum number of pods that can be created on a node is determined by the number of network interfaces available on that node. Evaluate this number before choosing a flavor. For details, see Maximum Number of Pods That Can Be Created on a Node. For the number of network interfaces supported by each node flavor, see Node Specifications.
Node Container Runtimes
CCE supports the containerd and Docker container runtimes. containerd is recommended because it has a shorter call chain, fewer components, higher stability, and lower consumption of node resources. As of Kubernetes 1.24, Dockershim has been removed and Docker is no longer supported as a container runtime. For details, see Kubernetes is Moving on From Dockershim: Commitments and Next Steps. CCE clusters of v1.27 and later do not support Docker.
Use containerd in typical scenarios. Docker is supported only in the following scenarios:
- Docker in Docker (usually in CI scenarios)
- Running the Docker commands on the nodes
- Calling Docker APIs
Node OSs
Service containers share the kernel and underlying system calls of the node they run on. To ensure compatibility, select a node OS whose Linux distribution version is the same as, or close to, that of the final service container image.
