Suggestions on CCE Cluster Selection
When you use CCE to create a Kubernetes cluster, there are multiple configuration options and terms. This section compares the key configurations for CCE clusters and provides recommendations to help you create a cluster that better suits your needs.
Cluster Types
CCE supports multiple cluster types for your service requirements. The differences between the cluster types are listed in the table below.
| Cluster Type | CCE Standard | CCE Turbo |
| --- | --- | --- |
| Positioning | Standard clusters that provide highly reliable and secure containers for commercial use | Next-generation clusters designed for Cloud Native 2.0, with accelerated compute, networking, and scheduling |
| Application scenario | For users who want to use container clusters to manage applications, obtain elastic compute resources, and simplify management of compute, network, and storage resources | For users who have higher requirements on performance, resource utilization, and full-scenario coverage |
| Network model | Cloud Native 1.0 networks: for scenarios with moderate performance requirements and a limited number of containers | Cloud Native 2.0 networks: for scenarios with large numbers of containers that require high performance. A maximum of 2000 nodes is supported. |
| hostPort | Supported | Not supported |
| Network performance | The container network is overlaid on the VPC network, causing some performance loss. | The VPC network and container network are flattened into one, with zero performance loss. |
| Network isolation | Kubernetes-native network policies are supported when the tunnel network model is used; the VPC network model does not support network isolation. | Pods can be associated with security groups for isolation. This security group-based policy ensures consistent isolation both inside and outside a cluster. |
| Container resource isolation | cgroups are used to isolate common containers. |  |
| Edge infrastructure management | Not supported | Management of CloudPond edge sites |
Cluster Versions
Kubernetes iterates quickly: new versions fix many bugs and add new features, while old versions are gradually phased out. When creating a cluster, select the latest commercial version supported by CCE.
Network Models
This section describes the network models supported by CCE clusters. You can select one model based on your requirements.

After a cluster is created, the network model cannot be changed.
| Dimension | Tunnel Network | VPC Network | Cloud Native 2.0 Network |
| --- | --- | --- | --- |
| Application scenario | General-purpose scenarios without high requirements on network performance | Scenarios that require high network performance | Scenarios with large numbers of containers that require high network performance |
| Core technology | OVS | IPvlan and VPC routes | VPC elastic network interfaces/supplementary network interfaces |
| Applicable clusters | CCE Standard clusters | CCE Standard clusters | CCE Turbo clusters |
| Container network isolation | Pods support Kubernetes-native network policies. | Not supported | Pods support security group isolation. |
| Interconnecting pods to a load balancer | Interconnected through a NodePort | Interconnected through a NodePort | Directly interconnected using a dedicated load balancer; interconnected using a shared load balancer through a NodePort |
| Container IP address management | Container IP addresses are allocated from a container CIDR block that is separate from the VPC | Container IP addresses are allocated from a separate container CIDR block, and each node is assigned a fixed-size IP block | Container IP addresses are allocated from the VPC subnet CIDR block |
| Maximum number of pods on a node | Determined by the maximum number of pods that can be created on a node (the kubelet parameter maxPods). For details, see Maximum Number of Pods on a Node. | The smaller of the kubelet parameter maxPods and the number of container IP addresses that can be allocated to the node | The smaller of the kubelet parameter maxPods and the number of elastic or supplementary network interfaces that can be used by the node |
| Network performance | Performance loss due to VXLAN encapsulation | No tunnel encapsulation; cross-node traffic is forwarded through VPC routes. Performance is comparable to that of the host network, but NAT causes some loss. | Container network integrated with the VPC network, eliminating performance loss |
| Network scale | A maximum of 2000 nodes is supported. | Suitable for small- and medium-scale networks due to the limitation of VPC route tables. Each node added to the cluster adds a route to the VPC route tables, so it is recommended that a cluster contain no more than 1000 nodes. Evaluate this limit before creating the cluster. | A maximum of 2000 nodes is supported. In a cluster using a Cloud Native 2.0 network, container IP addresses are assigned from the VPC CIDR block, so the number of containers is also limited by that block. Evaluate the cluster's scale limitations before creating it. |
| IPv4/IPv6 dual-stack | Supported | Not supported | Supported |
Cluster CIDR Blocks
There are node CIDR blocks, container CIDR blocks, and Service CIDR blocks in CCE clusters. When planning network addresses, note the following:
- The three types of CIDR blocks cannot overlap with each other; otherwise, a conflict will occur. In addition, no subnet (including those created from the secondary CIDR block) in the VPC where the cluster resides can conflict with the container or Service CIDR blocks.
- Each CIDR block must contain sufficient IP addresses:
  - The node CIDR block must match the cluster scale. Otherwise, nodes cannot be created due to insufficient IP addresses.
  - The container CIDR block must match the service scale. Otherwise, pods cannot be created due to insufficient IP addresses.
In complex scenarios, for example, when multiple clusters share one VPC or clusters are interconnected across VPCs, determine the number of VPCs and subnets, the container CIDR blocks, and how the Service CIDR blocks communicate. For details, see Planning CIDR Blocks for a Cluster.
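Before creating a cluster, you can sanity-check a candidate plan programmatically. The following is a minimal sketch that verifies the non-overlap rule above using Python's standard ipaddress module; the CIDR blocks shown are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: check that the node, container, and Service CIDR blocks of a
# planned cluster do not overlap. Replace the example blocks with your own.
from ipaddress import ip_network
from itertools import combinations

cidrs = {
    "node (VPC subnet)": ip_network("192.168.0.0/18"),
    "container": ip_network("10.0.0.0/16"),
    "Service": ip_network("10.247.0.0/16"),
}

for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{name_a} and {name_b} CIDR blocks overlap")
print("No overlaps; the plan satisfies the rule above.")
```

The same check can be extended to every subnet in the VPC (including those from the secondary CIDR block), since those must not conflict with the container or Service CIDR blocks either.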
Service Forwarding Modes
kube-proxy is a key component of a Kubernetes cluster. It is responsible for load balancing and forwarding traffic between a Service and its backend pods.
CCE supports the iptables and IPVS forwarding modes.
- IPVS provides higher throughput and faster forwarding. It applies to scenarios with a large cluster scale or a large number of Services.
- iptables is the traditional kube-proxy mode. It applies to scenarios with a small number of Services or a large number of short concurrent connections on the client.
If high stability is required and the number of Services is less than 2000, the iptables forwarding mode is recommended. In other scenarios, the IPVS forwarding mode is recommended.
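As a rough aid to this rule of thumb, the sketch below counts the Services in a cluster with the official kubernetes Python client. It assumes a reachable kubeconfig; the 2000 threshold is the one given above, and the final choice should also weigh the stability requirement.

```python
# Minimal sketch: count Services across all namespaces and suggest a
# kube-proxy mode per the guideline above (not an authoritative decision).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
service_count = len(client.CoreV1Api().list_service_for_all_namespaces().items)
mode = "iptables" if service_count < 2000 else "ipvs"
print(f"{service_count} Services -> suggested kube-proxy mode: {mode}")
```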
Node Specifications
If node specifications are low:
- The upper limit of network resources is low, which may result in a single-point bottleneck.
- Resources may be wasted. If each container running on a low-specification node requires substantial resources, the node can run only a few containers and may leave resources idle.
If node specifications are high:
- The upper limit of the network bandwidth is high, ensuring higher resource utilization for high-bandwidth applications.
- Multiple containers can run on the same node, and the network latency between containers is low.
- Images are pulled more efficiently, because an image pulled once on a node can be reused by every container on that node. Low-specification ECSs respond more slowly during scale-outs because the same image must be pulled repeatedly across more nodes.
Additionally, select a proper vCPU/memory ratio based on your requirements. For example, if a service container requires a large amount of memory but few CPUs, choose a node flavor with a vCPU/memory ratio of 1:4 for the node where the container runs, to reduce resource waste.
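A trivial sketch of this matching logic follows; the candidate ratios and the helper function are illustrative, not CCE flavor names. Given the aggregate vCPU and memory requests of the pods planned for a node, it picks the closest standard ratio.

```python
# Illustrative sketch: match a workload's aggregate resource requests to the
# closest standard vCPU:memory ratio. The candidate ratios are hypothetical.
RATIOS = {"1:2": 2, "1:4": 4, "1:8": 8}  # ratio name -> GiB of memory per vCPU

def suggest_ratio(total_vcpus: float, total_mem_gib: float) -> str:
    """Return the candidate ratio closest to the workload's memory-per-vCPU."""
    mem_per_vcpu = total_mem_gib / total_vcpus
    return min(RATIOS, key=lambda name: abs(RATIOS[name] - mem_per_vcpu))

# Memory-heavy service: 4 vCPUs and 16 GiB requested in total -> "1:4"
print(suggest_ratio(4, 16))
```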
In a CCE Turbo cluster, pods use elastic or supplementary network interfaces on nodes. The maximum number of pods that can be created on a node therefore depends on the number of network interfaces available to that node, so evaluate this number when choosing node specifications. For details, see Maximum Number of Pods That Can Be Created on a Node.
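Combining this with the "smaller value" rule from the network model table above gives a one-line calculation; a minimal sketch with hypothetical inputs:

```python
# Minimal sketch of the "smaller value" rule for CCE Turbo nodes, with
# hypothetical inputs: the kubelet maxPods setting and the number of elastic
# or supplementary network interfaces the node can provide to pods.
def max_pods_on_turbo_node(kubelet_max_pods: int, usable_network_interfaces: int) -> int:
    # Each pod consumes one network interface, so the effective ceiling is
    # whichever limit is reached first.
    return min(kubelet_max_pods, usable_network_interfaces)

print(max_pods_on_turbo_node(110, 64))  # -> 64: the interface quota is the bottleneck
```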
Container Engines
CCE supports the containerd and Docker container engines. containerd is recommended for its shorter call chains, fewer components, higher stability, and lower consumption of node resources. Since Kubernetes 1.24, Dockershim has been removed and Docker is no longer supported by default. For details, see Kubernetes is Moving on From Dockershim: Commitments and Next Steps. Newer CCE cluster versions no longer support the Docker container engine.
Use containerd in typical scenarios. The Docker container engine is supported only in the following scenarios:
- Docker in Docker (usually in CI scenarios)
- Running Docker commands on nodes
- Calling Docker APIs
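Before standardizing on containerd, you can check which engine each node currently runs by reading its containerRuntimeVersion. A minimal sketch with the official kubernetes Python client, assuming a reachable kubeconfig:

```python
# Minimal sketch: list each node's container runtime (e.g., containerd or
# docker) from the node status reported by the kubelet.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.container_runtime_version)
```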
Node OS
Service containers share the kernel and underlying system calls of their nodes. To ensure compatibility, select a Linux distribution for the node OS that is the same as, or close to, the one used in the final service container image.