Buying a CCE Turbo Cluster
CCE Turbo clusters run on a cloud native infrastructure that features software-hardware synergy to support passthrough networking, high security and reliability, and intelligent scheduling.
CCE Turbo clusters are paired with the Cloud Native Network 2.0 model for large-scale, high-performance container deployment. Containers are assigned IP addresses from the VPC CIDR block. Containers and nodes can belong to different subnets. Access requests from external networks in a VPC can be directly routed to container IP addresses, which greatly improves networking performance. You are advised to read Cloud Native Network 2.0 to understand its features and how to plan each CIDR block.
Notes and Constraints
- During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.
- You can create a maximum of 50 clusters in a single region. If more clusters are required, increase your quota. For details about the quota, see Quotas.
- CCE Turbo clusters support only Cloud Native Network 2.0. For details about this network model, see Cloud Native Network 2.0.
- Nodes in a CCE Turbo cluster must be models developed on the QingTian architecture, which features software-hardware synergy.
- For the BMS nodes added to a CCE Turbo cluster of v1.19 from the shared resource pool, the default ENI configuration of the container is four queues. For details, see Configuring NIC Multi-Queue for BMS Nodes in the CCE Turbo Shared Resource Pool.
Procedure
- Log in to the CCE console. In the navigation pane, choose Resource Management > Clusters. Click Buy next to CCE Turbo Cluster.
Figure 1 Buying a CCE Turbo cluster
- On the page displayed, set the following parameters:
Basic configuration
Specify the basic cluster configuration.

Table 1 Basic parameters for creating a cluster

Parameter
Description
Cluster Name
Name of the cluster to be created. The cluster name must be unique under the same account and cannot be changed after the cluster is created.
A cluster name contains 4 to 128 characters, starting with a letter and not ending with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.
Version
Version of Kubernetes to use for the cluster.
Management Scale
Maximum number of worker nodes that can be managed by the master nodes of the cluster. You can select 200 nodes, 1,000 nodes, or 2,000 nodes for your cluster. To create a cluster with 5,000 nodes, submit a service ticket.
Master node specifications change with the cluster management scale you choose, and you will be charged accordingly.
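The naming rules in Table 1 can be expressed as a regular expression for a quick pre-check. This is an illustrative sketch, not an official CCE validator; the `is_valid_cluster_name` helper is hypothetical.

```python
import re

# Rules from the docs: 4 to 128 characters, lowercase letters, digits, and
# hyphens only; must start with a letter and must not end with a hyphen.
_CLUSTER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,126}[a-z0-9]$")

def is_valid_cluster_name(name: str) -> bool:
    """Hypothetical pre-check mirroring the documented naming rules."""
    return bool(_CLUSTER_NAME_RE.fullmatch(name))

print(is_valid_cluster_name("turbo-cluster-01"))  # True
print(is_valid_cluster_name("Turbo"))             # False: uppercase letter
print(is_valid_cluster_name("abc-"))              # False: ends with a hyphen
```

The pattern enforces the 4-character minimum implicitly: one leading letter, at least two middle characters, and one trailing letter or digit.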
Networking configuration
Select the CIDR blocks used by nodes and containers in the cluster. If IP resources in the CIDR blocks are insufficient, nodes and containers cannot be created.

Table 2 Networking parameters

Parameter
Description
Network Model
Cloud Native Network 2.0: This network model deeply integrates the native elastic network interfaces (ENIs) of VPC, uses the VPC CIDR block to allocate container addresses, and supports direct traffic distribution to containers through a load balancer to deliver high performance.
For more information, see Cloud Native Network 2.0.
VPC
Select the VPC used by nodes and containers in the cluster. The VPC cannot be changed after the cluster is created.
A VPC provides a secure and logically isolated network environment.
If no VPC is available, create one on the VPC console. After the VPC is created, click the refresh icon. For details, see Creating a VPC.
Node Subnet
This parameter is available after you select a VPC.
The subnet you select is used by nodes in the cluster and determines the maximum number of nodes in the cluster. This subnet will be the default subnet where your nodes are created. When creating a node, you can select other subnets in the same VPC.
A node subnet provides dedicated network resources that are logically isolated from other networks for higher security.
If no node subnet is available, click Create Subnet to create a subnet. After the subnet is created, click the refresh icon. For details about the relationship between VPCs, subnets, and clusters, see Cluster Overview.
During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.
The selected subnet cannot be changed after the cluster is created.
Pod Subnet
This parameter is available after you select a VPC.
The subnet you select is used by pods in the cluster and determines the maximum number of pods in the cluster. The subnet cannot be changed after the cluster is created.
IP addresses used by pods will be allocated from this subnet.
NOTE: If the pod subnet is the same as the node subnet, pods and nodes share the remaining IP addresses in the subnet. As a result, pods or nodes may fail to be created due to insufficient IP addresses.
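Because pods and nodes compete for IP addresses when their subnets overlap, it can be worth checking your planned CIDR blocks before creating the cluster. A minimal sketch using Python's standard `ipaddress` module (the CIDR blocks below are placeholders for your own plan):

```python
import ipaddress

node_subnet = ipaddress.ip_network("192.168.0.0/24")  # placeholder node subnet
pod_subnet = ipaddress.ip_network("192.168.1.0/24")   # placeholder pod subnet

# overlaps() is True when the two blocks share any addresses, which would
# make pods and nodes draw from the same IP pool.
if node_subnet.overlaps(pod_subnet):
    print("Warning: pods and nodes will share the remaining IP addresses")
else:
    # num_addresses includes the network and broadcast addresses.
    print(f"Node subnet capacity: {node_subnet.num_addresses} addresses")
    print(f"Pod subnet capacity: {pod_subnet.num_addresses} addresses")
```

Two disjoint /24 blocks in the same VPC, as above, give nodes and pods separate pools of 256 addresses each.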
Advanced Settings
Configure enhanced capabilities for your CCE Turbo cluster.

Table 3 Advanced settings

Parameter
Description
Service Network Segment
An IP range from which IP addresses are allocated to Kubernetes Services. After the cluster is created, the CIDR block cannot be changed. The Service CIDR block cannot conflict with the created routes. If they conflict, select another CIDR block.
The default value is 10.247.0.0/16. You can change the CIDR block and mask according to your service requirements. The mask determines the maximum number of Service IP addresses available in the cluster.
After you set the mask, the console will provide an estimated maximum number of Services you can create in this CIDR block. For details, see Which CIDR Blocks Does CCE Support?
kube-proxy Mode
The forwarding mode used to balance load between Services and their backend pods. The value cannot be changed after the cluster is created.
- IPVS: optimized kube-proxy mode to achieve higher throughput and faster speed, ideal for large-sized clusters. This mode supports incremental updates and can keep connections uninterrupted during Service updates.
In this mode, when the ingress and Service use the same ELB instance, the ingress cannot be accessed from the nodes and containers in the cluster.
- iptables: Use iptables rules to implement Service load balancing. In this mode, too many iptables rules will be generated when many Services are deployed. In addition, non-incremental updates will cause latency and even tangible performance issues in the case of service traffic spikes.
NOTE:
- IPVS provides better scalability and performance for large clusters.
- Compared with iptables, IPVS supports more complex load balancing algorithms such as least load first (LLF) and weighted least connections (WLC).
- IPVS supports server health check and connection retries.
CPU Policy
- On: Exclusive CPU cores can be allocated to workload pods. Select On if your workload is sensitive to latency in CPU cache and scheduling.
- Off: Exclusive CPU cores will not be allocated to workload pods. Select Off if you want a large pool of shareable CPU cores.
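The relationship between the Service CIDR mask and the number of available Service IP addresses is simple to estimate yourself: a /n block holds 2^(32-n) addresses. A sketch using the default 10.247.0.0/16 block mentioned above:

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.247.0.0/16")  # default Service CIDR

# A /16 block holds 2^(32-16) = 65536 addresses; the console's estimate of
# the maximum number of Services is on this order (Kubernetes reserves a few
# addresses, such as the first IP, for the built-in kubernetes Service).
print(f"Addresses in {service_cidr}: {service_cidr.num_addresses}")

# A smaller mask such as /18 shrinks the pool accordingly.
print(f"Addresses in a /18 block: {ipaddress.ip_network('10.247.0.0/18').num_addresses}")
```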
Billing
- Yearly/Monthly: a prepaid billing mode suitable for scenarios where you have a good idea of what resources you will need during the billing period. Fees need to be paid in advance, but services will be less expensive. For a yearly/monthly-billed cluster, set the required duration. Yearly/monthly-billed clusters cannot be deleted after creation. To stop using these clusters, go to the Billing Center and unsubscribe from them.
- Pay-per-use: a postpaid billing mode based on resource usage and duration. You can provision or delete resources at any time.
- Click Next: Confirm to review the configurations and change them if required.
Parameter
Description
Enterprise project
This parameter is displayed only for enterprise users who have enabled the enterprise project function.
After an enterprise project (for example, default) is selected, the cluster, nodes in the cluster, cluster security groups, node security groups, and elastic IPs (EIPs) of the automatically created nodes will be created in this enterprise project. After a cluster is created, you are advised not to modify the enterprise projects of nodes, cluster security groups, and node security groups in the cluster.
Enterprise projects facilitate project-level management and grouping of cloud resources and users. For more information, see Enterprise Management.
Billing Mode
You can change the cluster billing mode if required.
- Yearly/Monthly: a prepaid billing mode suitable for scenarios where you have a good idea of what resources you will need during the billing period. Fees need to be paid in advance, but services will be less expensive. Yearly/monthly-billed clusters cannot be deleted after creation. To stop using these clusters, go to the Billing Center and unsubscribe from them.
- Pay-per-use: a postpaid billing mode based on resource usage and duration. You can provision or delete resources at any time.
Validity Period
For a yearly/monthly-billed cluster, set the required duration.
Auto Renewal: Your service will automatically renew on a monthly or yearly basis.
- Click Submit.
It takes about 6 to 10 minutes to create a cluster. You can click Back to Cluster List to perform other operations on the cluster or click Go to Cluster Events to view the cluster details.
- If the cluster status is Available, the CCE Turbo cluster is successfully created, and Turbo is displayed next to the cluster name.

Related Operations
- Using kubectl to connect to the cluster: Connecting to a Cluster Using kubectl
- Logging in to the node: Logging In to a Node
- Creating a namespace: You can create multiple namespaces in a cluster and organize resources in the cluster into different namespaces. These namespaces serve as logical groups and can be managed separately. For details about how to create a namespace for a cluster, see Namespaces.
- Creating a workload: Once the cluster is created, you can use an image to create an application that can be accessed from public networks. For details, see Creating a Deployment, Creating a StatefulSet, or Creating a DaemonSet.
- Viewing cluster details: Click the cluster name to view cluster details.
Table 4 Details about the created cluster

Tab
Description
Basic Information
You can view the details and running status of the cluster.
Monitoring
You can view the CPU and memory allocation rates of all nodes in the cluster (that is, the maximum allocated amount), as well as the CPU usage, memory usage, and specifications of the master node(s).
Events
- View cluster events.
- Set search criteria, such as the event name or the time segment during which an event is generated, to filter events.
Auto Scaling
You can configure auto scaling to add or reduce worker nodes in a cluster to meet service requirements. For details, see Setting Cluster Auto Scaling.
Clusters of v1.17 do not support auto scaling using AOM. You can use node pools for auto scaling. For details, see Node Pool Overview.
kubectl
To access a Kubernetes cluster from a PC, you need to use the Kubernetes command line tool kubectl. For details, see Connecting to a Cluster Using kubectl.
Resource Tags
Resource tags can be added to classify resources.
You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and resource migration efficiency. For details, see Creating Predefined Tags.
CCE will automatically create the "CCE-Dynamic-Provisioning-Node=Node ID" tag. A maximum of 5 tags can be added.
Istioctl
After the Istio service mesh function is enabled for a cluster, you can use the Istio command line tool istioctl to configure routing policies that manage service traffic. These policies include traffic shifting, fault injection, rate limiting, and circuit breaking. For details, see Enabling Istio.