Updated on 2025-12-08 GMT+08:00

Buying a CCE Standard Cluster

CCE standard clusters provide an enterprise-class Kubernetes cluster hosting service that supports full lifecycle management of containerized applications. They offer a highly scalable, high-performance solution for deploying and managing cloud native applications. On the CCE console, you can easily create CCE standard and Turbo clusters. After a cluster is created, CCE hosts the master nodes, so you only need to create worker nodes. This enables cost-effective O&M and efficient service deployment.

Before purchasing a CCE standard cluster, you are advised to learn about What Is CCE?, Networking Overview, and Planning CIDR Blocks for a Cluster.

Step 1: Configure Basic Settings

Basic settings define the core architecture and underlying resource rules of a cluster, providing a framework for cluster running and resource allocation.

  1. Log in to the CCE console. In the upper left corner of the page, click the region name and select a region for your cluster. The closer the selected region is to where your resources are deployed, the lower the network latency and the faster the access.

    After confirming the region, click Buy Cluster. If you are using CCE for the first time, you need to create an agency by following the instructions displayed.

  2. Configure the basic settings of the cluster. For details, see Table 1.

    Table 1 Basic settings of a cluster (applicable to standard clusters)

    Parameter

    Description

    Modifiable After Cluster Creation

    Billing Mode

    Select a billing mode as required.
    • Pay-per-use: a postpaid billing mode. It is suitable for scenarios where resources will be billed based on usage frequency and duration. You can provision or delete resources at any time.

    Yes

    Cluster Name

    Enter a cluster name. Cluster names under the same account must be unique.

    Enter 4 to 128 characters. The name must start with a lowercase letter and cannot end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

    Yes

    Enterprise Project

    This parameter is available only for enterprise users who have enabled an enterprise project.

    After an enterprise project is selected, clusters and their security groups will be created in that project. To manage clusters and other resources like nodes, load balancers, and node security groups, you can use the Enterprise Project Management Service (EPS).

    Yes

    Cluster Version

    Select a Kubernetes version. The latest commercial release is recommended because it provides more stable and reliable features.

    Yes

    Cluster Scale

    Select a cluster scale as required. This parameter controls the maximum number of worker nodes that a cluster can manage.

    Yes

    A cluster that has been created can only be scaled out. For details, see Changing a Cluster Scale.

    Master Nodes

    Select the number of master nodes. The master nodes are automatically hosted by CCE and deployed with Kubernetes cluster management components such as kube-apiserver, kube-controller-manager, and kube-scheduler.

    • 3 Masters: Three master nodes will be created for high cluster availability.
    • Single: Only one master node will be created in your cluster.
      NOTE:

      If more than half of the master nodes in a CCE cluster are faulty, the cluster cannot function properly.

    You can also select AZs for deploying the master nodes of a specific cluster. By default, AZs are allocated automatically for the master nodes.
    • Automatic: Master nodes are randomly distributed in different AZs for cluster DR. If there are not enough AZs available, CCE will prioritize assigning nodes in AZs with enough resources to ensure cluster creation. However, this may result in AZ-level DR not being guaranteed.
    • Custom: Master nodes are deployed in specific AZs.
      If there is one master node in a cluster, you can select one AZ for the master node. If there are multiple master nodes in a cluster, you can select multiple AZs for the master nodes.
      • AZ: Master nodes are deployed in different AZs for cluster DR.
      • Host: Master nodes are deployed on different hosts in the same AZ for cluster DR.
      • Custom: Master nodes are deployed in the AZs you specified.

    No

    After the cluster is created, the number of master nodes and the AZs where they are deployed cannot be changed.
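The naming rules in Table 1 can be checked locally before creating a cluster. Below is a minimal sketch in Python; the console performs the authoritative validation, and the regular expression only encodes the rules listed above:

```python
import re

# Rules from Table 1: 4 to 128 characters; lowercase letters, digits, and
# hyphens only; must start with a lowercase letter; must not end with a hyphen.
CLUSTER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,126}[a-z0-9]$")

def is_valid_cluster_name(name: str) -> bool:
    """Return True if the name satisfies the cluster naming rules."""
    return bool(CLUSTER_NAME_RE.fullmatch(name))
```

For example, `my-cluster-01` passes, while `abc` (too short), `My-cluster` (uppercase), and `test-` (trailing hyphen) do not.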

Step 2: Configure Network Settings

Network settings are organized hierarchically. End-to-end network connectivity and security for containerized applications are ensured through coordinated configuration of the cluster network, container network, and Service network.

  • Cluster network: handles communication between nodes, transmitting pod and Service traffic while ensuring cluster infrastructure connectivity and security.
  • Container network: assigns each pod an independent IP address, enabling direct container communication and cross-node communication.
  • Service network: establishes a stable access entry, supports load balancing, and optimizes traffic management for Services within a cluster.

Before configuring the network settings, you are advised to learn the concepts and relationships of the three types of networks. For details, see Networking Overview.

Configuring Network Settings for a CCE Standard Cluster

  1. Configure cluster network settings. For details, see Table 2.

    Table 2 Cluster network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    VPC

    Select a VPC for a cluster.

    If no VPC is available, click Create VPC to create one. After the VPC is created, click the refresh icon.

    No

    Default Node Subnet

    Select a subnet. Nodes in the cluster will be assigned IP addresses from this subnet by default. You can still specify a different subnet when creating a node or node pool.

    No

    Default Node Security Group

    Select the security group automatically generated by CCE or select an existing one.

    The default node security group must allow traffic from certain ports to ensure normal communication. Otherwise, nodes may fail to be created.

    Yes

    IPv6

    After this function is enabled, the cluster supports the IPv4/IPv6 dual-stack, meaning each worker node can have both an IPv4 and IPv6 address. Both IP addresses support private and public network access. Before enabling this function, ensure that Default Node Subnet includes an IPv6 CIDR block.

    • CCE standard clusters (using VPC networks): IPv6 is not supported.

    No

  2. Configure container network parameters. For details, see Table 3.

    Table 3 Container network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    Network Model

    The network model used by the container network in a cluster.
    • Tunnel network: applies to large clusters (with up to 2000 nodes) and scenarios that do not demand high performance such as web applications and data middle- and back-end services with low access traffic.
    • VPC network: applies to small clusters (with 1000 nodes or fewer) and scenarios that demand high performance such as AI and big data computing.

    For more details about their differences, see Overview.

    No

    Network Policies (supported only by clusters using a tunnel network)

    Policy-based network control for a cluster. For details, see Configuring Network Policies to Restrict Pod Access.

    After this function is enabled, if the CIDR blocks of a customer's services conflict with the on-premises CIDR blocks, the link to a newly added gateway may fail to be established.

    For example, if a cluster uses a Direct Connect connection to access an external address and the external switch does not support ip-option, enabling network policies could result in network access failure.

    Yes

    Container CIDR Block

    CIDR block used by containers. This parameter determines the maximum number of containers in the cluster. CCE standard clusters support:

    • Manually set: You can customize the container CIDR blocks as needed. For cross-VPC passthrough networking, make sure the container CIDR block does not overlap with the VPC CIDR block to be accessed to prevent conflicts. For details, see Planning CIDR Blocks for a Cluster. The VPC network model allows you to configure multiple CIDR blocks, and container CIDR blocks can be added even after the cluster is created. For details, see Expanding the Container CIDR Block of a Cluster That Uses a VPC Network.
    • Auto select: CCE will randomly allocate a non-conflicting CIDR block from the ranges 172.16.0.0/16 to 172.31.0.0/16, or from 10.0.0.0/12, 10.16.0.0/12, 10.32.0.0/12, 10.48.0.0/12, 10.64.0.0/12, 10.80.0.0/12, 10.96.0.0/12, and 10.112.0.0/12. Since the allocated CIDR block cannot be modified after the cluster is created, you are advised to manually configure the CIDR blocks, especially in commercial scenarios.
      NOTE:

      After a cluster using a container tunnel network is created, the container CIDR block cannot be expanded. To prevent IP address exhaustion, you are advised to use a subnet mask of no more than 19 bits for the container CIDR block.

    No

    After a cluster using a VPC network is created, you can add container CIDR blocks to the cluster but cannot modify or delete the existing ones.

    Pod IP Addresses Reserved for Each Node (supported only by clusters using a VPC network)

    The number of pod IP addresses that can be allocated on each node (alpha.cce/fixPoolMask). This parameter determines the maximum number of pods that can be created on each node.

    In a container network, each pod is assigned a unique IP address. If the number of pod IP addresses reserved for each node is insufficient, pods cannot be created. For details, see Number of Allocatable Pod IP Addresses on a Node.

    No
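For the VPC network model, the container CIDR block and the per-node pod IP pool together bound how many nodes a cluster can hold. A rough estimate can be computed as follows; this is a back-of-the-envelope sketch that ignores any addresses CCE itself reserves, and the example values are placeholders:

```python
import ipaddress

def estimate_capacity(container_cidr: str, pod_ips_per_node: int) -> dict:
    """Estimate how many nodes a container CIDR block can serve when each
    node reserves a fixed pool of pod IP addresses (VPC network model)."""
    block = ipaddress.ip_network(container_cidr)
    total_ips = block.num_addresses
    return {
        "total_pod_ips": total_ips,
        "max_nodes": total_ips // pod_ips_per_node,
    }

# Example: a /16 container CIDR with 128 pod IP addresses reserved per node
# yields 65536 pod IPs, enough for roughly 512 nodes.
cap = estimate_capacity("172.16.0.0/16", 128)
```

This illustrates why a too-small container CIDR block, or a too-large per-node reservation, limits how far the cluster can grow.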

  3. Configure Service network parameters. For details, see Table 4.

    Table 4 Service network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    Service CIDR Block

    Configure an IP address range for the ClusterIP Services in a cluster. This parameter controls the maximum number of ClusterIP Services in a cluster. ClusterIP Services enable communication between containers in a cluster. The Service CIDR block cannot overlap with the node subnet or container CIDR block.

    No

    Request Forwarding

    Configure load balancing and route forwarding of Service traffic in a cluster. IPVS and iptables are supported. For details, see Comparing iptables and IPVS.

    • iptables: the traditional kube-proxy mode. It applies to the scenario where the number of Services is small or a large number of short connections are concurrently sent on the client. IPv6 clusters do not support iptables.
    • IPVS: allows higher throughput and faster forwarding. It is suitable for large clusters or when there are a large number of Services.

    No
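Because the Service CIDR block must not overlap with the node subnet or the container CIDR block, it helps to check the planned blocks for conflicts before creating the cluster. A minimal sketch using Python's standard ipaddress module (the example CIDR values are placeholders, not recommendations):

```python
import ipaddress
from itertools import combinations

def find_overlaps(cidrs: dict) -> list:
    """Return the pairs of named CIDR blocks that overlap each other."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}
    return [
        (name_a, name_b)
        for (name_a, net_a), (name_b, net_b) in combinations(nets.items(), 2)
        if net_a.overlaps(net_b)
    ]

# Check the three planned blocks; an empty result means no conflicts.
overlaps = find_overlaps({
    "node_subnet": "192.168.0.0/24",
    "container_cidr": "172.16.0.0/16",
    "service_cidr": "10.247.0.0/16",
})
```

Running the check on conflicting blocks (for example, a Service CIDR inside the container CIDR) would return the offending pair by name.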

(Optional) Step 3: Configure Advanced Settings

Advanced settings extend and strengthen previous settings, enhancing security, stability, and compliance within clusters. This is achieved through capabilities like improved authentication, resource management, and security mechanisms.

Table 5 Advanced settings

Parameter

Description

Modifiable After Cluster Creation

IAM Authentication

CCE clusters support IAM authentication. You can call IAM authenticated APIs to access CCE clusters.

No

Certificate Authentication

Certificate authentication is used for identity authentication and access control. It ensures that only authorized users or services can access a specific cluster.

  • Automatically generated: CCE automatically creates and hosts X.509 certificates for your clusters. It automatically maintains and rotates cluster certificates.
  • Bring your own: You can add a custom certificate to your cluster and use this certificate for authentication. In this case, you need to upload the CA root certificate, client certificate, and client certificate private key.
    CAUTION:
    • Upload a file smaller than 1 MB. The CA certificate and client certificate can be in .crt or .cer format. The private key of the client certificate can only be uploaded unencrypted.
    • The validity period of the client certificate must be longer than five years.
    • The uploaded CA root certificate is used by the authentication proxy and for configuring the kube-apiserver aggregation layer. If any of the uploaded certificates is invalid, the cluster cannot be created.
    • In clusters of v1.25 and later, Kubernetes no longer supports certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithms. You are advised to use the certificates generated using the SHA-256 algorithm for authentication.

No

CPU Management

CPU management policies allow precise control over CPU allocation for pods. For details, see CPU Policy.

  • Disabled: The default CPU affinity policy is used. No affinity policy other than the default behavior of the OS scheduler is provided. Even when many CPUs remain available in the shared pool, workloads cannot exclusively use any of them.
  • Enabled: Workload pods can exclusively use CPUs. If a pod with a QoS class of Guaranteed requests an integer number of CPUs, the containers within the pod are pinned to physical CPUs on the host node. This mode benefits workloads sensitive to CPU cache hit ratio and scheduling latency.

Yes

Overload Control

After this function is enabled, concurrent requests will be dynamically controlled based on the resource demands received by master nodes, ensuring stable running of the master nodes and the cluster. For details, see Enabling Overload Control for a Cluster.

Yes

Cluster Deletion Protection

After this function is enabled, you will not be able to delete or unsubscribe from clusters on CCE. This option prevents accidental deletion of clusters through the console or APIs. You can modify this setting in Settings after the cluster is created.

Yes

Time Zone

Scheduled tasks and nodes in the cluster use the selected time zone.

No

Resource Tag

Adding tags to resources allows for customized classification and organization. A maximum of 20 resource tags can be added.

You can create predefined tags on the TMS console. These tags are available to all resources that support tagging and can improve tag creation and resource migration efficiency.

  • A tag key can have a maximum of 128 characters, including letters, digits, spaces, and special characters (-_.:=+@). It cannot start or end with a space, or start with _sys_. The key cannot be empty.
  • A tag value can have a maximum of 255 characters. It can only contain letters, digits, spaces, and special characters (-_.:/=+@). The value can be empty.

Yes

Description

Cluster description helps users and administrators quickly understand the basic settings, status, and usage of a cluster. The description can contain a maximum of 200 characters.

Yes
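The resource tag key and value rules in Table 5 can be pre-checked before submission. Below is a sketch of the validation logic; it assumes "letters" means ASCII letters, and the console performs the authoritative checks:

```python
import re

# Character sets from Table 5. The key allows -_.:=+@; the value also allows /.
KEY_CHARS = re.compile(r"^[A-Za-z0-9 \-_.:=+@]+$")
VALUE_CHARS = re.compile(r"^[A-Za-z0-9 \-_.:/=+@]*$")

def is_valid_tag_key(key: str) -> bool:
    """Key: 1-128 allowed characters, no leading/trailing space, no _sys_ prefix."""
    return (
        0 < len(key) <= 128
        and bool(KEY_CHARS.fullmatch(key))
        and not key.startswith((" ", "_sys_"))
        and not key.endswith(" ")
    )

def is_valid_tag_value(value: str) -> bool:
    """Value: up to 255 allowed characters; may be empty."""
    return len(value) <= 255 and bool(VALUE_CHARS.fullmatch(value))
```

For example, the key `env` with value `prod:cn-north-4` passes, while a key starting with `_sys_` or a space is rejected.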

Step 4: Select Add-ons

CCE provides a variety of add-ons to extend cluster functions and enhance the functionality and flexibility of containerized applications. You can select add-ons as required. Some basic add-ons are set as mandatory by default. If non-basic add-ons are not installed during cluster creation, they can still be added later on the Add-ons page after the cluster is created.

  1. Click Next: Select Add-on. On the page displayed, select the add-ons to be installed during cluster creation.
  2. Select basic add-ons to ensure the proper running of the cluster. For details, see Table 6.

    Table 6 Basic add-ons

    Add-on

    Description

    CCE Container Network (Yangtse CNI)

    This is the basic cluster add-on. It provides network connectivity, Internet access, and security isolation for pods in your cluster.

    CCE Container Storage (Everest)

    This add-on is installed by default. It is a cloud native container storage system based on CSI and supports cloud storage services such as EVS.

    CoreDNS

    This add-on is installed by default. It provides DNS resolution for your cluster and can be used to access the in-cloud DNS server.

    NodeLocal DNSCache

    (Optional) After you select this option, CCE will automatically install NodeLocal DNSCache. NodeLocal DNSCache improves cluster DNS performance by running a DNS cache proxy on cluster nodes.

  3. Select the observability add-ons to experience the full observability function. For details, see Table 7.

    Table 7 Observability add-ons

    Add-on

    Description

    Cloud Native Cluster Monitoring

    (Optional) After you select this option, CCE will automatically install Cloud Native Cluster Monitoring. Cloud Native Cluster Monitoring collects monitoring metrics for your cluster and reports the metrics to AOM. The agent mode does not support HPA based on custom Prometheus statements. If related functions are required, install this add-on manually after the cluster is created.

    CCE Node Problem Detector

    (Optional) After you select this option, CCE will automatically install CCE Node Problem Detector to detect faults and isolate nodes for prompt cluster troubleshooting.

Step 5: Configure Add-ons

Configure the selected add-ons to ensure they operate stably and accurately and meet service requirements.

  1. Click Next: Configure Add-on.
  2. Configure the basic add-ons. For details, see Table 8.

    Table 8 Basic add-on settings

    Add-on

    Description

    CCE Container Network (Yangtse CNI)

    This add-on is unconfigurable.

    CCE Container Storage (Everest)

    This add-on is unconfigurable. After the cluster is created, you can go to Add-ons to modify the settings.

    CoreDNS

    This add-on is unconfigurable. After the cluster is created, you can go to Add-ons to modify the settings.

    NodeLocal DNSCache

    This add-on is unconfigurable. After the cluster is created, you can go to Add-ons to modify the settings.

  3. Configure the observability add-ons. For details, see Table 9.

    Table 9 Observability add-on settings

    Add-on

    Description

    Cloud Native Cluster Monitoring

    Select an AOM instance for Cloud Native Cluster Monitoring to report metrics to. If no AOM instance is available, click Create Instance to create one.

    CCE Node Problem Detector

    This add-on is unconfigurable. After the cluster is created, you can go to Add-ons to modify the settings.

Step 6: Confirm Settings

Click Next: Confirm Settings. The cluster resource list is displayed. Confirm the information and click Submit.

It takes about 5 to 10 minutes to create a cluster. You can click Back to Cluster List to perform other operations on the cluster or click Go to Cluster Events to view the cluster details.

Helpful Links