Updated on 2023-11-15 GMT+08:00

Buying a CCE Cluster

On the CCE console, you can easily create Kubernetes clusters. Kubernetes can manage container clusters at scale. A cluster manages a group of node resources.

In CCE, you can create a CCE cluster that manages VMs as its nodes. Backed by high-performance network models, CCE clusters provide a secure and stable runtime environment for containers across a wide range of scenarios.

Notes and Constraints

  • During node creation, software packages are downloaded from OBS using a domain name. A private DNS server is required to resolve the OBS domain name, so the subnet where the node resides must be configured with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name.
  • You can create a maximum of 50 clusters in a single region.
  • After a cluster is created, the following items cannot be changed:
    • Number of master nodes in the cluster.
    • AZ of a master node.
    • Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (forwarding) settings.
    • Network model. For example, a tunnel network cannot be changed to a VPC network.

For more information, see Notes and Constraints.
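
To verify that a subnet's DNS server can resolve the OBS domain name, you can log in to an ECS in that subnet and run a quick check. This is a minimal sketch; the OBS endpoint shown is an example for one region, so substitute the endpoint of your own region:

  cat /etc/resolv.conf                            # confirm which DNS server the subnet provides
  nslookup obs.ap-southeast-1.myhuaweicloud.com   # example OBS endpoint; use your region's endpoint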

Procedure

  1. Log in to the CCE console. On the Dashboard page, click Buy Cluster. Alternatively, choose Resource Management > Clusters in the navigation pane and click Buy next to CCE Cluster.
  2. Set cluster parameters by referring to Table 1. Pay attention to the parameters marked with an asterisk (*).

    Table 1 Parameters for creating a cluster

    Billing Mode

    • Yearly/Monthly: a prepaid billing mode suitable for scenarios in which you have a good idea of what resources you will need during the billing period. Fees are paid in advance, but the service costs less overall. Yearly/monthly billed clusters cannot be directly deleted after creation. To stop using such a cluster, go to the Billing Center and unsubscribe from it.
    • Pay-per-use: a postpaid billing mode suitable for scenarios in which resources are billed based on usage frequency and duration. You can provision or delete resources at any time.

    This section uses the pay-per-use billing mode as an example.

    Region

    Select a region near you to ensure the lowest latency possible.

    Enterprise project

    This parameter is displayed only for enterprise users who have enabled the enterprise project function.

    After an enterprise project (for example, default) is selected, the cluster, nodes in the cluster, cluster security groups, node security groups, and elastic IPs (EIPs) of the automatically created nodes will be created in this enterprise project. After a cluster is created, you are advised not to modify the enterprise projects of nodes, cluster security groups, and node security groups in the cluster.

    An enterprise project facilitates project-level management and grouping of cloud resources and users.

    *Cluster Name

    Name of the new cluster, which cannot be changed after the cluster is created.

    A cluster name contains 4 to 128 characters. It must start with a lowercase letter and cannot end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

    Version

    Kubernetes community baseline version. The latest version is recommended.

    If a Beta version is available, you can use it for trial. However, it is not recommended for commercial use.

    Management Scale

    Maximum number of worker nodes that can be managed by the master nodes of the current cluster. You can select 50 nodes, 200 nodes, or 1,000 nodes for your cluster, or 2,000 nodes if you are buying a cluster of v1.15.11 or later.

    If you select 1,000 nodes, the master nodes of the cluster can manage a maximum of 1,000 worker nodes. The configuration fee varies with the master node specifications required for different management scales.

    Number of master nodes

    3: Three master nodes will be created to make the cluster highly available. If one master node is faulty, the cluster remains available and service functions are unaffected. Click Change. In the dialog box displayed, configure the following parameters:

    Disaster recovery level

    • AZ: Master nodes are deployed in different AZs for disaster recovery.
    • Fault domain: Master nodes are deployed in different failure domains in the same AZ for disaster recovery. This option is displayed only when the environment supports failure domains.
    • Host: Master nodes are deployed on different hosts in the same AZ for disaster recovery.
    • Customize: You can select different locations to deploy the master nodes. In fault domain mode, all master nodes must be in the same AZ.

    1: Only one master node is created in the cluster, so the cluster SLA cannot be guaranteed. Single-master clusters (non-HA clusters) are not recommended for commercial scenarios. Click Change. In the AZ Settings dialog box, select an AZ for the master node.

    NOTE:
    • You are advised to create multiple master nodes to improve the cluster DR capability in commercial scenarios.
    • The multi-master mode cannot be changed after the cluster is created. A single-master cluster cannot be upgraded to a multi-master cluster. For a single-master cluster, if a master node is faulty, services will be affected.
    • To ensure reliability, the multi-master mode is enabled by default for a cluster with 1,000 or more nodes.

    *VPC

    VPC where the cluster is located. The value cannot be changed after the cluster is created.

    A VPC provides a secure and logically isolated network environment.

    If no VPC is available, click Create a VPC to create a VPC. After the VPC is created, click the refresh icon.

    *Subnet

    Subnet where the node VM runs. The value cannot be changed after the cluster is created.

    A subnet provides dedicated network resources that are logically isolated from other networks for network security.

    If no subnet is available, click Create Subnet to create a subnet. After the subnet is created, click the refresh icon. For details about the relationship between VPCs, subnets, and clusters, see Cluster Overview.

    During node creation, software packages are downloaded from OBS using a domain name. A private DNS server is required to resolve the OBS domain name, so the subnet where the node resides must be configured with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name.

    The selected subnet cannot be changed after the cluster is created.

    Network Model

    After a cluster is created, the network model cannot be changed. Exercise caution when selecting a network model. For details about how to select a network model, see Overview.

    VPC network

    In this network model, each node occupies one VPC route. The number of VPC routes supported by the current region and the number of container IP addresses that can be allocated to each node (that is, the maximum number of pods that can be created) are displayed on the console.

    • The container network uses VPC routes to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. However, each node occupies one VPC route, and the maximum number of nodes allowed in a cluster depends on the VPC route quota.
    • Each node is assigned a CIDR block of a fixed size. VPC networks are free from packet encapsulation overheads and outperform container tunnel networks. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in the cluster can be directly accessed from outside the cluster.
      NOTE:
      • In the VPC network model, extended CIDR blocks and network policies are not supported.
      • When creating multiple clusters that use the VPC network model in one VPC, select a container CIDR block for each cluster that does not overlap with the VPC CIDR block or the container CIDR blocks of other clusters.

    Tunnel network

    • The container network is an overlay tunnel network built on top of a VPC network, using VXLAN. This model is applicable to scenarios that do not have high performance requirements.
    • VXLAN encapsulates Ethernet packets into UDP packets for tunnel transmission. Although tunnel encapsulation causes some performance loss, it delivers higher interoperability and compatibility with advanced features (such as network policy-based isolation), meeting the requirements of most applications.

    Container Network Segment

    An IP address range that can be allocated to container pods. After the cluster is created, the value cannot be changed.

    • If Automatically select is deselected, enter a CIDR block manually. If the CIDR block you specify conflicts with a subnet CIDR block, the system prompts you to select another CIDR block. The recommended CIDR blocks are 10.0.0.0/8-18, 172.16.0.0/16-18, and 192.168.0.0/16-18.

      If different clusters share a container CIDR block, an IP address conflict will occur and access to applications may fail.

    • If Automatically select is selected, the system automatically assigns a CIDR block that does not conflict with any subnet CIDR block.

    Set an appropriate mask for the container CIDR block. The mask determines the number of available addresses, and thus of nodes, in the cluster. If the address space is too small (that is, the mask is too large), the cluster will soon run short of addresses for new nodes. After the mask is set, the estimated maximum number of containers supported by the CIDR block is displayed.
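
    For example (an illustrative calculation, assuming the VPC network model in which each node receives a fixed /24 block): a container CIDR block of 10.0.0.0/16 provides 2^16 = 65,536 pod IP addresses, so the cluster can hold at most 2^(24-16) = 256 nodes, each with about 256 container addresses.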

    Service Network Segment

    An IP address range from which IP addresses are allocated to Kubernetes Services. After the cluster is created, the value cannot be changed. The Service CIDR block cannot conflict with existing routes. If a conflict occurs, select another CIDR block.

    • Default: The default CIDR block 10.247.0.0/16 will be used.
    • Custom: Manually set a CIDR block and mask based on service requirements. The mask determines the maximum number of Service IP addresses available in the cluster. For example, a /18 mask provides 2^14 = 16,384 addresses.

    Authorization Mode

    RBAC is selected by default and cannot be deselected.

    After RBAC is enabled, IAM users access resources in the cluster according to fine-grained permissions policies. For details, see Namespace Permissions (Kubernetes RBAC-based).
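
    For example, after the cluster is created, namespace-scoped permissions can be granted to an IAM user with a standard Kubernetes RoleBinding. The following is a minimal sketch (the binding name and namespace are examples; the exact user identifier format depends on how CCE maps IAM users):

      kubectl create rolebinding dev-viewer \
        --clusterrole=view \
        --user=<IAM-user-ID> \
        --namespace=dev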

    Authentication Mode

    The authentication mechanism controls user permission on resources in a cluster.

    The X.509-based authentication mode is enabled by default. X.509 is a commonly used certificate format.

    If you want to perform permission control on the cluster, select Enhanced authentication. The cluster will then identify users based on the header of each request for authentication.

    You need to upload your own CA certificate, client certificate, and client certificate private key (for details about how to create a certificate, see Certificates), and select I have confirmed that the uploaded certificates are valid.

    CAUTION:
    • Upload a file smaller than 1 MB. The CA certificate and client certificate can be in .crt or .cer format. The private key of the client certificate can only be uploaded unencrypted.
    • The validity period of the client certificate must be longer than five years.
    • The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. If the certificate is invalid, the cluster cannot be created.
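
    If you do not yet have certificates, a self-signed set can be generated with OpenSSL. This is a minimal sketch only (common names and validity periods are examples; note that the client certificate is issued for more than five years and its private key is left unencrypted, as required above):

      openssl genrsa -out ca.key 2048
      openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=example-ca" -out ca.crt
      openssl genrsa -out client.key 2048                         # unencrypted private key
      openssl req -new -key client.key -subj "/CN=example-user" -out client.csr
      openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 2200 -out client.crt                                # about six years of validity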

    Cluster Description

    Optional. Enter the description of the new container cluster.

    Advanced Settings

    Click Advanced Settings to expand the details page. The following functions are supported (unsupported functions in current AZs are hidden):

    Service Forwarding Mode

    • iptables: the traditional kube-proxy mode, in which Service load balancing is implemented with iptables rules. In this mode, a large number of iptables rules are generated when many Services are deployed, and non-incremental updates introduce latency and can cause noticeable performance issues under heavy service traffic.
    • ipvs: an optimized kube-proxy mode that delivers higher throughput and faster forwarding, ideal for large clusters. This mode supports incremental updates and keeps connections uninterrupted during Service updates.

      In this mode, when the ingress and Service use the same ELB instance, the ingress cannot be accessed from the nodes and containers in the cluster.

    NOTE:
    • ipvs provides better scalability and performance for large clusters.
    • Compared with iptables, ipvs supports more complex load balancing algorithms such as least load first (LLF) and weighted least connections (WLC).
    • ipvs supports server health checking and connection retries.
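
    To check which forwarding mode is actually in effect, you can log in to a worker node after the cluster is running. A minimal sketch (ipvsadm may need to be installed first):

      ipvsadm -Ln | head                  # a populated virtual server table indicates ipvs mode
      iptables-save | grep -c KUBE-SVC    # a large count of KUBE-SVC chains indicates iptables mode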

    CPU Policy

    This parameter is displayed only for clusters of v1.13.10-r0 and later.

    • On: Exclusive CPU cores can be allocated to workload pods. Select On if your workload is sensitive to latency in CPU cache and scheduling.
    • Off: Exclusive CPU cores will not be allocated to workload pods. Select Off if you want a large pool of shareable CPU cores.

    For details about CPU management policies, see Feature Highlight: CPU Manager.

    After CPU Policy is enabled, workloads cannot be started or created on a node whose specifications have been changed.
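
    With the CPU policy on (which corresponds to the Kubernetes static CPU Manager policy referenced above), exclusive cores are granted only to pods of the Guaranteed QoS class whose CPU requests are integers. A minimal sketch of such a pod (the pod name and image are examples):

      # Save as pod.yaml and create with: kubectl apply -f pod.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: cpu-pinned-example
      spec:
        containers:
        - name: app
          image: nginx:1.25
          resources:
            requests:
              cpu: "2"          # integer CPU count
              memory: 2Gi
            limits:
              cpu: "2"          # limits equal to requests -> Guaranteed QoS class
              memory: 2Gi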

    Validity Period

    For a yearly/monthly billed cluster, set the required duration.

  3. Click Next: Create Node and set the following parameters.

    • Create Node
      • Create now: Create a node when creating a cluster. Currently, only VM nodes are supported. If a node fails to be created, the cluster will be rolled back.
      • Create later: No node will be created. Only an empty cluster will be created.
    • Billing Mode: Select Yearly/Monthly or Pay-per-use.
      • Yearly/Monthly: a prepaid billing mode in which a resource is billed based on the purchase period. This mode is more cost-effective than pay-per-use and applies when the resource usage period can be estimated.
      • Pay-per-use: a postpaid billing mode in which resources are billed based on usage duration. You can provision or delete resources at any time.

      Nodes created along with the cluster inherit its billing mode. For example, if the cluster is billed on a pay-per-use basis, the nodes created along with it are also billed on a pay-per-use basis. For details, see Buying a Node.

      Yearly/monthly billed nodes cannot be directly deleted after creation. To stop using these nodes, go to the Billing Center and unsubscribe from them.

    • Current Region: geographic location of the nodes to be created.
    • AZ: Set this parameter based on the site requirements. An AZ is a physical region where resources use independent power supply and networks. AZs are physically isolated but interconnected through an internal network.
      You are advised to deploy worker nodes in different AZs after the cluster is created to make your workloads more reliable. When creating a cluster, you can deploy nodes only in one AZ.
      Figure 1 Worker nodes in different AZs
    • Node Type
      • VM node: A VM node will be created in the cluster.
    • Node Name: Enter a node name. A node name contains 1 to 56 characters starting with a lowercase letter and not ending with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.
    • Specifications: Select the node specifications based on service requirements. The available node specifications vary depending on AZs.

      To ensure node stability, CCE automatically reserves some resources to run necessary system components. For details, see Formula for Calculating the Reserved Resources of a Node.

    • OS: Select an OS for the node to be created.
      • Public image: Select an OS for the node.

        A public image is a standard, widely used image. It contains an OS and preinstalled public applications and is available to all users.

      • Private image (OBT): A private image contains an OS or service data, preinstalled public applications, and the owner's private applications. It is available only to the user who created it. Private images are supported only for clusters of v1.15 or later.

        If no private image is available, create one by following the instructions provided in .

      Reinstalling the OS or modifying OS configurations could make the node unavailable. Exercise caution when performing these operations.

    • System Disk: Set the system disk space of the worker node. The value ranges from 40 GB to 1,024 GB. The default value is 40 GB.

      By default, system disks support High I/O (SAS) and Ultra-high I/O (SSD) EVS disks.

    • Data Disk: Set the data disk space of the worker node. The value ranges from 100 GB to 32,768 GB. The default value is 100 GB. The EVS disk types provided for the data disk are the same as those for the system disk.

      If the data disk is detached or damaged, the Docker service becomes abnormal and the node becomes unavailable. You are advised not to delete the data disk.

      • LVM: If this option is selected, CCE data disks are managed by the Logical Volume Manager (LVM). In this case, you can adjust the disk space allocation for different resources. This option is selected for the first disk by default and cannot be deselected. You can enable or disable LVM for new data disks.
        • This option is selected by default, indicating that LVM management is enabled.
        • You can deselect the check box to disable LVM management.
          • Disk space of the data disks managed by LVM will be allocated according to the ratio you set.
          • When creating a node in a cluster of v1.13.10 or later, if LVM is not selected for a data disk, follow instructions in Adding a Second Data Disk to a Node in a CCE Cluster to fill in the pre-installation script and format the data disk. Otherwise, the data disk will still be managed by LVM.
          • When creating a node in a cluster earlier than v1.13.10, you must format the data disks that are not managed by LVM. Otherwise, either these data disks or the first data disk will be managed by LVM.
      • Add Data Disk: Currently, a maximum of two data disks can be attached to a node. After the node is created, you can go to the ECS console to attach more data disks. This function is available only to clusters of certain versions.
      • Data disk space allocation: Click to specify the resource ratio for Kubernetes Space and User Space. Disk space of the data disks managed by LVM will be allocated according to the ratio you set. This function is available only to clusters of certain versions.
        • Kubernetes Space: You can specify the ratio of the data disk space for storing Docker and kubelet resources. Docker resources include the Docker working directory, Docker images, and image metadata. kubelet resources include pod configuration files, secrets, and emptyDirs.

          The Docker space cannot be less than 10%, and the space size cannot be less than 60 GB. The kubelet space cannot be less than 10%.

          The Docker space size is determined by your service requirements. For details, see Data Disk Space Allocation.

        • User Space: You can set the ratio of the disk space that is not allocated to Kubernetes resources and the path to which the user space is mounted.

          Note that the mount path cannot be /, /home/paas, /var/paas, /var/lib, /var/script, /var/log, /mnt/paas, or /opt/cloud, and cannot conflict with the system directories (such as bin, lib, home, root, boot, dev, etc, lost+found, mnt, proc, sbin, srv, tmp, var, media, opt, selinux, sys, and usr). Otherwise, the system or node installation will fail.

      • The ratio of disk space allocated to the Kubernetes space and user space must be equal to 100% in total. You can click to refresh the data after you have modified the ratio.
      • By default, disks run in the direct-lvm mode. If data disks are removed, the loop-lvm mode will be used and this will impair system stability.
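
      To see how the data disk was divided after the node is created, log in to the node and inspect the LVM layout. A minimal sketch (volume group and logical volume names vary with the cluster version):

        lsblk    # disks, partitions, and mount points
        vgs      # LVM volume groups created on the data disk
        lvs      # logical volumes holding the Kubernetes and user spaces
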
    • VPC: A VPC where the current cluster is located. This parameter cannot be changed and is displayed only for clusters of v1.13.10-r0 or later.
    • Subnet: A subnet improves network security by providing exclusive network resources that are isolated from other networks. You can select any subnet in the cluster VPC. Cluster nodes can belong to different subnets.

      During node creation, software packages are downloaded from OBS using a domain name. A private DNS server is required to resolve the OBS domain name, so the subnet where the node resides must be configured with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name.

      When a node is added to an existing cluster and the subnet belongs to an extended CIDR block added to the VPC, you need to add three security group rules to the master node security group (named in the format Cluster name-cce-control-Random number) to ensure that the new nodes are available. (This step is not required if an extended CIDR block had already been added to the VPC when the cluster was created.)

    • EIP: an independent public IP address. If the nodes to be created require public network access, select Automatically assign or Use existing.
      An EIP bound to the node allows public network access. EIP bandwidth can be modified at any time. An ECS without a bound EIP cannot access the Internet or be accessed by public networks.
      • Do not use: A node without an EIP cannot be accessed from public networks. It can be used only as a cloud server for deploying services or clusters on a private network.
      • Automatically assign: An EIP with specified configurations is automatically assigned to each node. If the number of EIPs is smaller than the number of nodes, the EIPs are randomly bound to the nodes.

        Configure the EIP specifications, billing factor, bandwidth type, and bandwidth size as required. When creating an ECS, ensure that the elastic IP address quota is sufficient.

      • Use existing: Existing EIPs are assigned to the nodes to be created.

      By default, VPC's SNAT feature is disabled for CCE. If SNAT is enabled, you do not need to use EIPs to access public networks. For details about SNAT, see Custom Policies.

    • Login Mode: You can use a password or key pair.
      • Password: The default username is root. Enter the password for logging in to the node and confirm the password.

        Be sure to remember the password as you will need it when you log in to the node.

      • Key pair: Select the key pair used to log in to the node. You can select a shared key.

        A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create a key pair.

        When creating a node using a key pair, an IAM user can select only the key pairs created by that user, regardless of whether the users are in the same group. For example, user B cannot use a key pair created by user A to create a node; the key pair is not even displayed in the drop-down list on the CCE console.

        Figure 2 Key pair
    • Advanced ECS Settings (optional): Click to show advanced ECS settings.
      • ECS Group: An ECS group logically groups ECSs. The ECSs in the same ECS group comply with the same policy associated with the ECS group.
        • Anti-affinity: ECSs in an ECS group are deployed on different physical hosts to improve service reliability.

        Select an existing ECS group, or click Create ECS Group to create one. After the ECS group is created, click the refresh button.

      • Resource Tags: By adding tags to resources, you can classify resources.

        You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and migration efficiency.

        CCE will automatically create the "CCE-Dynamic-Provisioning-Node=node id" tag. A maximum of 5 tags can be added.

      • Agency: An agency is created by a tenant administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources. To authorize an ECS or BMS to call cloud services, select Cloud service as the agency type, click Select, and then select ECS BMS.
      • Pre-installation Script: Enter a maximum of 1,000 characters.

        The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed. The script is usually used to format data disks.
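
        For example, a pre-installation script that formats and mounts a data disk not managed by LVM might look like the sketch below. The device name and mount point are assumptions (verify the device with lsblk first); see Adding a Second Data Disk to a Node in a CCE Cluster for the authoritative procedure:

          mkfs.ext4 /dev/vdb                                     # assumed device name; confirm with lsblk
          mkdir -p /data                                         # example mount point
          mount /dev/vdb /data
          echo "/dev/vdb /data ext4 defaults 0 0" >> /etc/fstab  # mount again after reboots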

      • Post-installation Script: Enter a maximum of 1,000 characters.

        The script will be executed after Kubernetes software is installed and will not affect the installation. The script is usually used to modify Docker parameters.
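
        For example, a post-installation script typically adjusts a Docker setting and restarts Docker. The skeleton below is only a sketch: CCE preconfigures /etc/docker/daemon.json, so back the file up and merge changes into it rather than overwriting it:

          cp /etc/docker/daemon.json /etc/docker/daemon.json.bak   # back up the existing configuration
          # ...merge your Docker parameter changes into /etc/docker/daemon.json here...
          systemctl restart docker                                 # apply the changes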

      • Subnet IP Address: Select Automatically assign IP address (recommended) or Manually assign IP addresses.

        If you assign IP addresses manually, note that the master node IP address is specified randomly and may therefore conflict with a worker node IP address. If you prefer manual assignment, you are advised to place worker nodes in a subnet CIDR block different from that of the master nodes.

    • Advanced Kubernetes Settings: (Optional) Click to show advanced cluster settings.
      • Max Pods: maximum number of pods that can be created on a node, including the system's default pods. If the cluster uses the VPC network model, the maximum value is determined by the number of IP addresses that can be allocated to containers on each node.

        This limit prevents the node from being overloaded by managing too many pods. For details, see Maximum Number of Pods That Can Be Created on a Node.
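
        After a node is created, the effective pod capacity can be read from the node object. A minimal sketch (replace <node-name> with an actual node name):

          kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'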

      • Maximum Data Space per Container: maximum data space that can be used by a container. The value ranges from 10 GB to 500 GB. If the value of this field is larger than the data disk space allocated to Docker resources, the latter will override the value specified here. Typically, 90% of the data disk space is allocated to Docker resources. This parameter is displayed only for clusters of v1.13.10-r0 and later.
    • Nodes: The value cannot exceed the management scale you selected when configuring the cluster parameters. Set this parameter based on service requirements and the remaining quota displayed on the page. Click to view the factors that limit the number of nodes to be added; the smallest of these values applies.
    • Validity Period: If the cluster billing mode is yearly/monthly, set the number of months or years for which you will use the new node.

  4. Click Next: Install Add-on, and select the add-ons to be installed in the Install Add-on step.

    System resource add-ons must be installed. Advanced functional add-ons are optional.

    You can also install all add-ons after the cluster is created. To do so, choose Add-ons in the navigation pane of the CCE console and select the add-on you will install. For details, see Add-ons.

  5. Click Next: Confirm. Read the product instructions and select I am aware of the above limitations. Confirm the configured parameters, specifications, and fees.
  6. Click Submit.

    If the cluster is billed on a yearly/monthly basis, click Pay Now and follow the on-screen prompts to pay for the order.

    It takes about 6 to 10 minutes to create a cluster. You can click Back to Cluster List to perform other operations on the cluster or click Go to Cluster Events to view the cluster details. If the cluster status is Available, the cluster is successfully created.

Related Operations

  • Create a namespace. You can create multiple namespaces in a cluster and organize resources in the cluster into different namespaces. These namespaces serve as logical groups and can be managed separately. For more information about how to create a namespace for a cluster, see Namespaces.
  • Create a workload. Once the cluster is created, you can use an image to create an application that can be accessed from public networks. For details, see Creating a Deployment or Creating a StatefulSet.
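
    As a quick command-line illustration of the two operations above (the namespace name and image are examples; the console procedures linked above remain the recommended path):

      kubectl create namespace demo
      kubectl create deployment web --image=nginx:1.25 -n demo
      kubectl expose deployment web --port=80 --type=NodePort -n demo
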
  • Click the cluster name to view cluster details.
    Table 2 Cluster details

    Cluster Details

    View the details and operating status of the cluster.

    Monitoring

    You can view the CPU and memory allocation rates of all nodes in the cluster (that is, the maximum allocated amount), as well as the CPU usage, memory usage, and specifications of the master node(s).

    Events

    • View cluster events on the Events tab page.
    • Set search criteria. For example, you can set the time segment or enter an event name to view corresponding events.

    Auto Scaling

    You can configure auto scaling to add or reduce worker nodes in a cluster to meet service requirements. For details, see Setting Cluster Auto Scaling.

    Clusters of v1.17 do not support auto scaling using AOM. You can use node pools for auto scaling. For details, see Node Pool Overview.

    kubectl

    To access a Kubernetes cluster from a PC, you need to use the Kubernetes command line tool kubectl. For details, see Connecting to a Cluster Using kubectl.
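
    A minimal sketch of a first connection (the kubeconfig path is an example; download the actual file from the CCE console as described in the linked guide):

      export KUBECONFIG=$HOME/kubeconfig.json   # file obtained from the CCE console
      kubectl cluster-info                      # verify that the cluster responds
      kubectl get nodes                         # list the worker nodes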