Buying a Kunpeng Cluster

Containers in CCE's Kunpeng clusters can run on cloud servers that use the Arm architecture and Kunpeng processors. Kunpeng-accelerated cloud servers are easy to deploy and provide scaling and scheduling performance comparable to that of x86-based cloud servers, at only a fraction of the cost.

Kunpeng clusters are supported only in CN North-Beijing4, CN North-Ulanqab1, CN East-Shanghai1, CN East-Shanghai2, CN South-Guangzhou, AP-Singapore, and AF-Johannesburg.

Notes and Constraints

  • During node creation, software packages are downloaded from OBS using a domain name. You need to use a private DNS server to resolve the OBS domain name, so configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name. A quick check is shown after this list.
  • You can create a maximum of 50 clusters in a single region. If more clusters are required, you can click here to increase your quota.
  • Kunpeng clusters do not support obsfs. Therefore, parallel file systems cannot be mounted.
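
  A quick way to verify that the subnet DNS can resolve the OBS domain name (a minimal sketch; the endpoint shown is the OBS endpoint for CN North-Beijing4, so substitute your region's endpoint, and run the command on an ECS in the node subnet):

    nslookup obs.cn-north-4.myhuaweicloud.com   # success returns IP addresses; a failure means the subnet DNS cannot resolve OBS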

Procedure

  1. Log in to the CCE console. In the navigation pane on the left, choose Resource Management > Clusters.
  2. In the Kunpeng Cluster card, click Buy.

    Figure 1 Buying a Kunpeng cluster

  3. Set cluster parameters. Pay attention to the parameters marked with an asterisk (*).

    Table 1 Parameters for creating a cluster

    Parameter

    Description

    Billing Mode

    • Yearly/Monthly: a prepaid billing mode suitable when you have a good idea of what resources you will need during the billing period. Fees are paid in advance, but services are less expensive. Yearly/monthly billed clusters cannot be deleted after creation. To stop using these clusters, go to the Billing Center and unsubscribe from them.
    • Pay-per-use: a postpaid billing mode suitable in scenarios where resources will be billed based on usage frequency and duration. You can provision or delete resources at any time.

    Region

    To minimize network latency and resource access time, select the nearest region. Cloud resources are region-specific and cannot be used across regions through internal network connections.

    Enterprise Project

    This parameter is displayed only for enterprise users who have enabled the enterprise project function. After an enterprise project (for example, default) is selected, the cluster, nodes in the cluster, cluster security groups, node security groups, and elastic IPs (EIPs) of the automatically created nodes will be created in this enterprise project. After a cluster is created, you are advised not to modify the enterprise projects of nodes, cluster security groups, and node security groups in the cluster.

    An enterprise project facilitates project-level management and grouping of cloud resources and users. For more information, see Enterprise Management.

    * Cluster Name

    Name of the new cluster, which cannot be changed after the cluster is created.

    A cluster name contains 4 to 128 characters. It must start with a lowercase letter and cannot end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

    Version

    Kubernetes community baseline version. The latest version is recommended. For details about version changes, see Overview.

    If a Beta version is available, you can use it for trial. However, it is not recommended for commercial use.

    Management Scale

    Maximum number of worker nodes that can be managed by the master nodes of the current cluster. You can select 50 nodes, 200 nodes, or 1,000 nodes for your cluster. The management scale cannot be changed after the cluster is created.

    If you select 1,000 nodes, the master nodes of the cluster can manage a maximum of 1,000 worker nodes. The configuration fee varies depending on the specifications of master nodes for different management scales.

    Each cluster contains at least one master node and at least one worker node. A node is a cloud server.
    • Master node: a node that controls the worker nodes in the cluster. The master node is automatically created along with the cluster, and it manages and schedules the entire cluster.
    • Worker node: a node purchased or accepted into the cluster by the user. Worker nodes run your workloads under the control of the master nodes. If a worker node goes down, the master nodes migrate your workloads to another worker node.

    Number of Master Nodes

    3: Three master nodes will be created. If one master node is faulty, the cluster remains available without affecting service functions. Click Change and, in the Disaster Recovery Settings dialog box, select a DR level.

    • AZ: Master nodes are deployed in different AZs for disaster recovery.
    • Fault domain: Master nodes are deployed in different failure domains in the same AZ for disaster recovery. This option is displayed only when the environment supports failure domains.
    • Host: Master nodes are deployed on different hosts in the same AZ for disaster recovery.
    • Customize: You can select different locations to deploy different master nodes. In the fault domain mode, master nodes must be in the same AZ.

    1: Only one master node will be created in the cluster, so the cluster SLA cannot be guaranteed. Single-master clusters are not recommended for commercial scenarios. Click Change and, in the AZ Settings dialog box, select an AZ for the master node.

    NOTE:
    • You are advised to create multiple master nodes to improve the cluster DR capability in commercial scenarios.
    • The multi-master mode cannot be changed after the cluster is created. A single-master cluster cannot be upgraded to a multi-master cluster. For a single-master cluster, if a master node is faulty, services will be affected.
    • To ensure reliability, the multi-master mode is enabled by default for a cluster with 1,000 or more nodes.

    * VPC

    VPC where the cluster is located. The value cannot be changed after the cluster is created.

    A VPC provides a secure and logically isolated network environment.

    If no VPC is available, click Create a VPC to create a VPC. After the VPC is created, click the refresh icon. For details, see Creating a VPC.

    * Subnet

    Subnet where the node VM runs. The value cannot be changed after the cluster is created.

    A subnet provides dedicated network resources that are logically isolated from other networks for network security.

    If no subnet is available, click Create Subnet to create a subnet. After the subnet is created, click the refresh icon. For details about the relationship between VPCs, subnets, and clusters, see Cluster Overview.

    Ensure that the DNS server in the subnet can resolve the OBS domain name. Otherwise, nodes cannot be created.

    Network Model

    After a cluster is created, the network model cannot be changed. Exercise caution when selecting a network model. For details about how to select a network model, see Selecting a Network Model When Creating a Cluster on CCE.

    VPC network

    In this network model, each node occupies one VPC route. The number of VPC routes supported by the current region and the number of container IP addresses that can be allocated to each node (that is, the maximum number of pods that can be created) are displayed on the console.

    • The container network uses VPC routes to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network.
    • Each node is assigned a CIDR block of a fixed size. VPC networks are free from packet encapsulation overheads and outperform container tunnel networks. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in the cluster can be directly accessed from outside the cluster.
      NOTE:
      • In the VPC network model, extended CIDR blocks and network policies are not supported.
      • When creating multiple clusters with the VPC network model in one VPC, select a container CIDR block for each cluster that does not overlap with the VPC CIDR block or the container CIDR blocks of other clusters.

    Tunnel network

    When the tunnel network is used, only nodes of the same type can be added; that is, all nodes must be either VM nodes or bare metal nodes.

    • The container network is an overlay tunnel network built on top of a VPC network using the VXLAN technology. This network model is applicable when performance requirements are not high.
    • VXLAN encapsulates Ethernet packets into UDP packets for tunnel transmission. Though tunnel encapsulation incurs some performance cost, it provides higher interoperability and compatibility with advanced features (such as network policy-based isolation), meeting the requirements of most applications.

    Container Network Segment

    An IP address range that can be allocated to container pods. After the cluster is created, the value cannot be changed.

    • If Automatically select is deselected, enter a CIDR block manually. If the CIDR block you specify conflicts with a subnet CIDR block, the system prompts you to select another CIDR block. The recommended CIDR blocks are 10.0.0.0/8-18, 172.16.0.0/16-18, and 192.168.0.0/16-18.

      If different clusters share a container CIDR block, an IP address conflict will occur and access to the applications in the clusters may fail.

    • If Automatically select is selected, the system automatically assigns a CIDR block that does not conflict with any subnet CIDR block.

    The mask of the container CIDR block must be appropriate, as it determines the number of available nodes in a cluster. If the container CIDR block is too small, the cluster will soon run short of node capacity. After the mask is set, the estimated maximum number of nodes supported by the current CIDR block is displayed. For details, see Which CIDR Blocks Does CCE Support?
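
    A worked example (a sketch; the block size actually assigned per node depends on the network model and the Max Pods setting): in the VPC network model, if each node is assigned a /24 block (256 container addresses), a 10.0.0.0/16 container CIDR block supports at most 2^(24-16) = 256 nodes.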

    Service Network Segment

    An IP address range that can be allocated to Kubernetes Services. The value cannot be changed after the cluster is created. The Service CIDR block cannot conflict with existing routes; if a conflict occurs, select another CIDR block.

    • Default: The default CIDR block 10.247.0.0/16 will be used.
    • Custom: Manually set a CIDR block and mask based on service requirements. The mask determines the maximum number of Service IP addresses available in the cluster.

    For details, see Which CIDR Blocks Does CCE Support?
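
    For example (illustrative arithmetic), the default 10.247.0.0/16 block provides 2^(32-16) = 65,536 Service IP addresses, whereas a /18 mask would provide 2^(32-18) = 16,384.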

    Authorization Mode

    RBAC is selected by default and cannot be deselected.

    After RBAC is enabled, IAM users access resources in the cluster according to fine-grained permissions policies.

    Authentication Mode

    The authentication mechanism determines how users are identified and controls their access to resources in the cluster.

    The X.509-based authentication mode is enabled by default. X.509 is a commonly used certificate format.

    If you want to perform enhanced permission control on the cluster, select Enhanced authentication. The cluster will then identify users for authentication based on the request header.

    You need to upload your own CA certificate, client certificate, and client certificate private key (for details about how to create a certificate, see Certificates), and select I have confirmed that the uploaded certificates are valid.

    CAUTION:
    • Upload files smaller than 1 MB. The CA certificate and client certificate can be in .crt or .cer format. The client certificate private key must be uploaded unencrypted.
    • The validity period of the client certificate must be longer than five years.
    • The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. If the certificate is invalid, the cluster cannot be created.
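
    Before uploading, you can sanity-check the certificate files locally (a minimal sketch; it assumes openssl is available, and the file names are illustrative):

      openssl x509 -in client.crt -noout -dates   # prints notBefore/notAfter; check the validity period
      openssl rsa -in client.key -check -noout    # prompts for a passphrase if the private key is encrypted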

    Cluster Description

    Optional. Enter the description of the new container cluster.

    Advanced Settings

    Click Advanced Settings to expand the details. The following functions are supported (functions not supported in the current AZ are hidden):

    Service Forwarding Mode

    • iptables: Traditional kube-proxy mode in which Service load balancing is implemented with iptables rules. In this mode, too many iptables rules are generated when a large number of Services are deployed. In addition, non-incremental updates cause latency and even tangible performance issues when service traffic spikes.
    • ipvs: kube-proxy mode optimized by Huawei for higher throughput and faster forwarding. This mode supports incremental updates and keeps connections uninterrupted during Service updates. It is suitable for large clusters. You can check which mode a cluster is using as shown after the note below.

      In this mode, when the ingress and Service use the same ELB instance, the ingress cannot be accessed from the nodes and containers in the cluster.

    NOTE:
    • ipvs provides better scalability and performance for large clusters.
    • Compared with iptables, ipvs supports more complex load balancing algorithms such as least load first (LLF) and weighted least connections (WLC).
    • ipvs supports server health checking and connection retries.
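
    To confirm which forwarding mode a cluster is actually using, you can inspect a node over SSH (a hedged sketch; ipvsadm is not necessarily preinstalled on the node):

      ipvsadm -Ln                         # ipvs mode: lists one virtual service per Service IP and port, with pods as real servers
      iptables-save | grep -c KUBE-SVC    # iptables mode: a large count of KUBE-SVC chains; near zero in ipvs mode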

    Resource Tags

    By adding tags to resources, you can classify resources.

    You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and migration efficiency. For details, see Creating Predefined Tags.

    CPU Policy

    This parameter is displayed only for clusters of v1.13.10-r0 and later.

    • On: Exclusive CPU cores can be allocated to workload pods. Select On if your workload is sensitive to latency in CPU cache and scheduling.
    • Off: Exclusive CPU cores will not be allocated to workload pods. Select Off if you want a large pool of shareable CPU cores.

    For details about CPU management policies, see Feature Highlight: CPU Manager.

    After CPU Policy is enabled, workloads cannot be started or created on nodes after the node specifications are changed. For details about how to solve this problem, see What Should I Do If I Fail to Restart or Create Workloads on a Node After Modifying the Node Specifications?
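
    With CPU Policy set to On (the Kubernetes static CPU manager policy), a pod in the Guaranteed QoS class that requests an integer number of CPUs is pinned to exclusive cores. Below is a minimal sketch of such a pod; the names and image are illustrative, and the manifest can be applied with kubectl apply -f:

      apiVersion: v1
      kind: Pod
      metadata:
        name: cpu-pinned-demo
      spec:
        containers:
        - name: app
          image: nginx:alpine
          resources:
            requests:
              cpu: "2"        # integer CPU count equal to the limit -> Guaranteed QoS, exclusive cores
              memory: 1Gi
            limits:
              cpu: "2"
              memory: 1Gi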

    Validity Period

    For a yearly/monthly billed cluster, set the required duration.

  4. Click Next: Create Node. On the Create Node page, set the following parameters.

    • Create Node
      • Create now: Create a node when creating a cluster. Currently, only VM nodes are supported. If a node fails to be created, the cluster will be rolled back.
      • Create later: No node will be created. Only an empty cluster will be created. After the cluster is created, you can add nodes that run on VMs or bare metal servers.
    • Billing Mode: Select Yearly/Monthly or Pay-per-use.
      • Yearly/Monthly: a prepaid billing mode, in which a resource is billed based on the purchase period. This mode is more cost-effective than the pay-per-use mode and applies if the resource usage period can be estimated.
      • Pay-per-use: a postpaid billing mode suitable in scenarios where resources will be billed based on usage duration. You can provision or delete resources at any time.

      Nodes created along with a cluster inherit the cluster's billing mode. For example, if the cluster is billed on a pay-per-use basis, the nodes created along with it must also be billed on a pay-per-use basis. For details, see Buying a Node.

      Yearly/monthly billed nodes cannot be deleted after creation. To stop using these nodes, go to the Billing Center and unsubscribe from them.

    • Current Region: geographic location of the nodes to be created.
    • AZ: Set this parameter based on the site requirements. An AZ is a physical region where resources use independent power supply and networks. AZs are physically isolated but interconnected through an internal network.
      You are advised to deploy worker nodes in different AZs after the cluster is created to make your workloads more reliable. When creating a cluster, you can deploy nodes only in one AZ.
      Figure 2 Worker nodes in different AZs
    • Node Type: VM node is selected by default.
    • Node Name: Enter a node name. A node name contains 1 to 56 characters starting with a lowercase letter and not ending with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

      If you change the node name on the ECS console after the node is created, be sure to synchronize the new node name from ECS to CCE. For details, see Synchronizing Node Data.

    • Specifications: Select node specifications that best fit your business needs.
      • Kunpeng general computing-plus: These specifications are suitable for governments, enterprises, and financial institutions with strict security and privacy requirements; Internet applications with high network performance requirements; big data and HPC workloads requiring many vCPUs; and website and e-commerce setups requiring cost-effectiveness.
      • Kunpeng memory-optimized: These specifications are suited for large-memory datasets with high network performance requirements. Using Huawei-proprietary Kunpeng 920 processors and high-speed intelligent Hi1822 NICs, the KM1 ECSs provide up to 480 GB of DDR4 memory for large-memory applications.
      • Ascend-accelerated: Ascend-accelerated nodes powered by HiSilicon Ascend 310 AI processors are applicable to scenarios such as image recognition, video processing, inference computing, and machine learning.
        • Ascend-accelerated nodes are available only in certain AZs.
        • Before using Ascend-accelerated nodes, you must install the huawei-npu add-on to ensure that workloads using Ascend 310 processors can run properly. Click here to install the add-on.
        • After the nodes are created, the Ascend 310 processor driver is installed and the nodes are automatically restarted. During the restart, the nodes are temporarily unavailable. They will automatically recover after the restart.
      Figure 3 Selecting node specifications

      To ensure node stability, CCE automatically reserves some resources to run necessary system components. For details, see Formula for Calculating the Reserved Resources of a Node.

    • OS: Select the operating system (OS) of the nodes to be created.
    • System Disk: Set the system disk space of the worker node. The value ranges from 40 GB to 1,024 GB. The default value is 40 GB.

      By default, system disks support High I/O (SAS) and Ultra-high I/O (SSD) EVS disks. For details, see EVS Disk Overview.

      Encryption: System disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. This function is available only in certain regions.
      • Encryption is not selected by default.
      • After you select Encryption, you can select an existing key in the displayed Encryption Setting dialog box. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon.
    • Data Disk: Set the data disk space of the worker node. The value ranges from 100 GB to 32,768 GB. The default value is 100 GB. The types of EVS disks supported for data disks are the same as those for system disks.

      If the data disk is detached or damaged, the Docker service becomes abnormal and the node becomes unavailable. You are advised not to delete the data disk.

      • Add Data Disk: Currently, a maximum of two data disks can be attached to a node. After the node is created, you can go to the ECS console to attach more data disks.
      • LVM: If this option is selected, CCE data disks are managed by the Logical Volume Manager (LVM), and you can adjust the disk space allocation for different resources. This option is selected for the first disk by default and cannot be deselected. You can choose whether to enable LVM for new data disks.
        • This option is selected by default, indicating that LVM management is enabled.
        • You can deselect the check box to disable LVM management.
          • Disk space of the data disks managed by LVM will be allocated according to the ratio you set.
          • When creating a node in a cluster of v1.13.10 or later, if LVM is not selected for a data disk, follow instructions in Adding a Second Data Disk to a Node in a CCE Cluster to fill in the pre-installation script and format the data disk. Otherwise, the data disk will still be managed by LVM.
          • When creating a node in a cluster earlier than v1.13.10, you must format the data disks that are not managed by LVM. Otherwise, either these data disks or the first data disk will be managed by LVM.
          • By default, disks run in the direct-lvm mode. If data disks are removed, the loop-lvm mode will be used and this will impair system stability.
      • Encryption: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function.
        This function is supported only for clusters of v1.13.10 or later in certain regions, and is not displayed for clusters of v1.13.10 or earlier.
        • Encryption is not selected by default.
        • After you select Encryption, you can select an existing key in the displayed Encryption Setting dialog box. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon.
      • Data disk space allocation: Click to specify the resource ratio for Kubernetes Space and User Space.
        • Kubernetes Space: You can specify the ratio of the data disk space for storing Docker and kubelet resources. Docker resources include the Docker working directory, Docker images, and image metadata. kubelet resources include pod configuration files, secrets, and emptyDirs.
        • User Space: You can set the ratio of the disk space that is not allocated to Kubernetes resources and the path to which the user space is mounted.
        • The disk space allocated to the Kubernetes space and the user space must add up to 100%. After you modify the ratio, click the refresh icon to refresh the data.
        • The path inside a node cannot be set to the root directory /. Otherwise, the mounting fails. Mount paths can be as follows:
          • /opt/xxxx (excluding /opt/cloud)
          • /mnt/xxxx (excluding /mnt/paas)
          • /tmp/xxx
          • /var/xxx (excluding key directories such as /var/lib, /var/script, and /var/paas)
          • /xxxx (The path cannot conflict with system directories such as bin, lib, home, root, boot, dev, etc, lost+found, mnt, proc, sbin, srv, tmp, var, media, opt, selinux, sys, and usr.)

          Do not set this parameter to /home/paas, /var/paas, /var/lib, /var/script, /mnt/paas, or /opt/cloud. Otherwise, the system or node installation will fail.

    • VPC: A VPC where the current cluster is located. This parameter cannot be changed and is displayed only for clusters of v1.13.10-r0 or later.
    • Subnet: A subnet improves network security by providing exclusive network resources that are isolated from other networks. You can select any subnet in the cluster VPC. Cluster nodes can belong to different subnets.

      During node creation, software packages are downloaded from OBS using a domain name. You need to use a private DNS server to resolve the OBS domain name, so configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name (see the check in Notes and Constraints).

      When a node is added to an existing cluster, if the VPC corresponding to the subnet has an extended CIDR block and the node subnet belongs to that extended CIDR block, you need to add the following three security group rules to the master node security group (named in the format Cluster name-cce-control-Random number). These rules ensure that the nodes added to the cluster are available. (This step is not required if an extended CIDR block was added to the VPC during cluster creation.)

    • EIP: an independent public IP address. If the nodes to be created require public network access, select Automatically assign or Use existing. This parameter is not displayed when IPv6 is enabled for the cluster.
      An EIP bound to the node allows public network access. EIP bandwidth can be modified at any time. An ECS without a bound EIP cannot access the Internet or be accessed by public networks. For details, see EIP Overview.
      • Do not use: A node without an EIP cannot be accessed from public networks. It can be used only as a cloud server for deploying services or clusters on a private network.
      • Automatically assign: An EIP with specified configurations is automatically assigned to each node. If the number of EIPs is less than the number of nodes, the EIPs are randomly bound to the nodes.

        Configure the EIP specifications, billing factor, bandwidth type, and bandwidth size as required. When creating an ECS, ensure that the elastic IP address quota is sufficient.

      • Use existing: Existing EIPs are assigned to the nodes to be created.

      By default, VPC's SNAT feature is disabled for CCE. If SNAT is enabled, you do not need to use EIPs to access public networks. For details about SNAT, see Custom Policies.

    • Shared Bandwidth: Select Do not use or Use existing. This parameter is displayed only when IPv6 is enabled for the cluster.

      An EIP bound to the node allows public network access. EIP bandwidth can be modified at any time. An ECS without a bound EIP cannot access the Internet or be accessed by public networks.

    • Login Mode: You can use a password or key pair.
      • Password: The default username is root. Enter the password for logging in to the node and confirm the password.

        Be sure to remember the password as you will need it when you log in to the node.

      • Key pair: Select the key pair used to log in to the node. You can select a shared key.

        A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create a key pair. For details on how to create a key pair, see Creating a Key Pair.

        When creating a node using a key pair, an IAM user can select only the key pairs created by that user, regardless of whether these users are in the same group. For example, user B cannot use the key pair created by user A to create a node, and user A's key pair is not displayed in the drop-down list on the CCE console.

        Figure 4 Key pair
    • Advanced ECS Settings (optional): Click to show advanced ECS settings.
      • ECS Group: An ECS group logically groups ECSs. The ECSs in the same ECS group comply with the same policy associated with the ECS group.
        • Anti-affinity: ECSs in an ECS group are deployed on different physical hosts to improve service reliability.
        • Fault domain: ECSs in an ECS group are deployed in multiple failure domains so that a failure in one failure domain will not affect the ECSs in other failure domains, thereby improving service reliability. This option is displayed only when the environment supports failure domains. This option is not supported if a worker node is deployed in a random AZ.

        Select an existing ECS group, or click Create ECS Group to create one. After the ECS group is created, click the refresh button.

      • Resource Tags: By adding tags to resources, you can classify resources.

        You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and migration efficiency. For details, see Creating Predefined Tags.

        CCE will automatically create the "CCE-Dynamic-Provisioning-Node=node id" tag. A maximum of 5 tags can be added.

      • Agency: An agency is created by a tenant administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources. For details on how to create an agency, see Cloud Service Delegation. To authorize an ECS or BMS to call cloud services, select Cloud service as the agency type, click Select, and then select ECS BMS.
      • Pre-installation Script: Enter a maximum of 1,000 characters.

        The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed. The script is usually used to format data disks.
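
        A hedged sketch of such a script, formatting a second data disk that is not managed by LVM (the device name /dev/vdc is an assumption; confirm the actual device with lsblk first, because formatting the wrong disk destroys its data):

          mkfs -t ext4 /dev/vdc                                    # create an ext4 file system on the raw disk
          mkdir -p /data && mount /dev/vdc /data                   # mount the disk at /data
          echo '/dev/vdc /data ext4 defaults 0 0' >> /etc/fstab    # remount it automatically after a reboot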

      • Post-installation Script: Enter a maximum of 1,000 characters.

        The script will be executed after Kubernetes software is installed and will not affect the installation. The script is usually used to modify Docker parameters.
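
        A hedged sketch of such a script, capping container log size (the values are illustrative, and the script overwrites /etc/docker/daemon.json, so on a real node inspect the existing file first and merge keys instead of replacing them):

          # Write a Docker daemon configuration that rotates container logs,
          # then restart Docker so the daemon picks up the change.
          printf '{\n  "log-driver": "json-file",\n  "log-opts": { "max-size": "50m", "max-file": "3" }\n}\n' > /etc/docker/daemon.json
          systemctl restart docker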

      • Subnet IP Address: Select Automatically assign IP address (recommended) or Manually assign IP addresses.
    • Advanced Kubernetes Settings: (Optional) Click to show advanced cluster settings.
      • Max Pods: maximum number of pods that can be created on a node, including the system's default pods. If the cluster uses the VPC network model, the maximum value is determined by the number of IP addresses that can be allocated to containers on each node.

        This limit prevents the node from being overloaded by managing too many pods. For details, see Maximum Number of Pods That Can Be Created on a Node.
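
        A worked example (a sketch that ignores the few addresses the system reserves on each node): in the VPC network model, a node allocated a /25 container block has 2^(32-25) = 128 container IP addresses, so Max Pods on that node cannot usefully exceed about 128.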

      • Maximum Data Space per Container: maximum data space that can be used by a container. The value ranges from 10 GB to 500 GB. If this value is greater than the data disk space allocated to Docker resources, the data disk space allocated to Docker takes precedence. Typically, 90% of the data disk space is allocated to Docker resources. This parameter is displayed only for clusters of v1.13.10-r0 and later.
    • Nodes: The value cannot exceed the management scale you selected when configuring the cluster parameters. Set this parameter based on service requirements and the remaining quota displayed on the page. Click to view the factors that affect the number of nodes that can be added (the minimum value among these factors applies). To apply for a higher quota, click Increase quota.
    • Validity Period: If the cluster billing mode is yearly/monthly, set the number of months or years for which you will use the new node.

  5. Click Next: Install Add-on, and select the add-ons to be installed in the Install Add-on step.

    System resource add-ons must be installed. Advanced functional add-ons are optional.

    You can also install all add-ons after the cluster is created. To do so, choose Add-ons in the navigation pane of the CCE console and select the add-on you will install. For details, see Add-ons.

  6. Click Next: Confirm. Read the product instructions and select I am aware of the above limitations. Confirm the configured parameters, specifications, and fees.
  7. Click Submit.

    If the cluster is billed on a yearly/monthly basis, click Pay Now and follow the on-screen prompts to pay for the order.

    It takes about 6 to 10 minutes to create a cluster. You can click Back to Cluster List to perform other operations on the cluster or click Go to Cluster Events to view the cluster details.

  8. If the cluster status is Available, the Kunpeng cluster is successfully created, and the Kunpeng icon is displayed in front of the cluster name.

Related Operations

  • Create a namespace. You can create multiple namespaces in a cluster and organize resources in the cluster into different namespaces. These namespaces serve as logical groups and can be managed separately. For more information about how to create a namespace for a cluster, see Namespaces. A kubectl sketch appears at the end of this section.
  • Create a workload. Once the cluster is created, you can use an image to create an application that can be accessed from public networks. For details, see Creating a Deployment or Creating a StatefulSet.
  • Click the cluster name to view cluster details.
    Table 2 Cluster details

    Tab

    Description

    Cluster Details

    View the details and operating status of the cluster.

    Monitoring

    You can view the CPU and memory allocation rates of all nodes in the cluster (that is, the maximum allocated amount), as well as the CPU usage, memory usage, and specifications of the master node(s).

    Events

    • View cluster events on the Events tab page.
    • Set search criteria. For example, you can set the time segment or enter an event name to view corresponding events.

    Auto Scaling

    You can configure auto scaling to add or reduce worker nodes in a cluster to meet service requirements. For details, see Setting Cluster Auto Scaling.

    Clusters of v1.17 do not support auto scaling using AOM. You can use node pools for auto scaling. For details, see Node Pool Overview.

    kubectl

    To access a Kubernetes cluster from a PC, you need to use the Kubernetes command line tool kubectl. For details, see Connecting to a Cluster Using kubectl. A minimal sketch appears at the end of this section.

    Resource Tags

    You can add resource tags to classify resources.

    You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and migration efficiency. For details, see Creating Predefined Tags.

    CCE will automatically create the "CCE-Dynamic-Provisioning-Node=Node ID" tag. A maximum of 5 tags can be added.

    Istioctl

    After the Istio service mesh function is enabled for a cluster, you can use the Istio command line tool istioctl to configure routing policies that manage service traffic, including traffic shifting, fault injection, rate limiting, and circuit breaking. For details, see Enabling Istio.
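
  As referenced above, a minimal kubectl sketch for the namespace and kubectl operations (it assumes kubectl is installed on your PC and the cluster's kubeconfig file has been configured as described in Connecting to a Cluster Using kubectl; the namespace name is illustrative):

    kubectl cluster-info            # verify connectivity to the cluster's API server
    kubectl get nodes               # list the worker nodes that have joined the cluster
    kubectl create namespace demo   # create a namespace to group workloads
    kubectl get namespaces          # confirm that the new namespace exists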