Updated on 2023-07-06 GMT+08:00

Migrating Clusters

Create VM clusters on the CCE 2.0 console. These new VM clusters should have the same specifications as those created on CCE 1.0.

To create clusters using APIs, see Cloud Container Engine API Reference 2.0.
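For reference, the following is a minimal sketch of such an API call using Python and the requests library. The region, project ID, token, flavor, and network IDs are placeholders, and the request body only approximates the v3 "Cluster" schema; confirm the exact endpoint and field names against Cloud Container Engine API Reference 2.0.

    # Minimal sketch: create a VM cluster through the CCE v3 API.
    # All IDs, the region, and the token are placeholders; verify the body
    # schema against Cloud Container Engine API Reference 2.0 before use.
    import requests

    region = "example-region"              # placeholder
    project_id = "<project_id>"            # placeholder
    token = "<IAM-token>"                  # obtain from IAM beforehand

    url = f"https://cce.{region}.myhuaweicloud.com/api/v3/projects/{project_id}/clusters"
    body = {
        "kind": "Cluster",
        "apiVersion": "v3",
        "metadata": {"name": "cluster-from-cce1"},
        "spec": {
            "type": "VirtualMachine",                       # VM cluster
            "flavor": "cce.s1.small",                       # management scale (example)
            "version": "v1.13.10",                          # Kubernetes baseline version
            "hostNetwork": {"vpc": "<vpc_id>", "subnet": "<subnet_id>"},
            "containerNetwork": {"mode": "overlay_l2",      # container tunnel network
                                 "cidr": "172.16.0.0/16"},
        },
    }

    response = requests.post(url, json=body, headers={"X-Auth-Token": token})
    response.raise_for_status()
    print(response.json()["metadata"]["uid"])               # cluster ID for later queries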

Procedure

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Clusters. Click Create VM Cluster.
  2. Set cluster parameters. Parameters with * are mandatory.

    Table 1 Parameters for creating a cluster

    Parameter in CCE 2.0

    Parameter in CCE 1.0

    Configuration

    * Cluster Name

    Name

    Name of the cluster to be created.

    * Version

    This parameter does not exist in CCE 1.0. Retain the default value.

    Cluster version, that is, the corresponding Kubernetes baseline version.

    * Management Scale

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    Maximum number of nodes that can be managed by the cluster.

    * High Availability

    Cluster type

    • Yes: The cluster has three master nodes. The cluster remains available even if one master node is down.
    • No: The cluster has only one master node. If the master node is down, the whole cluster becomes unavailable, but existing applications are not affected.

    * VPC

    VPCs created in CCE 1.0 can be used in CCE 2.0.

    VPC where the cluster will be located.

    If no VPCs are available, click Create a VPC.

    * Subnet

    Subnets created in CCE 1.0 can be used in CCE 2.0.

    Subnet in which the cluster will run.

    * Network Model

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    • Container tunnel network: an overlay network built on top of the VPC network; suitable for common scenarios.
    • VPC network: delivers better performance and is suitable for scenarios that require high performance and intensive interaction. Only one cluster using the VPC network model can be created in a single VPC.

    Container Network Segment

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    An IP address range that can be allocated to container pods.

    • If Automatically select is deselected, enter a CIDR block manually. If the CIDR block you specify conflicts with a subnet CIDR block, the system prompts you to select another CIDR block. The recommended CIDR blocks are 10.0.0.0/12-19, 172.16.0.0/16-19, and 192.168.0.0/16-19.

      If different clusters share a container CIDR block, an IP address conflict will occur and access to the applications in the clusters may fail.

    • If Automatically select is selected, the system automatically assigns a CIDR block that does not conflict with any subnet CIDR block.

    The mask of the container CIDR block must be appropriate. It determines the number of nodes available in the cluster: if the CIDR block is too small, the cluster will soon run out of node addresses. After the mask is set, the estimated maximum number of nodes supported by the current CIDR block is displayed (a sizing sketch follows Table 1).

    Service Network Segment

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    This parameter is left unspecified by default and applies only to clusters of v1.11.7 and later versions.

    This parameter indicates the CIDR block used by Kubernetes Services. The mask of the service CIDR block must be appropriate; it determines the maximum number of Services that can be created in the cluster.

    Open EIP

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    An independent public IP address that is reachable from public networks. Select an EIP that has not been bound to any node. The cluster's EIP is preset in the cluster's certificate. Do not delete the EIP after the cluster has been created; otherwise, two-way authentication will fail.

    • Do not configure: The cluster's master node will not have an EIP.
    • Configure now: If no EIP is available for selection, create one.

    Authorization Mode

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    By default, RBAC is selected. Read CCE Role Management Instructions and select I am aware of the above limitations and read the CCE Role Management Instructions.

    After RBAC is enabled, users access resources in the cluster according to fine-grained permissions policies.

    Authentication Mode

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    Permission control over resources in a cluster. For example, you can allow user A to read and write application data in a namespace, while user B can only read resource data in the cluster.

    • By default, X.509 authentication instead of Enhanced authentication is enabled. X.509 is a standard defining the format of public key certificates. X.509 certificates are used in many Internet protocols.
    • If permission control on a cluster is required, select Enhanced authentication and then Authenticating Proxy.

      Click Upload next to CA Root Certificate to upload a valid certificate. Select the check box to confirm that the uploaded certificate is valid.

      If the certificate is invalid, the cluster cannot be created. The uploaded certificate file must be smaller than 1 MB and in .crt or .cer format.

    Cluster Description

    Description

    Description of the cluster.
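    As a rough illustration of how the container and Service CIDR masks cap cluster capacity (see the Container Network Segment and Service Network Segment rows above), the sketch below does the arithmetic with Python's ipaddress module. It assumes each node is allocated a /24 slice of the container CIDR for its pods; the actual per-node allocation in your cluster may differ, so treat the numbers as an estimate only.

        # Illustrative sizing arithmetic for the container and Service CIDR blocks.
        # Assumption: each node receives a /24 pod CIDR (a common default; the
        # real per-node allocation may differ in your cluster).
        import ipaddress

        container_cidr = ipaddress.ip_network("172.16.0.0/16")    # example value
        per_node_prefix = 24                                       # assumed pod CIDR per node

        max_nodes = 2 ** (per_node_prefix - container_cidr.prefixlen)
        print(f"{container_cidr} -> roughly {max_nodes} nodes at most")

        # The Service CIDR similarly caps the number of ClusterIP Services:
        service_cidr = ipaddress.ip_network("10.247.0.0/16")       # example value
        print(f"{service_cidr} -> about {service_cidr.num_addresses - 2} Service IPs")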

  3. After the configuration is complete, click Next to add a node.
  4. Continue to add a node.
  5. Set the parameters based on Table 2.

    Table 2 Parameters for adding a node

    Parameter in CCE 2.0

    Parameter in CCE 1.0

    Configuration

    Scope

    Current Region

    Physical location of the node.

    AZ

    Physical region where resources use independent power supplies and networks. AZs are physically isolated but interconnected through an internal network.

    Specifications

    Node Name

    Name of the node.

    Specifications

    Node specifications.

    • General-purpose: provides general computing, storage, and network configurations for the majority of application scenarios. General-purpose instances can be used in web servers, development and test environments, and small database applications.
    • Memory-optimized: provides higher memory capacity than general-purpose nodes and is suitable for relational databases, NoSQL, and other workloads that are both memory-intensive and data-intensive.
    • General computing-plus: provides stable performance and exclusive resources to enterprise-class workloads with high and stable computing performance.
    • GPU-accelerated: provides powerful floating-point computing and is suitable for real-time, highly concurrent massive computing.

      Typical scenarios: graphics processing units (GPUs) of the P series are suitable for deep learning, scientific computing, and CAE; GPUs of the G series are suitable for 3D animation rendering and CAD.

      Currently, only clusters of v1.11 support GPU-accelerated nodes. If the cluster version is v1.13 or later, the GPU-accelerated option is not displayed on the page.

    • Ultra-high I/O: provides ultra-low SSD access latency and ultra-high IOPS performance.

      This type of specifications is suitable for high-performance relational databases, NoSQL databases (such as Cassandra and MongoDB), and Elasticsearch.

    OS

    Select an operating system for the node.

    Reinstalling OSs or modifying OS configurations could make nodes unavailable. Exercise caution when performing these operations. For details, see Risky Operations on Cluster Nodes.

    Virtual Private Cloud

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    The node inherits the VPC settings of the cluster to which it belongs. This parameter is supported only in v1.13.10-r0 and later versions of clusters. It is not displayed in versions earlier than v1.13.10-r0.

    Subnet

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    A subnet provides dedicated network resources that are logically isolated from other networks for network security.

    You can select any subnet in the cluster VPC. Cluster nodes can belong to different subnets. This parameter is supported only in v1.13.10-r0 and later versions of clusters. It is not displayed in versions earlier than v1.13.10-r0.

    Nodes

    Quantity

    Number of nodes to be created.

    Network

    NOTE:

    If the nodes to be created require public network access, select Automatically assign or Use existing for EIP. If an EIP is not bound to a node, applications running on the node cannot be accessed by the external network.

    EIP

    EIP

    A public IP address that is reachable from public networks.

    • Do Not Use: A node without an EIP cannot access the Internet. It can be used only as a cloud server for deploying services or clusters on a private network.
    • Automatically assign: An EIP with exclusive bandwidth is automatically assigned to each ECS. When creating an ECS, ensure that the EIP quota is sufficient. Set the specifications, required quantity, billing mode, and bandwidth as required.
    • Use Existing EIP: An existing EIP is assigned to the node.

    Disks

    Storage

    Disk type, which can be System Disk or Data Disk.
    • The system disk capacity ranges from 40 to 1024 GB. The default value is 40 GB.
    • The data disk capacity ranges from 100 to 32768 GB. The default value is 100 GB.

    Data disks deliver three levels of I/O performance:

    • Common I/O: SATA drives are used to store data. EVS disks of this level provide reliable block storage and a maximum IOPS of 1,000 per disk. They are suitable for applications that do not require high read/write performance.
    • High I/O: SAS drives are used to store data. EVS disks of this level provide a maximum IOPS of 3,000 and a minimum read/write latency of 1 ms. They are suitable for RDS, NoSQL, data warehouse, and file system applications.
    • Ultra-high I/O: SSD drives are used to store data. EVS disks of this level provide a maximum IOPS of 20,000 and a minimum read/write latency of 1 ms. They are suitable for RDS, NoSQL, and data warehouse applications.

    Login information

    Key pair

    Key pair

    A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create a key pair and create one.

    Advanced ECS Settings

    ECS Group

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    Select an existing ECS group, or click Create ECS Group to create one. After the ECS group is created, click the refresh icon.

    An ECS group allows you to create ECSs on different hosts, thereby improving service reliability.

    Resource tag

    By adding tags to resources, you can classify resources.

    You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and migration efficiency.

    CCE will automatically create the "CCE-Dynamic-Provisioning-Node=Node ID" tag. A maximum of 5 tags can be added.

    Agency Management

    The agency is created by the account administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources. To authorize ECS or BMS to call cloud services, select Cloud service as the agency type, click Select, and then select ECS BMS.

    Pre-installation Script (script required before installation)

    Script commands. Enter 0 to 1000 characters.

    The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may not be installed successfully. The script is usually used to format data disks.

    Post-installation Script (script required after installation)

    Script commands. Enter 0 to 1000 characters.

    The script will be executed after Kubernetes software is installed and will not affect the installation. The script is usually used to modify Docker parameters.

    Add Data Disk

    Click Add Data Disk to add a data disk and set the capacity of the data disk. Enter a disk formatting command in the input box of Pre-installation Script.

    Subnet IP Address

    Select Automatically assign IP address (recommended) or Manually assign IP address.

    Advanced Kubernetes Settings

    Maximum Instances

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    The maximum number of pods that can be created on a node, including the system's default pods. Value range: 16 to 250.

    This limit prevents the node from being overloaded by managing too many pods.

    insecure-registries

    Click Add insecure-registry and enter a repository address.

    Add the address of a custom image repository that does not have a valid SSL certificate to the Docker startup options, so that images can still be pulled from that private repository. The address is in the format IP address:Port number (or domain name). The post-installation script and insecure-registries cannot be used together (a sketch of the resulting Docker configuration follows Table 2).

    Maximum Data Space per Container

    This parameter does not exist in CCE 1.0. Set this parameter based on your requirements.

    The maximum data space that can be used by a container. Value range: 10 GB to 80 GB. If the value of this field is larger than the data disk space allocated to Docker resources, the latter will override the value specified here. Typically, 90% of the data disk space is allocated to Docker resources. This parameter is supported only in v1.13.10-r0 and later versions of clusters. It is not displayed in versions earlier than v1.13.10-r0.
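    To make the insecure-registries option above more concrete: on the node it ultimately amounts to registering the repository address in Docker's /etc/docker/daemon.json and restarting Docker, which is also roughly what a post-installation script would otherwise do by hand (recall that the two cannot be used together). The sketch below shows that daemon.json edit; the registry address is a placeholder and the script assumes root privileges on the node.

        # Sketch: register an insecure (non-SSL) image registry with Docker by
        # merging it into /etc/docker/daemon.json and restarting the daemon.
        # The registry address is a placeholder; run as root on the node.
        import json
        import subprocess
        from pathlib import Path

        DAEMON_JSON = Path("/etc/docker/daemon.json")
        registry = "192.168.10.20:5000"        # placeholder: IP:port or domain name

        config = json.loads(DAEMON_JSON.read_text()) if DAEMON_JSON.exists() else {}
        registries = set(config.get("insecure-registries", []))
        registries.add(registry)
        config["insecure-registries"] = sorted(registries)

        DAEMON_JSON.write_text(json.dumps(config, indent=2))
        subprocess.run(["systemctl", "restart", "docker"], check=True)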

  6. Click Next to install cluster add-ons.

    System resource add-ons must be installed. Advanced functional add-ons are optional.

    You can also install optional add-ons after the cluster is created. To do so, choose Add-ons in the navigation pane of the CCE console and select the add-on you will install. For details, see Add-on Management.

  7. Click Create Now. Check all the configurations, and click Submit.

    It takes about 6 to 10 minutes to create a cluster. Information indicating the progress of the creation process will be displayed.
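    If you create the cluster through the API instead of the console, you can poll its progress in a similar way. The sketch below reuses the placeholders from the earlier creation example; the exact layout of the status field should be confirmed against Cloud Container Engine API Reference 2.0.

        # Minimal sketch: poll the cluster until it leaves the creating state.
        # Placeholders as in the earlier creation example; verify the response
        # layout against Cloud Container Engine API Reference 2.0.
        import time
        import requests

        region = "example-region"
        project_id = "<project_id>"
        cluster_id = "<cluster_uid>"           # returned by the creation request
        token = "<IAM-token>"

        url = (f"https://cce.{region}.myhuaweicloud.com"
               f"/api/v3/projects/{project_id}/clusters/{cluster_id}")

        while True:
            response = requests.get(url, headers={"X-Auth-Token": token})
            response.raise_for_status()
            phase = response.json().get("status", {}).get("phase")
            print("cluster phase:", phase)
            if phase in ("Available", "Unavailable"):
                break
            time.sleep(60)                      # creation typically takes 6 to 10 minutes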