Updated on 2022-12-30 GMT+08:00

Performing Rolling Upgrade for Nodes

Scenario

In a rolling upgrade, a new node is created, existing workloads are migrated to the new node, and then the old node is deleted. Figure 1 shows the migration process.

Figure 1 Workload migration

Notes and Constraints

  • The original node and the target node to which the workload is to be migrated must be in the same cluster.
  • The cluster version must be v1.13.10 or later.
  • The default node pool DefaultPool does not support this configuration.

Scenario 1: The Original Node Is in DefaultPool

  1. Create a node.

    1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
    2. Select the cluster to which the original node belongs.
    3. Click Create Node Pool, set the following parameters, and modify other parameters as required. For details about the parameters, see Creating a Node Pool.
      1. Name: Enter the name of the new node pool, for example, nodepool-demo.
      2. Nodes: In this example, add one node.
      3. Specifications: Select node specifications that best suit your needs.
      4. OS: Select the operating system (OS) of the nodes to be created.
      5. Login Mode: You can use a password or key pair.
        • If the login mode is Password, the default username is root. Enter the password for logging in to the node and confirm it.

          Remember the password as you will need it when you log in to the node.

        • If the login mode is Key pair, select a key pair for logging in to the node. Select the check box to acknowledge that you have obtained the key file and that you will not be able to log in to the node without it.

          A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create a key pair.

    4. Click Next: Confirm. Confirm the node pool configuration and click Submit.

      Return to the node pool list. You can view that the new node pool has been created and is in the Normal state.

  2. Click the name of the node pool. The IP address of the new node is displayed in the node list.
  3. Install and configure kubectl.

    1. In the navigation pane of the CCE console, choose Resource Management > Clusters, and click Command Line Tool > Kubectl under the cluster where the original node is located.
    2. On the Kubectl tab page of the cluster details page, connect to the cluster as prompted.
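
      To verify that kubectl can reach the cluster and that the new node has registered, you can list the nodes. The node names shown are typically the nodes' private IP addresses, and the new node should be in the Ready state.

      kubectl get nodes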

  4. Migrate the workload.

    1. Add a taint to the node where the workload needs to be migrated out.

      kubectl taint node [node] key=value:[effect]

      In the preceding command, [node] indicates the IP address of the node where the workload to be migrated is located. The value of [effect] can be NoSchedule, PreferNoSchedule, or NoExecute. In this example, set this parameter to NoSchedule.

      • NoSchedule: Pods that do not tolerate this taint are not scheduled on the node; existing pods are not evicted from the node.
      • PreferNoSchedule: Kubernetes tries to avoid scheduling pods that do not tolerate this taint onto the node.
      • NoExecute: A pod is evicted from the node if it is already running on the node, and is not scheduled onto the node if it is not yet running on the node.

      To remove a taint, run the kubectl taint node [node] key:[effect]- command (note the trailing hyphen).
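
      For example, assuming the original node is named 192.168.0.100 and you choose migration=true as the taint key-value pair (both are placeholders; use your own node name and key), the commands to apply and later remove the taint would be:

      kubectl taint node 192.168.0.100 migration=true:NoSchedule

      kubectl taint node 192.168.0.100 migration:NoSchedule-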

    2. Safely evict the workload from the node.

      kubectl drain [node]

      In the preceding command, [node] indicates the IP address of the node where the workload to be migrated is located.
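
      kubectl drain evicts or deletes all pods on the node and marks it as unschedulable. If the node runs DaemonSet-managed pods, drain refuses to proceed unless you add the --ignore-daemonsets flag. For example, assuming the original node is named 192.168.0.100 (a placeholder):

      kubectl drain 192.168.0.100 --ignore-daemonsets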

    3. In the navigation pane of the CCE console, choose Workloads > Deployments. In the workload list, the status of the workload to be migrated changes from Running to Unready. If the workload status changes to Running again, the migration is successful.

    During workload migration, if node affinity is configured for the workload, the workload remains in the not-ready state. In this case, click the workload name to go to the workload details page. On the Scheduling Policies tab page, delete the affinity configuration for the original node and click Add Simple Scheduling Policy to configure affinity and anti-affinity policies for the new node. For details, see Simple Scheduling Policies.

    After the workload is successfully migrated, you can view on the Pods tab page of the workload details page that the workload has been migrated to the node created in 1.
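
    You can also verify the placement with kubectl. The following command lists the pods in the current namespace together with the node each pod is running on; the NODE column should show the new node.

      kubectl get pods -o wide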

  5. Delete the original node.

    After the workload is successfully migrated and is running properly, choose Resource Management > Nodes to delete the original node.
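
    As an extra check before deleting the node, you can list any pods still scheduled on it. Assuming the original node is named 192.168.0.100 (a placeholder for its IP address), only DaemonSet-managed or system pods, if any, should remain:

      kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=192.168.0.100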

Scenario 2: The Original Node Is Not in DefaultPool

  1. Copy the node pool and add nodes to it.

    1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
    2. Select the cluster to which the original node belongs.

      In the node pool list, locate the node pool to which the original node belongs.

    3. Click More > Copy next to the node pool name. On the Create Node Pool page, set the following parameters and modify other parameters as required. For details about the parameters, see Creating a Node Pool.
      • Name: Enter the name of the new node pool, for example, nodepool-demo.
      • Nodes: In this example, add one node.
      • Specifications: Select node specifications that best suit your needs.
      • OS: Select the operating system (OS) of the nodes to be created.
      • Login Mode: You can use a password or key pair.
        • If the login mode is Password, the default username is root. Enter the password for logging in to the node and confirm it.

          Remember the password as you will need it when you log in to the node.

        • If the login mode is Key pair, select a key pair for logging in to the node. Select the check box to acknowledge that you have obtained the key file and that you will not be able to log in to the node without it.

          A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create a key pair.

    4. Click Next: Confirm. Confirm the node pool configuration and click Submit.

      Return to the node pool list. You can view that the new node pool has been created and is in the Normal state.

  2. Click the name of the node pool. The IP address of the new node is displayed in the node list.
  3. Migrate the workload.

    1. Click Edit on the right of the node pool to which the original node belongs and set Taints.
    2. Click Add Taint, set Key and Value, and set Effect to NoExecute. The options for Effect are NoSchedule, PreferNoSchedule, and NoExecute.
      • NoSchedule: Pods that do not tolerate this taint are not scheduled on the node; existing pods are not evicted from the node.
      • PreferNoSchedule: Kubernetes tries to avoid scheduling pods that do not tolerate this taint onto the node.
      • NoExecute: A pod is evicted from the node if it is already running on the node, and is not scheduled onto the node if it is not yet running on the node.

      If you need to reset the taint, change the key and value, or click Delete to remove it.

    3. Click Save.
    4. In the navigation pane of the CCE console, choose Workloads > Deployments. In the workload list, the status of the workload to be migrated changes from Running to Unready. If the workload status changes to Running again, the migration is successful.

    During workload migration, if node affinity is configured for the workload, the workload remains in the not-ready state. In this case, click the workload name to go to the workload details page. On the Scheduling Policies tab page, delete the affinity configuration for the original node and click Add Simple Scheduling Policy to configure affinity and anti-affinity policies for the new node. For details, see Simple Scheduling Policies.

    After the workload is successfully migrated, you can view on the Pods tab page of the workload details page that the workload has been migrated to the node created in 1.
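
    If kubectl access is configured for the cluster (see Scenario 1), you can also confirm from the command line that the taint has been applied to the original node and that the pods have left it. Assuming the original node is named 192.168.0.100 (a placeholder), check the Taints field in the output of:

      kubectl describe node 192.168.0.100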

  4. Delete the original node.

    After the workload is successfully migrated and is running properly, choose Resource Management > Node Pools to delete the original node.