Performing Replace/Rolling Upgrade (v1.13 and Earlier)
Scenario
You can upgrade your clusters to a newer Kubernetes version on the CCE console.
Before the upgrade, learn which target versions each CCE cluster can be upgraded to, the available upgrade modes, and the upgrade impacts. For details, see Overview and Before You Start.
Precautions
- If the coredns add-on needs to be upgraded during the cluster upgrade, ensure that the number of nodes is greater than or equal to the number of coredns instances and all coredns instances are running. Otherwise, the upgrade will fail. Before upgrading a cluster of v1.11 or v1.13, you need to upgrade the coredns add-on to the latest version available for the cluster.
- When a cluster of v1.11 or earlier is upgraded to v1.13, the impacts on the cluster are as follows:
- All cluster nodes will be restarted as their OSs are upgraded, which affects application running.
- The cluster signing certificate mechanism is changed. As a result, the original cluster certificate becomes invalid. You need to obtain the certificate or kubeconfig file again after the cluster is upgraded.
- During the upgrade from one release of v1.13 to a later release of v1.13, applications in the cluster are interrupted for a short period of time only during the upgrade of network components.
- During the upgrade from Kubernetes 1.9 to 1.11, the kube-dns of the cluster will be uninstalled and replaced with CoreDNS, which may cause loss of the cascading DNS configuration in the kube-dns or temporary interruption of the DNS service. Back up the DNS address configured in the kube-dns so you can configure the domain name in the CoreDNS again when domain name resolution is abnormal.
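The precautions above can be checked from the command line before you start. The following is a minimal sketch, assuming kubectl is configured for the target cluster and that the coredns pods carry the common `k8s-app=kube-dns` label (verify the label in your cluster first):

```shell
# Pre-upgrade sanity checks (illustrative; assumes kubectl access to the cluster).

# 1. Compare the number of Ready nodes with the number of coredns replicas;
#    the node count must be greater than or equal to the replica count,
#    and all coredns pods must be Running.
kubectl get nodes --no-headers | wc -l
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# 2. Before a v1.9 -> v1.11 upgrade, back up the kube-dns configuration so that
#    cascading (stub-domain/upstream) DNS settings can be restored in CoreDNS
#    if domain name resolution becomes abnormal after the upgrade.
kubectl get configmap kube-dns -n kube-system -o yaml > kube-dns-config-backup.yaml
```

These commands only read cluster state and write a local backup file; they do not modify the cluster.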
Procedure
- Log in to the CCE console. In the navigation pane, choose Resource Management > Clusters. In the cluster list, check the cluster version.
- Click More for the cluster you want to upgrade, and select Upgrade from the drop-down menu. Figure 1 Upgrading a cluster
- If your cluster version is up-to-date, the Upgrade button is grayed out.
- If the cluster status is Unavailable, the upgrade flag in the upper right corner of the cluster card view will be grayed out. Check the cluster status by referring to Before You Start.
- In the displayed Pre-upgrade Check dialog box, click Check Now. Figure 2 Pre-upgrade check
- The pre-upgrade check starts. While the check is in progress, the cluster status changes to Pre-checking and you cannot deploy new nodes or applications on the cluster. Existing nodes and applications are not affected. The pre-upgrade check takes 3 to 5 minutes. Figure 3 Pre-upgrade check in process
- When the status of the pre-upgrade check is Completed, click Upgrade. Figure 4 Pre-upgrade check completed
- On the cluster upgrade page, review or configure basic information by referring to Table 1.
Table 1 Basic information
Cluster Name
Review the name of the cluster to be upgraded.
Current Version
Review the version of the cluster to be upgraded.
Target Version
Review the target version after the upgrade.
Node Upgrade Policy
Replace (replace upgrade): Worker nodes will be reset. Their OSs will be reinstalled, and data on the system and data disks will be cleared. Exercise caution when performing this operation.
NOTE:
- The lifecycle management function of the nodes and workloads in the cluster is unavailable.
- APIs cannot be called temporarily.
- Running workloads will be interrupted because nodes are reset during the upgrade.
- Data in the system and data disks on the worker nodes will be cleared. Back up important data before resetting the nodes.
- Non-LVM data disks attached to worker nodes must be mounted again after the upgrade. Data on these disks is not lost during the upgrade.
- The EVS disk quota must be greater than 0.
- The container IP addresses change, but the communication between containers is not affected.
- Custom labels on the worker nodes will be cleared.
- It takes about 20 minutes to upgrade a master node and about 30 to 120 minutes to upgrade worker nodes (about 3 minutes for each worker node), depending on the number of worker nodes and upgrade batches.
Gray (rolling upgrade): Worker nodes are upgraded in rolling mode in a node pool. This mode applies to scenarios where all nodes in a cluster are created from a node pool.
NOTE:
- The lifecycle management function of the nodes and workloads in the cluster is unavailable.
- APIs cannot be called temporarily.
- Running workloads are not interrupted.
- It takes about 20 minutes to upgrade a master node and about 30 to 120 minutes to upgrade worker nodes, depending on the number of worker nodes and upgrade batches. The duration of service migration is up to you.
Resetting Node Image
Supported only for BMS nodes.
The OS image of a BMS node can be replaced during the upgrade. You can specify a new image for the node. During the upgrade, the new image will be used to reinstall the OS. If no new image is specified, the original image is used to reinstall the OS by default.
Login Mode
You can use a password or a key pair.
- Password: The default username is root. Enter the password for logging in to the node and confirm it. Remember the password, as you will need it when you log in to the node.
- Key pair: Select the key pair used to log in to the node. You can select a shared key.
A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create a key pair. For details on how to create a key pair, see Creating a Key Pair.
Cluster Backup
Indicates whether to back up the entire master node of the cluster. A manual confirmation is required. The backup process uses the Cloud Backup and Recovery (CBR) service and takes about 20 minutes. If there are many cloud backup tasks at the current site, the backup time may be prolonged. You are advised to back up the entire master node.
Node Upgrade Priority
You can select the nodes to be upgraded first.
- Click Next. In the dialog box displayed, click OK. The message displayed varies depending on the node upgrade policy you selected.
- Replace: After the upgrade, the cluster uses OSs of a later version. During the upgrade, nodes are restarted and their OSs are upgraded, which interrupts services.
- Gray: You need to reset the nodes (and remove the labels that make the nodes unschedulable for pods) or create nodes to complete the rolling upgrade.
- Upgrade add-ons. If an add-on needs to be upgraded, a red dot is displayed. Click the Upgrade button in the lower left corner of the add-on card view. After the upgrade is complete, click Upgrade in the lower right corner of the page.
- Master nodes will be upgraded first, and then the worker nodes will be upgraded concurrently. If there are a large number of worker nodes, they will be upgraded in different batches.
- Select a proper time window for the upgrade to reduce impacts on services.
- Clicking OK will start the upgrade immediately, and the upgrade cannot be canceled. Do not shut down or restart nodes during the upgrade.
- In the displayed Upgrade dialog box, read the information and click OK. Note that the cluster cannot be rolled back after the upgrade. Figure 5 Confirming cluster upgrade
- Back on the cluster list page, you can see that the cluster status is Upgrading. Wait until the upgrade is complete. After the upgrade succeeds, you can view the cluster status and version on the cluster list or cluster details page. Figure 6 Verifying the upgrade success
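In addition to the console, the result can be cross-checked from the command line. A minimal sketch, assuming kubectl access to the cluster (if the signing certificate mechanism changed during the upgrade, download the kubeconfig file again first):

```shell
# Post-upgrade verification (illustrative; assumes a valid kubeconfig).

# Confirm the API server (control plane) version.
kubectl version

# Confirm each worker node reports the target Kubernetes version
# in the VERSION column.
kubectl get nodes -o wide
```

For a Replace upgrade, this is also a good point to re-apply any custom node labels that were cleared and to remount non-LVM data disks.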