Upgrading Versions
Both same-version and cross-version upgrades are supported. A same-version upgrade updates the kernel patch of a cluster to fix problems or optimize performance. A cross-version upgrade upgrades the cluster to a later version to add functions or consolidate versions.
Description
Principle
Nodes in the cluster are upgraded one by one so that services are not interrupted. The upgrade process is as follows: bring a node offline, migrate its data to other nodes, create a new node of the target version, and attach the NIC ports of the offline node to the new node so that the node IP address is retained. After the new node joins the cluster, the remaining nodes are upgraded in the same way, one by one. If a cluster holds a large amount of data, the upgrade duration depends on how long data migration takes.
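The rolling replacement described above can be sketched as a simple loop. This is a conceptual illustration only, not the actual CSS implementation; the function and parameter names are hypothetical:

```python
def rolling_upgrade(nodes, migrate_data, create_target_node):
    """Conceptual sketch of a rolling node replacement.

    `nodes`, `migrate_data`, and `create_target_node` are hypothetical
    stand-ins for what the service does internally.
    """
    upgraded = []
    for node in nodes:
        # nodes that will hold the offline node's data during migration
        remaining = [n for n in nodes if n != node] + upgraded
        migrate_data(node, remaining)        # bring the node offline, move its data
        new_node = create_target_node(node)  # new node reuses the old node's NIC/IP
        upgraded.append(new_node)
    return upgraded  # the cluster now runs entirely on target-version nodes
```

Because each node is drained before it is replaced, the total upgrade time grows with the amount of data that must be migrated per node.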
Process
Version Restrictions
Current Version | Target Version
---|---
7.1.1 | 7.6.2, 7.10.2
7.6.2 | 7.10.2
7.9.3 | 7.10.2
Constraints
- A maximum of 20 clusters can be upgraded at the same time. You are advised to perform the upgrade during off-peak hours.
- Clusters that have ongoing tasks cannot be upgraded.
- Once started, an upgrade task cannot be stopped until it succeeds or fails.
- During the upgrade, nodes are replaced one by one. Requests sent to a node that is being replaced may fail. In this case, you are advised to access the cluster through the VPC Endpoint service or a dedicated load balancer.
- During the upgrade, the Kibana and Cerebro components are rebuilt and cannot be accessed. In addition, different Kibana versions are incompatible with each other, so you may fail to access Kibana during the upgrade due to version incompatibility. Kibana can be accessed again after the cluster is successfully upgraded.
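Because requests sent to a node that is being replaced may fail, client code should retry idempotent requests rather than fail immediately. A minimal sketch, assuming a hypothetical `send_request` callable that raises `ConnectionError` on failure (the retry policy shown is illustrative, not part of CSS):

```python
import time

def request_with_retry(send_request, retries=3, backoff_s=1.0):
    """Retry an idempotent cluster request that may fail while a node
    is being replaced during a rolling upgrade."""
    last_error = None
    for attempt in range(retries):
        try:
            return send_request()
        except ConnectionError as exc:  # node currently being replaced
            last_error = exc
            time.sleep(backoff_s * (attempt + 1))  # simple linear backoff
    raise last_error
```

Accessing the cluster through the VPC Endpoint service or a dedicated load balancer, as recommended above, reduces how often such retries are needed.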
Pre-Upgrade Check
To ensure a successful upgrade, you must check the items listed in the following table before performing an upgrade.
Check Item | Check Method | Description | Normal Status
---|---|---|---
Cluster status | System check | After an upgrade task is started, the system automatically checks the cluster status. Clusters whose status is green or yellow can provide services properly and have no unassigned primary shards. | The cluster status is Available.
Node quantity | System check | After an upgrade task is started, the system automatically checks the number of nodes. The total number of data nodes and cold data nodes in a cluster must be greater than or equal to 3 so that services are not interrupted. | The total number of data nodes and cold data nodes in the cluster is greater than or equal to 3.
Disk capacity | System check | After an upgrade task is started, the system automatically checks the disk capacity. During the upgrade, nodes are brought offline one by one and new nodes are created. Ensure that the remaining nodes have enough disk capacity to hold all data of the node that is brought offline. | After a node is brought offline, the remaining nodes can hold all data of the cluster.
Data backup | System check | The system checks whether the maximum number of primary and replica shards of the indexes in the cluster can still be allocated to the remaining data nodes and cold data nodes, to prevent shard allocation failures after a node is brought offline during the upgrade. | The maximum number of primary and replica shards of an index, plus 1, is less than or equal to the total number of data nodes and cold data nodes before the upgrade.
Data backup | Manual check | Before the upgrade, back up data to prevent data loss caused by upgrade faults. When submitting an upgrade task, you can choose whether the system should check that all indexes have been backed up. | Data has been backed up.
Resources | System check | After an upgrade task is started, the system automatically checks resource availability. Resources are created during the upgrade, so ensure that sufficient resources are available. | Resources are available and sufficient.
Custom plugins | System and manual check | Perform this check only when custom plugins are installed in the source cluster. If the cluster has a custom plugin, upload the plugin package of the target version to the specified directory before the upgrade; the plugin is then installed on the new nodes during the upgrade. After an upgrade task is started, the system automatically checks whether the custom plugin package has been uploaded, but you need to verify that the uploaded package is correct. NOTE: An incorrect or incompatible plugin package cannot be automatically installed during the upgrade, but this does not affect the upgrade task. After the upgrade is complete, the status of the custom plugin is reset to Uploaded. | The plugin package of the cluster to be upgraded has been uploaded to the plugin list.
Custom configurations | System check | During the upgrade, the system automatically synchronizes the content of the cluster configuration file elasticsearch.yml. | The cluster's custom configurations are not lost after the upgrade.
Non-standard operations | Manual check | Check whether any non-standard operations have been performed on the cluster. Non-standard operations are manual operations that are not recorded and therefore cannot be automatically carried over during the upgrade, for example, modifying the kibana.yml configuration file, the system configuration, or routes. | Some non-standard operations can be retained: for example, modifications to the security plugin are retained through metadata, and modifications to the system configuration are retained through images. Others, such as modifications to the kibana.yml file, cannot be retained and must be backed up in advance.
Compatibility check | System and manual check | After a cross-version upgrade task is started, the system automatically checks whether the source and target versions have incompatible configurations. If a custom plugin is installed, its version compatibility must be checked manually. | Configurations before and after the cross-version upgrade are compatible.
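The two numeric conditions among the pre-upgrade checks (node quantity and shard allocation) can be expressed directly. A sketch with hypothetical parameter names; `max_shard_copies` stands for the largest number of shard copies (primary plus replicas) configured for any index:

```python
def numeric_pre_upgrade_checks(data_nodes, cold_data_nodes, max_shard_copies):
    """Evaluate the node-quantity and shard-allocation pre-upgrade conditions.

    `max_shard_copies` is the largest number of copies (primary plus
    replicas) configured for any shard of any index in the cluster.
    """
    total_nodes = data_nodes + cold_data_nodes
    return {
        # at least 3 data/cold-data nodes so service continues while one is offline
        "node_quantity": total_nodes >= 3,
        # max copies per shard plus 1 must not exceed the pre-upgrade node total
        "shard_allocation": max_shard_copies + 1 <= total_nodes,
    }
```

For example, a cluster with 3 data nodes and an index using 1 primary and 1 replica shard (2 copies) passes both checks, because 2 + 1 <= 3.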
Creating an Upgrade Task
- Log in to the CSS management console.
- In the navigation pane on the left, choose Clusters. On the cluster list page that is displayed, click the name of a cluster.
- On the displayed basic cluster information page, click Version Upgrade.
- On the displayed page, set upgrade parameters.
Table 3 Upgrade parameters

Parameter | Description
---|---
Upgrade Type | Same version upgrade: upgrades the kernel patch of the cluster; the cluster version number remains unchanged. Cross version upgrade: upgrades the cluster to a later version.
Target Image | Image of the target version. When you select an image, the image name and target version details are displayed.
Agency | Select an IAM agency to grant the upgrade permission to the current account. If no agency is available, click Create Agency to go to the IAM console and create one. NOTE: The selected agency must be assigned the Tenant Administrator or VPC Administrator policy.
- After setting the parameters, click Submit. Determine whether to check for the backup of all indexes and click OK in the displayed dialog box.
- View the upgrade task in the task list. If the task status is Running, you can expand the task list and click View Progress to view the upgrade progress.
Figure 1 Viewing the upgrade progress
If the task status is Failed, you can retry or terminate the task.
- Retry the task: Click Retry in the Operation column.
- Terminate the task: Click Terminate in the Operation column.
  - Same version upgrade: the task can be terminated whenever its status is Failed.
  - Cross version upgrade: the task can be terminated only when its status is Failed and no node has been upgraded yet.
After an upgrade task is terminated, the Task Status of the cluster rolls back to the status before the upgrade, and other tasks in the cluster are not affected.