MRS 3.2.0-LTS.1 Patch Description
Basic Information About MRS 3.2.0-LTS.1.6
Patch Version | MRS 3.2.0-LTS.1.6
---|---
Release Date | 2024-02-04
Pre-Installation Operations | If an MRS cluster node is faulty or the network is disconnected, isolate the node first. Otherwise, the patch installation will fail.
New Features |
Resolved Issues | List of issues resolved in MRS 3.2.0-LTS.1.6:
Compatibility with Other Patches | The MRS 3.2.0-LTS.1.6 patch package is cumulative and contains all single-issue patches released for MRS 3.2.0-LTS.1.
Impact of Patch Installation | For details, see Impact of Patch Installation below.
Impact of Patch Installation
- If you need to add a service after installing a patch for MRS 3.2.0-LTS.1, uninstall the patch, add the service, and reinstall the patch.
- After the patch is installed for MRS 3.2.0-LTS.1, do not reinstall hosts or software on the management plane.
- After the patch is installed for MRS 3.2.0-LTS.1, if the IoTDB component is installed in the cluster, you need to disable the metric reporting function of the component when interconnecting with CES.
- After the patch is installed for MRS 3.2.0-LTS.1, any client that has already been downloaded and installed must also be upgraded.
- During MRS 3.2.0-LTS.1 patch installation, OMS automatically restarts, which affects cluster management operations such as job submission and cluster scaling. Install the patch during off-peak hours.
- After the patch is installed for MRS 3.2.0-LTS.1, the steps for upgrading the client and upgrading the ZIP package cannot be skipped. Otherwise, the patches of components such as Spark, HDFS, and Flink cannot be used, and jobs submitted on the Spark client will fail to run.
- After the MRS 3.2.0-LTS.1.6 patch is installed or uninstalled, restart the Flink, YARN, HDFS, MapReduce, Ranger, HetuEngine, Flume, Hive, Kafka, and Spark2x services on FusionInsight Manager to apply the patch. During restart, some services may be unavailable for a short period. To ensure service continuity, restart the components in off-peak hours. Before uninstalling the patch, log in to FusionInsight Manager, and choose System > Third-Party AD to disable AD interconnection.

When upgrading from MRS 3.2.0-LTS.1.4 to MRS 3.2.0-LTS.1.5 by applying patches, you only need to restart the components patched in MRS 3.2.0-LTS.1.5. However, when upgrading across multiple patch versions, you must restart all components patched by the intervening cumulative patches.
- If a client is manually installed inside or outside the cluster, you need to upgrade or roll back the client.
- Log in to the active node of the cluster.
cd /opt/Bigdata/patches/{Patch version}/download/
In all operations, replace {Patch version} with the actual patch version used in your environment. For example, if the installed patch is MRS_3.2.0-LTS.1.1, the value of {Patch version} is MRS_3.2.0-LTS.1.1.
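Example for this patch (assuming its directory name follows the same naming pattern, which is not confirmed here):
cd /opt/Bigdata/patches/MRS_3.2.0-LTS.1.6/download/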
- Copy the patch installation package to the /opt/ directory on the client node.
scp patch.tar.gz {IP address of the client node}:/opt/
Example:
scp patch.tar.gz 127.0.0.1:/opt/
- Log in to the node where the client is deployed.
ssh 127.0.0.1
- Run the following commands to create a patch directory and decompress the patch package:
mkdir -p /opt/{Patch version}
tar -zxf /opt/patch.tar.gz -C /opt/{Patch version}
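Example (again assuming this patch's directory name):
mkdir -p /opt/MRS_3.2.0-LTS.1.6
tar -zxf /opt/patch.tar.gz -C /opt/MRS_3.2.0-LTS.1.6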
- Upgrade or roll back the patch.
- Upgrade the patch on the client node.
Log in to the node where the client is deployed.
cd /opt/{Patch version}/client
sh upgrade_client.sh upgrade {Client installation directory}
Example:
sh upgrade_client.sh upgrade /opt/client/
- Roll back the patch on the client node (after the patch is uninstalled).
Log in to the node where the client is deployed.
cd /opt/{Patch version}/client
sh upgrade_client.sh rollback {Client installation directory}
Example:
sh upgrade_client.sh rollback /opt/client/
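Putting the client upgrade steps above together, the sequence is roughly the following sketch; {Patch version}, {Client installation directory}, and the client node IP address are placeholders that must be replaced with values from your environment.
# On the active cluster node: locate the patch package and copy it to the client node
cd /opt/Bigdata/patches/{Patch version}/download/
scp patch.tar.gz {IP address of the client node}:/opt/
# On the client node: unpack the package and run the client upgrade script
ssh {IP address of the client node}
mkdir -p /opt/{Patch version}
tar -zxf /opt/patch.tar.gz -C /opt/{Patch version}
cd /opt/{Patch version}/client
sh upgrade_client.sh upgrade {Client installation directory}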
- If the Spark service is installed on MRS 3.2.0-LTS.1, upgrade the ZIP package in HDFS on the active OMS node after the patch is installed.
- Log in to the active node of the cluster.
cd /opt/Bigdata/patches/{Patch version}/client/
source /opt/Bigdata/client/bigdata_env
- For a cluster in security mode, authenticate as a user who has HDFS permissions, as shown in the example below. Skip this step for a cluster in normal mode.
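In a security-mode (Kerberos) cluster, authentication is typically done with kinit; the placeholder below is only an illustration and must be replaced with a user that has HDFS permissions in your environment.
kinit {User with HDFS permissions}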
- Upgrade the package in HDFS.
- (Optional) Roll back the upgrade after the patch is uninstalled.
- Restart the JDBCServer2x instance of Spark on FusionInsight Manager.