MRS 2.1.0.10 Patch Description
Basic Information
Patch Version: MRS 2.1.0.10

Release Date: 2020-09-21
Resolved Issues

List of resolved issues in MRS 2.1.0.10:
- MRS Manager
  - After the patch is installed, new queue configurations in the capacity-scheduler.xml file are no longer lost during cluster scale-out.
  - Full-link monitoring can be rolled back.
- Big data components
  - The failure to assign Hive permissions on Spark is resolved.
  - If no queue is specified, tasks are submitted to the launcher-job queue by default; task running is not affected.
List of resolved issues in MRS 2.1.0.9:
- MRS Manager
  - The MRS executor memory overflow is resolved.
  - The cluster scale-out process is optimized.
  - The problem that a SQL statement is incorrectly assembled when a SparkSQL value contains spaces is resolved.
  - The problem that HiveSQL jobs occasionally fail to be submitted is resolved.
  - The permission control for downloading the keytab file is optimized.
- Big data components
  - The Presto permission model now takes effect when a Presto role name contains uppercase letters.
  - The problem that Hive partitions are deleted slowly is resolved.
  - The problem that the token expires after Spark runs for a long time is resolved.
List of resolved issues in MRS 2.1.0.8:
- MRS Manager
  - The problem that ECS API traffic is throttled when OBS is accessed through an agency is resolved.
  - Multiple users can log in to MRS Manager at the same time.
  - Full-link monitoring is supported.
- MRS big data components
  - Carbon is upgraded to 2.0.
  - The HBASE-18484 issue is resolved.
List of resolved issues in MRS 2.1.0.7:
- MRS Manager
  - The problem that data and files are displayed incorrectly when a field contains a newline character in a DLF+Presto query is resolved.
  - Presto query results can be saved as a file.
List of resolved issues in MRS 2.1.0.6:
- MRS Manager
  - The problem that disk I/O usage monitoring data is inaccurate is resolved.
  - The problem that the Spark job status is occasionally not updated is resolved.
  - Job running failures are resolved.
  - The patch mechanism is optimized.
- MRS big data components
  - HBase exceptions are rectified.
  - The problem that the system responds slowly when Hive roles are bound to permissions is resolved.
List of resolved issues in MRS 2.1.0.5:
- MRS big data components
  - Impala supports the ObsFileSystem function.
  - The timeout interval of the MRS Manager page and the native pages of components can be configured.
  - The problem that Hive permission binding freezes is resolved.
  - The data connection failure is resolved.
List of resolved issues in MRS 2.1.0.3:
- MRS Manager
  - Problems with highly concurrent job submission through the Manager executor are resolved.
- MRS big data components
  - The data insertion failure in Hive on Tez is fixed.
List of resolved issues in MRS 2.1.0.2:
- MRS Manager
  - The problem that no monitoring information is displayed after NodeAgent is restarted is resolved.
  - The problem that memory overflow occurs in the Manager executor process when a job remains in the submitting state for a long time is resolved.
  - The Manager executor can be configured for highly concurrent job submission.
  - The problem that new Kafka topics are not displayed on the MRS Manager management plane is resolved.
  - The problem that HBase table permission control does not take effect when a security cluster's APIs are called to submit a Spark Submit job that operates on an HBase table is resolved.
  - The MRS Manager patch mechanism is optimized.
- MRS big data components
  - The slow execution of the load data inpath command in Spark is optimized.
  - Column names containing the dollar sign ($) can be used in Spark table creation.
  - OBS-related problems are resolved.
List of resolved issues in MRS 2.1.0.1:
- MRS Manager
  - The return results of Hive SQL statements submitted by V2 jobs are optimized, and the problem that V2 jobs fail to be submitted using an agency token is resolved.
- MRS big data components
  - The HiveServer out-of-memory (OOM) issue in MRS Hive is resolved (HIVE-10970 and HIVE-22275).
Compatibility with Other Patches
The MRS 2.1.0.10 patch package contains all patches released for MRS 2.1.0. |
Vulnerability Disclosure

A remote code execution vulnerability in Spark has been fixed. For details about the vulnerability, see CVE-2020-9480.
Impact of Patch Installation
- During the installation of the MRS 2.1.0.10 patch, MRS Manager is restarted, and components such as Hive, Impala, Spark, HDFS, YARN, MapReduce, Presto, HBase, and Tez, together with their dependent services, are restarted in rolling mode. MRS Manager is temporarily unavailable while it restarts; services are not interrupted during the rolling restart.
- After installing the MRS 2.1.0.10 patch, you need to download and install all clients again, including the original clients on the Master nodes and the clients you set up on other nodes in the VPC.
- For details about how to fully update the original clients on the active and standby Master nodes, see Updating Client Configurations (Version 2.x or Earlier).
- For details about how to fully reinstall the clients you set up, see Installing a Client (MRS 2.x or Earlier).
- You are advised to back up the old clients before reinstalling the new ones (a backup sketch follows this list).
- If you have modified client configurations based on the service scenario, modify them again after reinstalling the clients.
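The document does not state where your clients are installed, so the following is a minimal backup sketch: /opt/client is an assumed installation path, and you should substitute the directory you actually installed the client in.

```bash
# Minimal sketch: back up an existing client before reinstalling.
# /opt/client is an ASSUMED installation path; replace it with your
# actual client directory before running.
CLIENT_DIR=/opt/client
cp -r "${CLIENT_DIR}" "${CLIENT_DIR}_backup_$(date +%Y%m%d)"
```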
- (Optional) The timeout interval of the MRS Manager page and the native pages of components is configurable. To change it, manually modify the following configurations:
- Change the session timeout interval of the web and CAS services on all Master nodes (see the sketch after these sub-steps).
- In /opt/Bigdata/tomcat/webapps/cas/WEB-INF/web.xml, change the value of <session-timeout>20</session-timeout>. The unit is minutes.
- In /opt/Bigdata/tomcat/webapps/web/WEB-INF/web.xml, change the value of <session-timeout>20</session-timeout>. The unit is minutes.
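A minimal sketch of these two edits, assuming the files still contain the default value of 20 and using 60 minutes as an example timeout; run it as user omm on every Master node and adjust the value as needed:

```bash
# Sketch: set the web/CAS session timeout to 60 minutes (example value).
# Assumes both files still contain <session-timeout>20</session-timeout>.
TIMEOUT_MIN=60
for f in /opt/Bigdata/tomcat/webapps/cas/WEB-INF/web.xml \
         /opt/Bigdata/tomcat/webapps/web/WEB-INF/web.xml; do
  cp "$f" "$f.bak"   # keep a backup of the original file
  sed -i "s|<session-timeout>20</session-timeout>|<session-timeout>${TIMEOUT_MIN}</session-timeout>|" "$f"
done
```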
- Change the TGT validity period of CAS on all Master nodes.
In /opt/Bigdata/tomcat/webapps/cas/WEB-INF/spring-configuration/ticketExpirationPolicies.xml, change 1200 in p:maxTimeToLiveInSeconds="${tgt.maxTimeToLiveInSeconds:1200}" and p:timeToKillInSeconds="${tgt.timeToKillInSeconds:1200}" to the desired timeout interval, in seconds (as sketched below).
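A minimal sketch of this edit, assuming both defaults are still 1200 and using 7200 seconds as an example value:

```bash
# Sketch: set the CAS TGT validity period to 7200 seconds (example value).
# Assumes both attributes still carry the default fallback of 1200.
TGT_SEC=7200
F=/opt/Bigdata/tomcat/webapps/cas/WEB-INF/spring-configuration/ticketExpirationPolicies.xml
cp "$F" "$F.bak"   # keep a backup of the original file
sed -i -e "s/tgt.maxTimeToLiveInSeconds:1200/tgt.maxTimeToLiveInSeconds:${TGT_SEC}/" \
       -e "s/tgt.timeToKillInSeconds:1200/tgt.timeToKillInSeconds:${TGT_SEC}/" "$F"
```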
- Restart the Tomcat service on the active Master node.
- On the active Master node, run the netstat -anp | grep 28443 | grep LISTEN command as user omm to query the Tomcat process ID.
- Run the kill -9 {pid} command, where {pid} is the process ID obtained in the previous step.
- Wait for the process to restart automatically. Run the netstat -anp | grep 28443 | grep LISTEN command again to check whether the process has started; if the command produces output, the process started successfully. These steps are consolidated in the sketch below.
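A consolidated sketch of the restart steps, run as user omm on the active Master node; it assumes the typical netstat column layout, in which the seventh field of the LISTEN line is "{pid}/{program}":

```bash
# Sketch: restart the Tomcat service on the active Master node (run as user omm).
# Extract the PID from the LISTEN line (assumed format: "... LISTEN <pid>/java").
PID=$(netstat -anp | grep 28443 | grep LISTEN | awk '{print $7}' | cut -d/ -f1)
kill -9 "${PID}"
# The process restarts automatically; after a short wait, verify it is listening again.
sleep 30
netstat -anp | grep 28443 | grep LISTEN && echo "Tomcat restarted successfully"
```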
- Add or modify the configuration items for each component, setting each value to the timeout interval, in seconds.
- HDFS/MapReduce/YARN: Add the custom configuration item http.server.session.timeout.secs.
- Spark: Change the value of spark.session.maxAge.
- Hive: Add the custom configuration item http.server.session.timeout.secs.
When saving the configuration items, you can choose not to restart the affected services or instances immediately; restart them when services are not busy.