Configuring YARN Big Job Scanning
The big job scanning function of YARN monitors the local temporary files (such as shuffle files) and key HDFS directories (OBS is not supported) of Hive, HetuEngine, and Spark jobs, and reports an event when a job consumes excessive storage resources on local disks or in the key HDFS directories.
This section applies only to MRS 3.5.0 and later versions.
For details about the monitored HDFS directories, see Table 1.
Table 1 Monitored HDFS directories

| Component | Monitored HDFS Directory | Threshold |
|---|---|---|
| Hive | hdfs://hacluster/tmp/hive-scratch/*/ | 400 GB |
| HetuEngine | hdfs://hacluster/hetuserverhistory/*/coordinator/ | 100 GB |
| Spark | hdfs://hacluster/sparkJobHistory/ | 100 GB |
For details about the parameters, see Table 2.
Go to the All Configurations page of the YARN service by referring to Modifying Cluster Service Configuration Parameters and enter a parameter name in the search box.
- To enable the Hive component in the big job scanning feature, set hive-ext.record.mr.applicationid to true as follows (a verification sketch follows this list):
  Go to the All Configurations page of the Hive service by referring to Modifying Cluster Service Configuration Parameters, choose HiveServer (Role) > Custom, and add hive-ext.record.mr.applicationid to the hive.server.customized.configs parameter. Set the added parameter to true and save the configuration.
- Currently, big job scanning for Hive applies only to the MapReduce engine.
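After HiveServer restarts with the new configuration, you can confirm that the custom parameter took effect from a cluster client. A minimal sketch, assuming Beeline is available and any required authentication has already been completed:

```bash
# Print the effective value of the custom parameter; the output should
# contain hive-ext.record.mr.applicationid=true.
beeline -e "set hive-ext.record.mr.applicationid;"
```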
Table 2 Big job scanning parameters

| Parameter | Description | Default Value |
|---|---|---|
| hetu.job.hdfs.monitor.dir | Big directory monitoring path of HetuEngine jobs. The root directory cannot be monitored. If the path contains variable directories, such as user directories, replace them with /*/. | hdfs://hacluster/hetuserverhistory/*/coordinator/ |
| hetu.job.appId.parser.rule | Rule for extracting the job ID from the big directory monitoring path of HetuEngine jobs. | {appid} |
| hetu.job.hdfs.dir.threshold | Big directory threshold of HetuEngine jobs, in GB. If the threshold is exceeded, an event is reported. | 100 |
| hive.job.hdfs.monitor.dir | Big directory monitoring path of Hive jobs. The root directory cannot be monitored. If the path contains variable directories, such as user directories, replace them with /*/. | hdfs://hacluster/tmp/hive-scratch/*/ |
| hive.job.appId.parser.rule | Rule for extracting the job ID from the big directory monitoring path of Hive jobs. | {subdir}/{appid} |
| hive.job.hdfs.dir.threshold | Big directory threshold of Hive jobs, in GB. If the threshold is exceeded, an event is reported. | 400 |
| spark.job.hdfs.monitor.dir | Big directory monitoring path of Spark jobs. The root directory cannot be monitored. If the path contains variable directories, such as user directories, replace them with /*/. | hdfs://hacluster/sparkJobHistory/ |
| spark.job.appId.parser.rule | Rule for extracting the job ID from the big directory monitoring path of Spark jobs. | {appid} |
| spark.job.hdfs.dir.threshold | Big directory threshold of Spark jobs, in GB. If the threshold is exceeded, an event is reported. | 100 |
| job.monitor.local.thread.pool | Number of threads the NodeManager uses to collect information about monitored big jobs. | 50 |
| max.job.count | Maximum number of big jobs listed in a reported event. | 10 |
| job.monitor.local.dir.threshold | Threshold of a job's directory size on the NodeManager's local disk, in GB. An event is reported once this threshold is reached. | 20 |
| job.monitor.check.period | Interval for big job monitoring, in minutes. Setting the value to 0 disables big job monitoring. | 10 |
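The appId parser rules in Table 2 describe how a directory under the monitored path maps to a job ID. The following sketch illustrates one plausible reading of the Hive defaults with a hypothetical directory name (the user name and application ID are made up), and then lists the per-job usage that the scanner evaluates against the threshold:

```bash
# Hypothetical example: with hive.job.hdfs.monitor.dir set to
#   hdfs://hacluster/tmp/hive-scratch/*/
# and hive.job.appId.parser.rule set to {subdir}/{appid}, a directory such as
#   hdfs://hacluster/tmp/hive-scratch/testuser/application_1700000000000_0001
# is matched below the monitored prefix: {subdir} consumes "testuser" and
# {appid} yields application_1700000000000_0001 as the job ID.

# List per-job directory sizes under the monitored Hive path to see which
# jobs are approaching the 400 GB threshold:
hdfs dfs -du -h hdfs://hacluster/tmp/hive-scratch/*/
```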