Configuring Container Log Aggregation
Scenario
- After an application is complete, collect all of its container logs to HDFS at one time.
- While an application is running, periodically collect the log segments generated by its containers and save them to HDFS.
Configuration Description
Navigation path for setting parameters:
Go to the All Configurations tab page of YARN, enter the parameters listed in Table 1 in the search box, modify the parameters by referring to Modifying Cluster Service Configuration Parameters, and save the configuration. On the Dashboard tab page, choose More > Synchronize Configuration. After the synchronization is complete, restart the YARN service.
The yarn.nodemanager.remote-app-log-dir-suffix parameter must be configured on the Yarn client. The configurations on the ResourceManager, NodeManager, and JobHistory nodes must be the same as those on the Yarn client.
The periodic log collection function applies only to MapReduce applications, for which rolling output of log files must be configured. Table 3 describes the configurations in the Client installation path/Yarn/config/mapred-site.xml configuration file on the MapReduce client node.
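As a reference for the client-side requirement above, the following is a minimal sketch of the corresponding entries in the yarn-site.xml file on the Yarn client. The property names come from Table 1; the values shown are illustrative (the Hadoop default remote directory /tmp/logs and the default suffix logs are assumed), and the server-side roles are normally configured through the All Configurations page rather than by editing XML directly.

```xml
<!-- Sketch of client-side yarn-site.xml entries; values are illustrative. -->
<configuration>
  <!-- Root HDFS directory for aggregated container logs (assumed default). -->
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <!-- Suffix appended after the user name; must match the value configured
       on the ResourceManager, NodeManager, and JobHistory nodes. -->
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
</configuration>
```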
Table 1 Parameters

| Parameter | Description | Default Value |
|---|---|---|
| yarn.log-aggregation-enable | Whether to enable container log aggregation. NOTE: After the parameter value is changed, restart the Yarn service for the change to take effect. | true |
| yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds | Interval, in seconds, at which the NodeManager wakes up and uploads container logs. If this parameter is set to -1 or 0, rolling monitoring is disabled and logs are aggregated only after the application task is complete. The value must be greater than or equal to -1. | -1 |
| yarn.nodemanager.disk-health-checker.log-dirs.max-disk-utilization-per-disk-percentage | Maximum percentage of the Yarn disk quota on each disk that the container log directory can occupy. If the space occupied by the log directory exceeds this threshold, an extra log collection run is triggered outside the regular period to aggregate logs in rolling mode and release local disk space. | 25 |
| yarn.nodemanager.remote-app-log-dir-suffix | Name of the HDFS folder in which container logs are stored. Together with yarn.nodemanager.remote-app-log-dir, it forms the full path for storing container logs, that is, ${yarn.nodemanager.remote-app-log-dir}/${user}/${yarn.nodemanager.remote-app-log-dir-suffix}. NOTE: ${user} indicates the username that runs the task. | logs |
| yarn.nodemanager.log-aggregator.on-fail.remain-log-in-sec | Duration, in seconds, for retaining container logs on the local host after log collection fails. | 604800 |
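To illustrate how the parameters in Table 1 fit together, the following yarn-site.xml sketch enables log aggregation and periodic (rolling) collection. The property names come from Table 1; the 3600-second interval is an illustrative value rather than a recommendation, and in a cluster these settings are applied through the All Configurations page rather than by editing the file by hand.

```xml
<!-- Sketch only: illustrative values for the Table 1 parameters. -->
<configuration>
  <!-- Enable container log aggregation to HDFS. -->
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- Wake the NodeManager up every hour to upload log segments while the
       application is still running (illustrative value; -1 disables rolling). -->
  <property>
    <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
    <value>3600</value>
  </property>
</configuration>
```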
Go to the All Configurations page of MapReduce and enter the parameter names listed in Table 2 in the search box by referring to Modifying Cluster Service Configuration Parameters.
Table 2 Parameters

| Parameter | Description | Default Value |
|---|---|---|
| yarn.log-aggregation.retain-seconds | Duration for retaining aggregated logs in HDFS, in seconds | 1296000 |
| yarn.log-aggregation.retain-check-interval-seconds | Interval for checking whether aggregated logs in HDFS have exceeded the retention period, in seconds | 86400 |
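The retention settings in Table 2 can be expressed in XML form as follows; the values shown are simply the defaults from the table (15 days of retention, checked once a day) and are kept here only to show the property names side by side.

```xml
<!-- Sketch only: Table 2 retention parameters with their default values. -->
<configuration>
  <!-- Keep aggregated logs in HDFS for 15 days (1296000 seconds). -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>1296000</value>
  </property>
  <!-- Check once a day (86400 seconds) for aggregated logs to clean up. -->
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>86400</value>
  </property>
</configuration>
```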
Go to the All Configurations page of Yarn and enter the parameter names listed in Table 3 in the search box by referring to Modifying Cluster Service Configuration Parameters.
Table 3 Parameters

| Parameter | Description | Default Value |
|---|---|---|
| mapreduce.task.userlog.limit.kb | Maximum size, in KB, of a single task log file of a MapReduce application. When this size is reached, a new log file is generated. The value 0 indicates that the log file size is not limited. | 51200 |
| yarn.app.mapreduce.task.container.log.backups | Maximum number of task log backup files retained for a MapReduce application when ContainerRollingLogAppender (CRLA) is used. By default, ContainerLogAppender (CLA) is used and container logs are not rolled. CRLA is enabled when both mapreduce.task.userlog.limit.kb and this parameter are greater than 0. Setting this parameter to 0 disables rolling output. The value ranges from 0 to 999. | 10 |
| yarn.app.mapreduce.am.container.log.limit.kb | Maximum size, in KB, of a single ApplicationMaster log file of a MapReduce application. When this size is reached, a new log file is generated. The value 0 indicates that the log file size is not limited. | 51200 |
| yarn.app.mapreduce.am.container.log.backups | Maximum number of ApplicationMaster log backup files retained for a MapReduce application when CRLA is used. By default, CLA is used and ApplicationMaster logs are not rolled. CRLA is enabled for the ApplicationMaster when both yarn.app.mapreduce.am.container.log.limit.kb and this parameter are greater than 0. Setting this parameter to 0 disables rolling output. The value ranges from 0 to 999. | 20 |
| yarn.app.mapreduce.shuffle.log.backups | Maximum number of shuffle log backup files retained for a MapReduce application. syslog.shuffle uses CRLA when both yarn.app.mapreduce.shuffle.log.limit.kb and this parameter are greater than 0. Setting this parameter to 0 disables rolling output. The value ranges from 0 to 999. | 10 |
| yarn.app.mapreduce.shuffle.log.limit.kb | Maximum size, in KB, of a single shuffle log file of a MapReduce application. When this size is reached, a new log file is generated. The value 0 indicates that the log file size is not limited. The value must be greater than or equal to 0. | 51200 |
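Because periodic collection requires rolling output of MapReduce log files (see Table 3), the Client installation path/Yarn/config/mapred-site.xml file on the MapReduce client could contain entries along the following lines. The property names come from Table 3; the values shown are the defaults from the table and are meant only as a sketch of a configuration in which rolling output is active (both the size limit and the backup count are greater than 0).

```xml
<!-- Sketch only: Table 3 rolling-output parameters with their default values. -->
<configuration>
  <!-- Roll task logs at 50 MB and keep up to 10 backup files (enables CRLA). -->
  <property>
    <name>mapreduce.task.userlog.limit.kb</name>
    <value>51200</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.task.container.log.backups</name>
    <value>10</value>
  </property>
  <!-- Roll ApplicationMaster logs at 50 MB and keep up to 20 backup files. -->
  <property>
    <name>yarn.app.mapreduce.am.container.log.limit.kb</name>
    <value>51200</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.container.log.backups</name>
    <value>20</value>
  </property>
</configuration>
```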