Introduction to HDFS Logs
Log Description
Log path: The default storage path of HDFS logs is /var/log/Bigdata/hdfs/<role name>.
- NameNode: /var/log/Bigdata/hdfs/nn (run logs) and /var/log/Bigdata/audit/hdfs/nn (audit logs)
- DataNode: /var/log/Bigdata/hdfs/dn (run logs) and /var/log/Bigdata/audit/hdfs/dn (audit logs)
- ZKFC: /var/log/Bigdata/hdfs/zkfc (run logs) and /var/log/Bigdata/audit/hdfs/zkfc (audit logs)
- JournalNode: /var/log/Bigdata/hdfs/jn (run logs) and /var/log/Bigdata/audit/hdfs/jn (audit logs)
- Router: /var/log/Bigdata/hdfs/router (run logs) and /var/log/Bigdata/audit/hdfs/router (audit logs)
- HttpFS: /var/log/Bigdata/hdfs/httpfs (run logs) and /var/log/Bigdata/audit/hdfs/httpfs (audit logs)
Log archive rule: The automatic HDFS log compression function is enabled. By default, when the size of a log file exceeds 100 MB, it is automatically compressed into a log file named in the following format: <Original log file name>-<yyyy-mm-dd_hh-mm-ss>.[ID].log.zip. A maximum of 100 latest compressed files are retained. The number of compressed files can be configured on Manager.
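As an illustration of the naming rule, the following minimal Java sketch lists archived NameNode run logs. It assumes the [ID] placeholder in the pattern above is a numeric index; the directory path is the NameNode run log path listed earlier.

```java
import java.io.File;
import java.util.regex.Pattern;

public class ListArchivedLogs {
    public static void main(String[] args) {
        // NameNode run log directory (see the log paths above).
        File dir = new File("/var/log/Bigdata/hdfs/nn");
        // Assumption: [ID] is a numeric index, e.g.
        // hadoop-omm-namenode-host1.log-2015-01-26_18-43-42.1.log.zip
        Pattern archived = Pattern.compile(
            ".+-\\d{4}-\\d{2}-\\d{2}_\\d{2}-\\d{2}-\\d{2}\\.\\d+\\.log\\.zip");
        File[] files = dir.listFiles((d, name) -> archived.matcher(name).matches());
        if (files != null) {
            for (File f : files) {
                System.out.printf("%s (%d bytes)%n", f.getName(), f.length());
            }
        }
    }
}
```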
| Type | Name | Description |
|---|---|---|
| Run log | hadoop-<SSH_USER>-<process_name>-<hostname>.log | HDFS system log, which records most of the logs generated while the HDFS system is running. |
| | hadoop-<SSH_USER>-<process_name>-<hostname>.out | Log that records the HDFS running environment information. |
| | hadoop.log | Log that records the operations of the Hadoop client. |
| | hdfs-period-check.log | Log that records scripts executed periodically, including automatic balancing, data migration, and JournalNode data synchronization detection. |
| | <process_name>-<SSH_USER>-<DATE>-<PID>-gc.log | Garbage collection log file. |
| | postinstallDetail.log | Log that records the work performed after the HDFS service installation and before its startup. |
| | hdfs-service-check.log | Log that records whether the HDFS service starts successfully. |
| | hdfs-set-storage-policy.log | Log that records the HDFS data storage policies. |
| | cleanupDetail.log | Log that records the cleanup performed when the HDFS service is uninstalled. |
| | prestartDetail.log | Log that records cluster operations before the HDFS service startup. |
| | hdfs-recover-fsimage.log | Recovery log of the NameNode metadata. |
| | datanode-disk-check.log | Log that records the disk status checks during cluster installation and use. |
| | hdfs-availability-check.log | Log that records whether the HDFS service is available. |
| | hdfs-backup-fsimage.log | Backup log of the NameNode metadata. |
| | startDetail.log | Detailed log of the HDFS service startup. |
| | hdfs-blockplacement.log | Log that records the placement policy of HDFS blocks. |
| | upgradeDetail.log | Upgrade logs. |
| | hdfs-clean-acls-java.log | Log that records the clearing of deleted roles' ACL information by HDFS. |
| | hdfs-haCheck.log | Run log of the script that checks the active/standby state of the NameNode. |
| | <process_name>-jvmpause.log | Log that records JVM pauses during process running. |
| | hadoop-<SSH_USER>-balancer-<hostname>.log | Run log of HDFS automatic balancing. |
| | hadoop-<SSH_USER>-balancer-<hostname>.out | Log that records information about the environment where HDFS automatic balancing is executed. |
| | hdfs-switch-namenode.log | Run log of the HDFS active/standby switchover. |
| | hdfs-router-admin.log | Run log of mount table management operations. |
| | threadDump-<DATE>.log | Log that records the instance process stack. |
| Tomcat logs | hadoop-omm-host1.out, httpfs-catalina.<DATE>.log, httpfs-host-manager.<DATE>.log, httpfs-localhost.<DATE>.log, httpfs-manager.<DATE>.log, localhost_access_web_log.log | Tomcat run logs. |
| Audit log | hdfs-audit-<process_name>.log, ranger-plugin-audit.log | Audit log that records HDFS operations (such as creating, deleting, modifying, and querying files). |
| | SecurityAuth.audit | HDFS security audit log. |
Log Level
The following table lists the log levels supported by HDFS: FATAL, ERROR, WARN, INFO, and DEBUG. A program prints only the logs whose level is equal to or higher than the configured level; the higher the configured level, the fewer logs are recorded (see the sketch after the table).
| Level | Description |
|---|---|
| FATAL | Indicates critical error information about system running. |
| ERROR | Indicates error information about system running. |
| WARN | Indicates that an exception occurred during the current event processing. |
| INFO | Indicates that the system and events are running properly. |
| DEBUG | Indicates system running and debugging information. |
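To make the threshold rule concrete, the following minimal sketch (an illustration only, using the log4j 1.x Level API bundled with Hadoop) checks each level against a hypothetical configured level of WARN:

```java
import org.apache.log4j.Level;

public class LogLevelDemo {
    public static void main(String[] args) {
        // Hypothetical configured level for this illustration.
        Level threshold = Level.WARN;
        Level[] levels = {Level.FATAL, Level.ERROR, Level.WARN, Level.INFO, Level.DEBUG};
        for (Level l : levels) {
            // isGreaterOrEqual() is the comparison log4j applies when
            // deciding whether a record is printed.
            System.out.println(l + " printed: " + l.isGreaterOrEqual(threshold));
        }
        // Prints: FATAL, ERROR, and WARN are true; INFO and DEBUG are false.
    }
}
```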
To modify log levels, perform the following operations:
- Go to the All Configurations page of HDFS by referring to Modifying Cluster Service Configuration Parameters.
- On the left menu bar, select the log menu of the target role.
- Select a desired log level.
- Save the configuration. In the displayed dialog box, click OK to make the configurations take effect.
The configurations take effect immediately without restarting the service.
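The change can take effect without a restart because log4j levels are mutable at run time. The following minimal in-process sketch uses the log4j 1.x API shipped with Hadoop to illustrate this; it is not the mechanism Manager itself uses, and the logger name is chosen for illustration.

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class RuntimeLevelDemo {
    public static void main(String[] args) {
        // Illustrative logger name covering HDFS classes.
        Logger hdfsLogger = Logger.getLogger("org.apache.hadoop.hdfs");

        hdfsLogger.setLevel(Level.INFO);
        System.out.println("DEBUG enabled: " + hdfsLogger.isDebugEnabled()); // false

        // Raising verbosity takes effect immediately, with no restart.
        hdfsLogger.setLevel(Level.DEBUG);
        System.out.println("DEBUG enabled: " + hdfsLogger.isDebugEnabled()); // true
    }
}
```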
Log Formats
The following table lists the HDFS log formats.
| Type | Format | Example |
|---|---|---|
| Run log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log level>\|<Name of the thread that generates the log>\|<Message in the log>\|<Location where the log event occurs> | 2015-01-26 18:43:42,840 \| INFO \| IPC Server handler 40 on 8020 \| Rolling edit logs \| org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1096) |
| Audit log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log level>\|<Name of the thread that generates the log>\|<Message in the log>\|<Location where the log event occurs> | 2015-01-26 18:44:42,607 \| INFO \| IPC Server handler 32 on 8020 \| allowed=true ugi=hbase (auth:SIMPLE) ip=/10.177.112.145 cmd=getfileinfo src=/hbase/WALs/hghoulaslx410,16020,1421743096083/hghoulaslx410%2C16020%2C1421743096083.1422268722795 dst=null perm=null \| org.apache.hadoop.hdfs.server.namenode.FSNamesystem$DefaultAuditLogger.logAuditMessage(FSNamesystem.java:7950) |
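As an illustration only (not a tool shipped with HDFS), the following minimal Java sketch splits a run log line into the five pipe-separated fields defined above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RunLogParser {
    // Five pipe-separated fields, per the run log format above.
    private static final Pattern LINE = Pattern.compile(
        "(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})\\s*\\|\\s*(\\w+)"
        + "\\s*\\|\\s*(.*?)\\s*\\|\\s*(.*?)\\s*\\|\\s*(\\S+)");

    public static void main(String[] args) {
        String line = "2015-01-26 18:43:42,840 | INFO | IPC Server handler 40 on 8020 "
            + "| Rolling edit logs | "
            + "org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1096)";
        Matcher m = LINE.matcher(line);
        if (m.matches()) {
            System.out.println("time     = " + m.group(1));
            System.out.println("level    = " + m.group(2));
            System.out.println("thread   = " + m.group(3));
            System.out.println("message  = " + m.group(4));
            System.out.println("location = " + m.group(5));
        }
    }
}
```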