Loader Log Overview
Log Description
Log path: By default, Loader log files are stored in /var/log/Bigdata/loader/<Log category>:
- runlog: /var/log/Bigdata/loader/runlog (run logs)
- scriptlog: /var/log/Bigdata/loader/scriptlog/ (script execution logs)
- catalina: /var/log/Bigdata/loader/catalina (Tomcat startup and stop logs)
- audit: /var/log/Bigdata/loader/audit (audit logs)
Log archive rule:
The automatic compression and archiving function is enabled for Loader run logs and audit logs. By default, when a log file exceeds 10 MB, it is automatically compressed into an archive named according to the following rule: <Original log file name>-<yyyy-mm-dd_hh-mm-ss>.[ID].log.zip. A maximum of 20 latest compressed files are retained. The maximum number of compressed files can be configured on the Manager portal.
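As a sketch, the archive naming rule above can be matched with a regular expression. The parsing helper and the sample file name below are hypothetical, and the code assumes the brackets around the ID are literal characters, as written in the rule:

```python
import re

# Pattern for archived Loader log names, per the documented rule:
#   <Original log file name>-<yyyy-mm-dd_hh-mm-ss>.[ID].log.zip
ARCHIVE_RE = re.compile(
    r"^(?P<base>.+)-"                                # original log file name
    r"(?P<ts>\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2})"   # archive timestamp
    r"\.\[(?P<id>\d+)\]\.log\.zip$"                  # sequence ID
)

def parse_archive_name(name):
    """Return (base name, timestamp, ID) for an archived log file, or None."""
    m = ARCHIVE_RE.match(name)
    if not m:
        return None
    return m.group("base"), m.group("ts"), int(m.group("id"))

# Hypothetical archive name following the documented rule:
print(parse_archive_name("loader.log-2015-06-29_14-54-35.[1].log.zip"))
```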
| Log Type | Log File Name | Description |
|---|---|---|
| Run log | loader.log | Loader system log file that records most of the logs generated while the Loader system is running. |
| Run log | loader-omm-***-pid***-gc.log.*.current | Loader process GC log file. |
| Run log | sqoopInstanceCheck.log | Loader instance health check log file. |
| Audit log | default.audit | Loader operation audit log file that records operations such as adding, deleting, modifying, and querying jobs, as well as user logins. |
| Tomcat log | catalina.out | Tomcat run log file. |
| Tomcat log | catalina.<yyyy-mm-dd>.log | Tomcat run log file. |
| Tomcat log | host-manager.<yyyy-mm-dd>.log | Tomcat run log file. |
| Tomcat log | localhost_access_log.<yyyy-mm-dd>.txt | Tomcat run log file. |
| Tomcat log | manager.<yyyy-mm-dd>.log | Tomcat run log file. |
| Tomcat log | localhost.<yyyy-mm-dd>.log | Tomcat run log file. |
| Script log | postInstall.log | Log file generated during execution of the Loader installation script (postInstall.sh). |
| Script log | preStart.log | Pre-startup script log file of the Loader service. Before the Loader service starts, a series of preparation operations (such as generating the keytab file) are performed by preStart.sh; this file records them. |
| Script log | loader_ctl.log | Log file generated when Loader executes the service start/stop script (sqoop.sh). |
Log Level
The following table describes the log levels provided by Loader. The priorities of log levels are ERROR, WARN, INFO, and DEBUG, in descending order. Logs whose levels are higher than or equal to the configured level are printed; the higher the configured level, the fewer logs are printed.
| Level | Description |
|---|---|
| ERROR | Error information about the current event processing. |
| WARN | Exception information about the current event processing. |
| INFO | Normal running status information about the system and events. |
| DEBUG | System information and system debugging information. |
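The threshold rule described above can be sketched in a few lines of Python. The function name and numeric priorities are illustrative, not part of Loader itself:

```python
# Sketch of the level-threshold rule: a message is printed only when its
# priority is >= the configured level (priorities are illustrative).
PRIORITY = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3}

def should_print(message_level, configured_level):
    """Return True if a message at message_level passes the configured threshold."""
    return PRIORITY[message_level] >= PRIORITY[configured_level]

# With the level set to WARN, ERROR logs pass and INFO logs are suppressed:
print(should_print("ERROR", "WARN"))  # True
print(should_print("INFO", "WARN"))   # False
```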
To modify log levels, perform the following operations:
- Go to the All Configurations page of Loader by referring to Modifying Cluster Service Configuration Parameters.
- On the menu bar on the left, select the log menu of the target role.
- Select a desired log level.
- Save the configuration. In the dialog box that is displayed, click OK. Then restart the service for the configuration to take effect.
Log Formats
The following table lists the Loader log formats.
| Log Type | Format | Example |
|---|---|---|
| Run log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log Level>\|<Thread that generates the log>\|<Message in the log>\|<Location of the log event> | 2015-06-29 14:54:35,553 \| INFO \| [localhost-startStop-1] \| ConnectionRequestHandler initialized \| org.apache.sqoop.handler.ConnectionRequestHandler.<init>(ConnectionRequestHandler.java:100) |
| Audit log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log Level>\|default\|<Message in the log>\|<Location of the log event> | 2015-06-29 15:35:40,969 INFO default: UserName=admin, UserIP=10.52.0.111, Time=2015-06-29 15:35:40,969, Operation=submit, Resource=submission@21, Result=Failure, Detail={[reason:GET_SFTP_SESSION_FAILED:Failed to get sftp session - 10.162.0.35 (caused by: Auth cancel) ];[config:null]} |
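As a sketch, a run-log line in the format above can be split on its " | " separator into the five documented fields. The helper and field names below are illustrative (the format column does not name the fields this way), and the approach assumes the message field itself contains no " | " sequence:

```python
def parse_run_log(line):
    """Split a Loader run-log line into its five documented fields."""
    parts = [p.strip() for p in line.split(" | ", 4)]
    if len(parts) != 5:
        return None
    # Field names are illustrative labels for the documented format columns.
    keys = ("timestamp", "level", "thread", "message", "location")
    return dict(zip(keys, parts))

# The run-log example from the table above:
sample = ("2015-06-29 14:54:35,553 | INFO | [localhost-startStop-1] | "
          "ConnectionRequestHandler initialized | "
          "org.apache.sqoop.handler.ConnectionRequestHandler.<init>"
          "(ConnectionRequestHandler.java:100)")
rec = parse_run_log(sample)
print(rec["level"], rec["thread"])
```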