ClickHouse Log Overview
Logs of MRS 3.2.0 and Later Versions
Log path: ClickHouse logs are stored in ${BIGDATA_LOG_HOME}/clickhouse by default.
- ClickHouse run logs: /var/log/Bigdata/clickhouse/clickhouseServer/*.log
- Balancer run logs: /var/log/Bigdata/clickhouse/balance/*.log
- Data migration logs: /var/log/Bigdata/clickhouse/migration/${task_name}/clickhouse-copier_{timestamp}_{processId}/copier.log
- ClickHouse audit logs: /var/log/Bigdata/audit/clickhouse/clickhouse-server.audit.log
Log archiving rules:
- ClickHouse compresses and archives logs. By default, a log file is compressed when its size exceeds 100 MB.
- The compressed file is named in the format <Original log name>.[ID].gz.
- A maximum of 10 latest compressed files are retained by default. The number of compressed files can be configured on Manager. A quick way to check the archives on a node is sketched after this list.
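As a quick illustration of these rules, the following minimal sketch (assuming Python 3 is available on a ClickHouseServer node and the default log directories listed above are used) scans for rotated archives matching the <Original log name>.[ID].gz pattern and flags any log whose archive count exceeds the default retention of 10.

```python
#!/usr/bin/env python3
# Minimal sketch: inspect rotated ClickHouse log archives on a node.
# Assumes the default log directory layout described above; adjust LOG_DIRS if
# your cluster uses a non-default ${BIGDATA_LOG_HOME}.
import os
import re
from collections import defaultdict

LOG_DIRS = [
    "/var/log/Bigdata/clickhouse/clickhouseServer",
    "/var/log/Bigdata/clickhouse/balance",
]
# Rotated archives are named <Original log name>.[ID].gz, for example clickhouse-server.log.1.gz.
ARCHIVE_RE = re.compile(r"^(?P<base>.+\.log)\.(?P<id>\d+)\.gz$")
DEFAULT_RETENTION = 10  # default number of compressed files kept (configurable on Manager)

def count_archives(log_dir):
    """Return a mapping of original log name -> number of compressed archives."""
    counts = defaultdict(int)
    for name in os.listdir(log_dir):
        match = ARCHIVE_RE.match(name)
        if match:
            counts[match.group("base")] += 1
    return counts

for log_dir in LOG_DIRS:
    if not os.path.isdir(log_dir):
        continue
    for base, count in sorted(count_archives(log_dir).items()):
        note = " (exceeds default retention)" if count > DEFAULT_RETENTION else ""
        print(f"{log_dir}/{base}: {count} archived file(s){note}")
```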
| Log Type | Log File Name | Description |
|---|---|---|
| ClickHouse log | /var/log/Bigdata/clickhouse/clickhouseServer/clickhouse-server.err.log | Path of ClickHouseServer error log files |
| | /var/log/Bigdata/clickhouse/clickhouseServer/checkService.log | Path of key ClickHouseServer run log files |
| | /var/log/Bigdata/clickhouse/clickhouseServer/clickhouse-server.log | Path of key ClickHouseServer run log files |
| | /var/log/Bigdata/clickhouse/clickhouseServer/ugsync.log | User role synchronization tool log |
| | /var/log/Bigdata/clickhouse/clickhouseServer/prestart.log | ClickHouse prestart log |
| | /var/log/Bigdata/clickhouse/clickhouseServer/start.log | ClickHouse startup log |
| | /var/log/Bigdata/clickhouse/clickhouseServer/checkServiceHealthCheck.log | ClickHouse health check log |
| | /var/log/Bigdata/clickhouse/clickhouseServer/checkugsync.log | User role synchronization check log |
| | /var/log/Bigdata/clickhouse/clickhouseServer/checkDisk.log | Path of ClickHouse disk check log files |
| | /var/log/Bigdata/clickhouse/clickhouseServer/backup.log | Path of log files generated when ClickHouse backup and restoration operations are performed on Manager |
| | /var/log/Bigdata/clickhouse/clickhouseServer/stop.log | ClickHouse stop log |
| | /var/log/Bigdata/clickhouse/clickhouseServer/postinstall.log | Invocation log of the ClickHouse postinstall.sh script |
| | /var/log/Bigdata/clickhouse/balance/start.log | Path of ClickHouseBalancer startup log files |
| | /var/log/Bigdata/clickhouse/balance/error.log | Path of ClickHouseBalancer error log files |
| | /var/log/Bigdata/clickhouse/balance/access_http.log | Path of HTTP log files generated during ClickHouseBalancer running |
| | /var/log/Bigdata/clickhouse/balance/access_tcp.log | Path of TCP log files generated during ClickHouseBalancer running |
| | /var/log/Bigdata/clickhouse/balance/checkService.log | ClickHouseBalancer service check log |
| | /var/log/Bigdata/clickhouse/balance/postinstall.log | Invocation log of the ClickHouseBalancer postinstall.sh script |
| | /var/log/Bigdata/clickhouse/balance/prestart.log | Path of ClickHouseBalancer prestart log files |
| | /var/log/Bigdata/clickhouse/balance/stop.log | Path of ClickHouseBalancer stop log files |
| | /var/log/coredump/clickhouse-*.core.gz | Compressed package of memory dump files generated after the ClickHouse process breaks down. This log is available only in MRS 3.3.0 and later versions. |
| Data migration log | /var/log/Bigdata/clickhouse/migration/${task_name}/clickhouse-copier_{timestamp}_{processId}/copier.log | Run log generated when the migration tool is used as described in Migrating Data Between ClickHouseServer Nodes in a Cluster |
| | /var/log/Bigdata/clickhouse/migration/${task_name}/clickhouse-copier_{timestamp}_{processId}/copier.err.log | Error log generated when the migration tool is used as described in Migrating Data Between ClickHouseServer Nodes in a Cluster |
| clickhouse-tomcat log | /var/log/Bigdata/tomcat/clickhouse/web_clickhouse.log | ClickHouse custom UI run log |
| | /var/log/Bigdata/tomcat/audit/clickhouse/clickhouse_web_audit.log | ClickHouse data migration audit log |
| ClickHouse audit log | /var/log/Bigdata/audit/clickhouse/clickhouse-server-audit.log | Path of ClickHouse audit log files |
Logs of Versions Earlier Than MRS 3.2.0
Log path: ClickHouse log files are stored in ${BIGDATA_LOG_HOME}/clickhouse by default.
Log archive rule: Automatic log compression and archiving is enabled. By default, when a log file exceeds 100 MB, it is compressed into a file named in the format <Original log file name>.[No.].gz. A maximum of 10 latest compressed files are retained by default. The number of compressed files can be configured on Manager.
| Log Type | Log File Name | Description |
|---|---|---|
| Run log | /var/log/Bigdata/clickhouse/clickhouseServer/clickhouse-server.err.log | Path of ClickHouseServer error log files |
| | /var/log/Bigdata/clickhouse/clickhouseServer/checkService.log | Path of key ClickHouseServer run log files |
| | /var/log/Bigdata/clickhouse/clickhouseServer/clickhouse-server.log | Path of key ClickHouseServer run log files |
| | /var/log/Bigdata/clickhouse/balance/start.log | Path of ClickHouseBalancer startup log files |
| | /var/log/Bigdata/clickhouse/balance/error.log | Path of ClickHouseBalancer error log files |
| | /var/log/Bigdata/clickhouse/balance/access_http.log | Path of ClickHouseBalancer run log files |
| Data migration log | /var/log/Bigdata/clickhouse/migration/${task_name}/clickhouse-copier_{timestamp}_{processId}/copier.log | Run log generated when the migration tool is used as described in Migrating Data Between ClickHouseServer Nodes in a Cluster |
| | /var/log/Bigdata/clickhouse/migration/${task_name}/clickhouse-copier_{timestamp}_{processId}/copier.err.log | Error log generated when the migration tool is used as described in Migrating Data Between ClickHouseServer Nodes in a Cluster |
Log Level
Table 3 describes the log levels supported by ClickHouse.
Run logs are classified into five levels: error, warning, trace, information, and debug, in descending order of priority. Logs at or above the configured level are recorded, so the higher the configured level, the fewer logs are recorded.
| Level | Description |
|---|---|
| error | Logs of this level record error information about system running. |
| warning | Logs of this level record exception information about the current event processing. |
| trace | Logs of this level record trace information about the current event processing. |
| information | Logs of this level record normal running status information about the system and events. |
| debug | Logs of this level record system running and debugging information. |
To modify log levels, perform the following operations:
- Log in to FusionInsight Manager.
- Choose Cluster > Services > ClickHouse > Configurations.
- Select All Configurations.
- On the menu bar on the left, select the log menu of the target role.
- Select a desired log level.
- Click Save. Then, click OK.
The configurations take effect immediately without the need to restart the service.
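Because the change takes effect without a restart, one quick verification is to confirm that entries at the newly configured level appear in the run log. The following minimal sketch (assuming Python 3 on the node, the default run log path, and the <Level> tag format shown in the examples in the next section; the 2,000-line tail is an arbitrary illustration value) counts recent run-log lines per level.

```python
#!/usr/bin/env python3
# Minimal sketch: count recent run-log entries per level to confirm a log level
# change has taken effect. Assumes the default run log path and that each entry
# carries a "<Level>" tag as shown in the log format examples below.
import re
from collections import Counter

RUN_LOG = "/var/log/Bigdata/clickhouse/clickhouseServer/clickhouse-server.log"
LEVEL_RE = re.compile(r"<(Error|Warning|Trace|Information|Debug)>")
TAIL_LINES = 2000  # only look at the most recent lines

with open(RUN_LOG, encoding="utf-8", errors="replace") as f:
    recent_lines = f.readlines()[-TAIL_LINES:]

counts = Counter()
for line in recent_lines:
    match = LEVEL_RE.search(line)
    if match:
        counts[match.group(1)] += 1

for level in ("Error", "Warning", "Trace", "Information", "Debug"):
    print(f"{level:<12}{counts.get(level, 0)}")
```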
Log Format
The following table lists the ClickHouse log format:
| Log Type | Format | Example |
|---|---|---|
| ClickHouse run log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log level>\|<Name of the thread that generates the log>\|<Message in the log>\|<Location where the log event occurs> | 2021.02.23 15:26:30.691301 [ 6085 ] {} <Error> DynamicQueryHandler: Code: 516, e.displayText() = DB::Exception: default: Authentication failed: password is incorrect or there is no user with such name, Stack trace (when copying this message, always include the lines below): 0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1250e59c |
| clickhouse-tomcat run log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log level>\|<Name of the thread that generates the log>\|<Message in the log>\|<Location where the log event occurs> | 2022-08-16 12:55:12,109 \| INFO \| pool-7-thread-1 \| zookeeper is secure. \| com.huawei.bigdata.om.extui.clickhouse.service.impl.QueryServiceImpl.initAuthContext(QueryServiceImpl.java:136) |
| Data migration log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Log level>\|<Name of the thread that generates the log>\|<Message in the log>\|<Location where the log event occurs> | 2022.08.07 14:41:01.814235 [ 28651 ] {} <Debug> ClusterCopier: Task /clickhouse/copier_tasks/TEST0807_02/tables/dblv85.startsea_zh_imoriginck_new/20201031/piece_4/shards/1 has been successfully executed by 8%2D5%2D226%2D156#20220807124849_28651 |
| Audit log | <yyyy-MM-dd HH:mm:ss,SSS>\|<Query ID>\|<Log level>\|<Name of the thread that generates the log>\|<Message in the log>\|<Location where the log event occurs> | 2022.08.16 20:58:16.723643 [ 11382 ] {cc9554b6-8a26-42e9-8ab8-d848500544e6} <Information> executeQuery_audit [executeQuery.cpp:202] : (0 from 192.168.64.81:45204, user: clickhouse, using experimental parser) select shard_num, host_name, host_address from system.clusters format JSON |
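The run log and audit log examples above share a visible layout: a microsecond-precision timestamp, a thread ID in square brackets, an optional query ID in curly braces, a <Level> tag, and the message. The following minimal sketch splits a line into these parts with a regular expression derived only from the examples shown here; it is a best-effort illustration, not an official format specification.

```python
#!/usr/bin/env python3
# Minimal sketch: split a ClickHouse run/audit log line into its visible parts.
# The regex is derived from the example lines above (timestamp, [thread ID],
# {query ID}, <Level>, message); it is not an official format specification.
import re

LINE_RE = re.compile(
    r"^(?P<timestamp>\d{4}\.\d{2}\.\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[ (?P<thread_id>\d+) \] "
    r"\{(?P<query_id>[^}]*)\} "
    r"<(?P<level>\w+)> "
    r"(?P<message>.*)$"
)

# Example line copied from the audit log example in the table above.
sample = (
    "2022.08.16 20:58:16.723643 [ 11382 ] "
    "{cc9554b6-8a26-42e9-8ab8-d848500544e6} <Information> "
    "executeQuery_audit [executeQuery.cpp:202] : (0 from 192.168.64.81:45204, "
    "user: clickhouse, using experimental parser) select shard_num, host_name, "
    "host_address from system.clusters format JSON"
)

match = LINE_RE.match(sample)
if match:
    for field, value in match.groupdict().items():
        print(f"{field:<10}{value}")
```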