Audit Log Table
The audit log table lets you view and analyze audit logs by executing SQL statements directly, instead of manually collecting and analyzing FE audit log files to check service volume and types. FE audit logs are periodically imported into a specified Doris table through Stream Load. The audit log table function is disabled by default; set the enable_audit_log_table parameter to true to enable it.
This function is available only for MRS 3.5.0 and later versions.
- Currently, the audit log table does not record operations such as Broker Load, Export, and Stream Load. View these operations in the FE audit log files instead.
- To record Stream Load operations in the audit log table, set the BE parameter enable_stream_load_record to true.
- The audit log table records only information about SQL statements that have been executed.
- By default, the maximum interval for writing data to the audit log table is 60 seconds, and the maximum amount of data that can be written in each batch is 50 MB. You can adjust the interval and amount by setting the max_batch_interval_sec and max_batch_size parameters.
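If max_batch_interval_sec and max_batch_size are exposed as mutable FE configuration items (an assumption; they may instead require editing the FE configuration file and restarting the instance), they could be adjusted at runtime from a MySQL client. A sketch, to be verified against your Doris version:

```sql
-- Assumed to be runtime-mutable FE configuration items; if the statement
-- reports the item is not mutable, change it in the FE configuration instead.
-- Flush buffered audit records to the table every 30 seconds:
ADMIN SET FRONTEND CONFIG ("max_batch_interval_sec" = "30");
-- The unit expected by max_batch_size is not stated here (the default is
-- described as 50 MB); confirm the unit before changing this value.
```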
Enabling Audit Log Table
- Log in to FusionInsight Manager, choose Cluster > Services > Doris > Configurations > All Configurations > FE(Role) > Log, and change the value of enable_audit_log_table to true.
- Click Save, and then click OK to save the configurations.
- Click Instances, select the affected FE instances, and choose More > Restart Instance. Enter the user password and click OK to apply the configuration.
- After the MySQL client is connected to Doris (for details, see Getting Started with Doris), run the following command to view the SQL statements that have been executed by Doris:
select * from __internal_schema.doris_audit_log_tbl__ limit 1;
Figure 1 SQL statements that have been executed
For details about each field, see Table 1.
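Beyond the basic query above, the fields in Table 1 support targeted analysis. For example, the following sketch lists the ten slowest statements together with who ran them and from where (field names assumed to match Table 1; adjust to your Doris version):

```sql
-- Ten slowest statements recorded in the audit log table.
SELECT `time`, `user`, client_ip, query_time, stmt
FROM __internal_schema.doris_audit_log_tbl__
ORDER BY query_time DESC
LIMIT 10;
```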
Table 1 Fields in the audit log table

Field | Type | Description
---|---|---
query_id | varchar(48) | Query task ID.
time | datetime | Query start time.
client_ip | varchar(200) | IP address and port of the client.
user | varchar(64) | Name of the user who performs the query.
catalog | varchar(128) | Name of the catalog to which the query belongs.
db | varchar(96) | Name of the database to which the query belongs.
state | varchar(8) | Status of the query result.
error_code | int | Error code if the query fails.
error_message | string | Error message if the query fails.
query_time | bigint | Query execution time, in ms.
scan_bytes | bigint | Total number of bytes scanned by the query.
scan_rows | bigint | Total number of rows scanned by the query.
return_rows | bigint | Number of rows returned in the query result.
stmt_id | int | Serially numbered statement ID.
is_query | tinyint | Whether the statement is a query statement.
frontend_ip | varchar(200) | IP address of the FE that executes the query.
cpu_time_ms | bigint | Total CPU time of the query, in ms.
sql_hash | varchar(48) | Hash value of the query.
sql_digest | varchar(48) | SQL digest. The value is empty for non-slow queries.
peak_memory_bytes | bigint | Peak memory usage of all BE nodes, in bytes.
stmt | string | The SQL statement that was executed.
reserve1 | string | Reserved field 1.
reserve2 | string | Reserved field 2.
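To check service volume and types as described at the start of this section, aggregate over these fields. The following sketch counts statements and failures per user per day, assuming a nonzero error_code indicates a failed statement (verify this convention against your data):

```sql
-- Daily statement count, failure count, and total scanned bytes per user.
SELECT DATE(`time`) AS day,
       `user`,
       COUNT(*) AS total_stmts,
       SUM(IF(error_code != 0, 1, 0)) AS failed_stmts,
       SUM(scan_bytes) AS total_scan_bytes
FROM __internal_schema.doris_audit_log_tbl__
GROUP BY DATE(`time`), `user`
ORDER BY day, total_stmts DESC;
```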