GLOBAL_OPERATOR_HISTORY
GLOBAL_OPERATOR_HISTORY displays records of the operators in jobs that have been executed by the current user on all CNs.
| Name | Type | Description |
|---|---|---|
| queryid | bigint | Internal query ID used for statement execution |
| pid | bigint | Backend thread ID |
| plan_node_id | integer | Plan node ID in the execution plan of a query |
| plan_node_name | text | Name of the operator corresponding to the plan node ID |
| start_time | timestamp with time zone | Time when the operator starts to process the first data record |
| duration | bigint | Total execution time of the operator (unit: ms) |
| query_dop | integer | Degree of parallelism (DOP) of the operator |
| estimated_rows | bigint | Number of rows estimated by the optimizer |
| tuple_processed | bigint | Number of tuples returned by the operator |
| min_peak_memory | integer | Minimum peak memory used by the operator across all DNs (unit: MB) |
| max_peak_memory | integer | Maximum peak memory used by the operator across all DNs (unit: MB) |
| average_peak_memory | integer | Average peak memory used by the operator across all DNs (unit: MB) |
| memory_skew_percent | integer | Memory usage skew of the operator among DNs |
| min_spill_size | integer | Minimum amount of data spilled to disk across all DNs when a spill occurs (unit: MB; default: 0) |
| max_spill_size | integer | Maximum amount of data spilled to disk across all DNs when a spill occurs (unit: MB; default: 0) |
| average_spill_size | integer | Average amount of data spilled to disk across all DNs when a spill occurs (unit: MB; default: 0) |
| spill_skew_percent | integer | Spill skew among DNs when a spill occurs |
| min_cpu_time | bigint | Minimum execution time of the operator across all DNs (unit: ms) |
| max_cpu_time | bigint | Maximum execution time of the operator across all DNs (unit: ms) |
| total_cpu_time | bigint | Total execution time of the operator across all DNs (unit: ms) |
| cpu_skew_percent | integer | Execution time skew of the operator among DNs |
| warning | text | Warning information generated during operator execution |
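
The view can be queried like any other table to locate problematic operators in finished jobs. The following is a minimal sketch, assuming the view is accessible under its unqualified name (depending on the product version it may need to be schema-qualified, for example under a system performance schema); the 30% skew threshold is only an illustrative value.

```sql
-- Find operators that spilled to disk or show heavy memory skew,
-- ordered by total operator execution time.
SELECT queryid,
       plan_node_id,
       plan_node_name,
       duration,                -- total operator execution time (ms)
       max_peak_memory,         -- peak memory on the busiest DN (MB)
       memory_skew_percent,     -- memory usage skew among DNs
       max_spill_size           -- largest per-DN spill (MB; 0 if no spill)
FROM   global_operator_history
WHERE  max_spill_size > 0
   OR  memory_skew_percent > 30   -- example threshold, tune as needed
ORDER  BY duration DESC
LIMIT  20;
```

Operators returned by such a query are typical starting points for tuning: a large spill size suggests insufficient work memory for the operator, while a high skew percentage points to uneven data distribution across DNs.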