PGXC_WLM_SESSION_HISTORY
PGXC_WLM_SESSION_HISTORY displays load management information for completed jobs executed on all CNs. This view is used by Data Manager to query data from the database; the underlying data is cleared every 3 minutes. For details, see GS_WLM_SESSION_HISTORY.
The columns are similar to those in GS_WLM_SESSION_HISTORY. For details, see Table 1.
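Like other system views, PGXC_WLM_SESSION_HISTORY can be queried with standard SQL. A minimal sketch (the 10-second threshold is illustrative; `duration` is in milliseconds per the table below):

```sql
-- Find completed statements that ran longer than 10 seconds,
-- ordered by execution time.
SELECT nodename, username, start_time, duration, status, query
FROM pgxc_wlm_session_history
WHERE duration > 10000
ORDER BY duration DESC;
```

Because the view retains only about 3 minutes of history, such queries are best run shortly after the jobs of interest complete, or their results persisted elsewhere.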
| Name | Type | Description |
|---|---|---|
| datid | oid | OID of the database connected to the backend. |
| dbname | text | Name of the database connected to the backend. |
| schemaname | text | Schema name. |
| nodename | text | Name of the CN where the statement is run. |
| username | text | Username used for connecting to the backend. |
| application_name | text | Name of the application connected to the backend. |
| client_addr | inet | IP address of the client connected to the backend. If this column is null, the client is connected through a Unix socket on the server machine, or this is an internal process such as autovacuum. |
| client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This column is non-null only for IP connections, and only when log_hostname is enabled. |
| client_port | integer | TCP port number used by the client to communicate with the backend, or -1 if a Unix socket is used. |
| query_band | text | Job type, which can be set through the GUC parameter query_band. It is an empty string by default. |
| block_time | bigint | Time the statement is blocked before execution, including statement parsing and optimization time, in milliseconds. |
| start_time | timestamp with time zone | Start time of statement execution. |
| finish_time | timestamp with time zone | End time of statement execution. |
| duration | bigint | Execution time of the statement, in milliseconds. |
| estimate_total_time | bigint | Estimated execution time of the statement, in milliseconds. |
| status | text | Final statement execution status: finished for normal completion and aborted for abnormal termination. The status reflects execution on the database server; if server-side execution succeeds but an error occurs while the result set is returned, the status is still finished. |
| abort_info | text | Exception information displayed if the final execution status is aborted. |
| resource_pool | text | Resource pool used by the user. |
| control_group | text | Cgroup used by the statement. |
| estimate_memory | integer | Estimated memory used by the statement, in MB. |
| min_peak_memory | integer | Minimum memory peak of the statement across all DNs, in MB. |
| max_peak_memory | integer | Maximum memory peak of the statement across all DNs, in MB. |
| average_peak_memory | integer | Average memory usage during statement execution, in MB. |
| memory_skew_percent | integer | Memory usage skew of the statement among DNs. |
| spill_info | text | Spill information of the statement on all DNs. None: the statement has not spilled to disk on any DN. All: the statement has spilled to disk on all DNs. [a:b]: the statement has spilled to disk on a of b DNs. |
| min_spill_size | integer | Minimum spilled data among all DNs when a spill occurs, in MB. The default value is 0. |
| max_spill_size | integer | Maximum spilled data among all DNs when a spill occurs, in MB. The default value is 0. |
| average_spill_size | integer | Average spilled data among all DNs when a spill occurs, in MB. The default value is 0. |
| spill_skew_percent | integer | DN spill skew when a spill occurs. |
| min_dn_time | bigint | Minimum execution time of the statement across all DNs, in milliseconds. |
| max_dn_time | bigint | Maximum execution time of the statement across all DNs, in milliseconds. |
| average_dn_time | bigint | Average execution time of the statement across all DNs, in milliseconds. |
| dntime_skew_percent | integer | Execution time skew of the statement among DNs. |
| min_cpu_time | bigint | Minimum CPU time of the statement across all DNs, in milliseconds. |
| max_cpu_time | bigint | Maximum CPU time of the statement across all DNs, in milliseconds. |
| total_cpu_time | bigint | Total CPU time of the statement across all DNs, in milliseconds. |
| cpu_skew_percent | integer | CPU time skew of the statement among DNs. |
| min_peak_iops | integer | Minimum I/O peak of the statement on all DNs (times/s for column-store tables and 10,000 times/s for row-store tables). This function is disabled in clusters of version 8.1.3, so you are advised not to use this column to analyze I/O problems. |
| max_peak_iops | integer | Maximum I/O peak of the statement on all DNs (times/s for column-store tables and 10,000 times/s for row-store tables). This function is disabled in clusters of version 8.1.3, so you are advised not to use this column to analyze I/O problems. |
| average_peak_iops | integer | Average I/O peak of the statement on all DNs (times/s for column-store tables and 10,000 times/s for row-store tables). This function is disabled in clusters of version 8.1.3, so you are advised not to use this column to analyze I/O problems. |
| iops_skew_percent | integer | I/O skew of the statement among DNs. This function is disabled in clusters of version 8.1.3, so you are advised not to use this column to analyze I/O problems. |
| warning | text | Warning information, including warnings related to SQL self-diagnosis tuning. |
| queryid | bigint | Internal query ID used for statement execution. |
| query | text | Executed statement. |
| query_plan | text | Execution plan of the statement. |
| node_group | text | Logical cluster of the user running the statement. |
| pid | bigint | PID of the backend thread executing the statement. |
| lane | text | Fast or slow lane in which the statement is executed. |
| unique_sql_id | bigint | ID of the normalized unique SQL statement. |
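Because query_band is a session-level GUC parameter, a session can tag its statements before running a workload and later use the tag to filter this view. A sketch under the assumption that the tag value `etl_nightly` is chosen by the operator:

```sql
-- Tag subsequent statements in this session as nightly ETL work.
SET query_band = 'etl_nightly';

-- ... run the workload ...

-- Afterwards, summarize the completed ETL statements,
-- for example their spill behavior (sizes are in MB).
SELECT query_band,
       count(*)            AS statements,
       max(max_spill_size) AS max_spill_mb
FROM pgxc_wlm_session_history
WHERE query_band = 'etl_nightly'
GROUP BY query_band;
```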