Updated on 2023-10-23 GMT+08:00

STATEMENT_HISTORY

STATEMENT_HISTORY displays information about statements executed on the current node. To query this view, you must have the sysadmin or monitor admin permission. Results can be queried only in the system database, not in user databases.
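As a minimal sketch of how the view is typically consulted, the query below lists the most recent statements recorded on the node. It assumes the view is reachable as dbe_perf.statement_history and that the session runs in the system database with the required permission; adjust the schema name to your deployment.

```sql
-- List the 20 most recent statements, newest first.
-- Assumes: dbe_perf.statement_history is the accessible name of this view.
SELECT dbname,
       user_name,
       start_time,
       finish_time - start_time AS elapsed,
       is_slow_sql,
       query
FROM dbe_perf.statement_history
ORDER BY start_time DESC
LIMIT 20;
```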

Table 1 STATEMENT_HISTORY columns

| Name | Type | Description |
| ---- | ---- | ----------- |
| dbname | name | Database name. |
| schemaname | name | Schema name. |
| origin_node | integer | Node name. |
| user_name | name | Username. |
| application_name | text | Name of the application that sends the request. |
| client_addr | text | IP address of the client that sends the request. |
| client_port | integer | Port number of the client that sends the request. |
| unique_query_id | bigint | ID of the normalized SQL statement. |
| debug_query_id | bigint | ID of the unique SQL statement. |
| query | text | Normalized SQL statement (available only on CNs). |
| start_time | timestamp with time zone | Time when the statement starts. |
| finish_time | timestamp with time zone | Time when the statement ends. |
| slow_sql_threshold | bigint | Threshold used to identify slow SQL statements. |
| transaction_id | bigint | Transaction ID. |
| thread_id | bigint | ID of the execution thread. |
| session_id | bigint | Session ID of the user. |
| n_soft_parse | bigint | Number of soft parses. The sum of n_soft_parse and n_hard_parse may exceed n_calls because subqueries are not counted in n_calls. |
| n_hard_parse | bigint | Number of hard parses. The sum of n_soft_parse and n_hard_parse may exceed n_calls because subqueries are not counted in n_calls. |
| query_plan | text | Statement execution plan. |
| n_returned_rows | bigint | Number of rows in the result set returned by a SELECT statement. |
| n_tuples_fetched | bigint | Number of rows fetched by random scans. |
| n_tuples_returned | bigint | Number of rows fetched by sequential scans. |
| n_tuples_inserted | bigint | Number of rows inserted. |
| n_tuples_updated | bigint | Number of rows updated. |
| n_tuples_deleted | bigint | Number of rows deleted. |
| n_blocks_fetched | bigint | Number of buffer block accesses. |
| n_blocks_hit | bigint | Number of buffer block hits. |
| db_time | bigint | Valid DB time, accumulated across threads when multiple threads are involved (unit: μs). |
| cpu_time | bigint | CPU time (unit: μs). |
| execution_time | bigint | Execution time in the executor (unit: μs). |
| parse_time | bigint | SQL parsing time (unit: μs). |
| plan_time | bigint | SQL plan generation time (unit: μs). |
| rewrite_time | bigint | SQL rewriting time (unit: μs). |
| pl_execution_time | bigint | PL/pgSQL execution time (unit: μs). |
| pl_compilation_time | bigint | PL/pgSQL compilation time (unit: μs). |
| data_io_time | bigint | I/O time (unit: μs). |
| net_send_info | text | Network status of messages sent over a physical connection, including the time (unit: μs), number of calls, and throughput (unit: byte). In a distributed database, physical connections carry traffic between CNs, between CNs and clients, and between CNs and DNs. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}. |
| net_recv_info | text | Network status of messages received over a physical connection, including the time (unit: μs), number of calls, and throughput (unit: byte). In a distributed database, physical connections carry traffic between CNs, between CNs and clients, and between CNs and DNs. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}. |
| net_stream_send_info | text | Network status of messages sent over a logical connection, including the time (unit: μs), number of calls, and throughput (unit: byte). In a distributed database, DNs of different shards communicate with each other over logical connections. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}. |
| net_stream_recv_info | text | Network status of messages received over a logical connection, including the time (unit: μs), number of calls, and throughput (unit: byte). In a distributed database, DNs of different shards communicate with each other over logical connections. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}. |
| lock_count | bigint | Number of locks. |
| lock_time | bigint | Time spent acquiring locks. |
| lock_wait_count | bigint | Number of lock waits. |
| lock_wait_time | bigint | Time spent waiting for locks. |
| lock_max_count | bigint | Maximum number of locks. |
| lwlock_count | bigint | Number of lightweight locks (reserved). |
| lwlock_wait_count | bigint | Number of lightweight lock waits. |
| lwlock_time | bigint | Time spent on lightweight locking (reserved). |
| lwlock_wait_time | bigint | Time spent waiting for lightweight locks. |
| details | bytea | List of statement lock events, recorded in chronological order. The number of records is limited by the track_stmt_details_size parameter. This column is binary and must be decoded with the parsing function pg_catalog.statement_detail_decode; for details, see Other Functions. Recorded events: start locking, complete locking, start lock waiting, complete lock waiting, start unlocking, complete unlocking, start lightweight lock waiting, complete lightweight lock waiting. |
| is_slow_sql | boolean | Whether the SQL statement is a slow SQL statement. |
| trace_id | text | Driver-specific trace ID, associated with an application request. |
| advise | text | Risks that may cause slow SQL statements; multiple risks may exist at the same time. "Cast Function Cause Index Miss.": index matching may fail due to implicit type conversion. "Limit too much rows.": execution may slow down due to a large LIMIT value. "Proleakproof of function is false.": the function's proleakproof attribute is false, so the planner does not use statistics (to avoid data leakage risks), which can reduce plan accuracy and slow down execution. |
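The details column above is binary and only useful after decoding. The sketch below decodes the lock-event list for recent slow statements via pg_catalog.statement_detail_decode (described in Other Functions); the 'plaintext' format argument, the pretty flag, and the dbe_perf.statement_history view name are assumptions to verify against your version's function reference.

```sql
-- Decode the binary lock-event list for recent slow SQL statements.
-- Assumed signature: statement_detail_decode(detail, format, pretty);
-- see "Other Functions" for the authoritative definition.
SELECT unique_query_id,
       start_time,
       pg_catalog.statement_detail_decode(details, 'plaintext', true) AS lock_events
FROM dbe_perf.statement_history
WHERE is_slow_sql
ORDER BY start_time DESC
LIMIT 10;
```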
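The four net_* columns store text of the form {"time":xxx, "n_calls":xxx, "size":xxx}. If your build supports casting text to json and the ->> field-extraction operator, the individual fields can be pulled out for per-statement network analysis; this is a hedged sketch, not a documented usage pattern, and the dbe_perf.statement_history name is again an assumption.

```sql
-- Extract network-overhead fields from the JSON-formatted net_* columns.
-- Assumes text-to-json casting and the ->> operator are available.
SELECT unique_query_id,
       (net_send_info::json ->> 'time') AS send_time_us,
       (net_send_info::json ->> 'size') AS send_bytes,
       (net_recv_info::json ->> 'time') AS recv_time_us
FROM dbe_perf.statement_history
ORDER BY start_time DESC
LIMIT 10;
```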