Updated on 2024-10-08 GMT+08:00

Downloading Error Logs and Slow Query Logs of a GaussDB Instance

GaussDB allows you to download slow query logs and error logs. Slow query logs help you locate slow SQL statement execution problems. Error logs help you locate instance problems.

Precautions

  • Ensure that the CNs and DNs of the instance are running properly.
  • Ensure that the IaaS network is normal.

Slow Query Logs

  1. Log in to the management console.
  2. Click the icon in the upper left corner and select a region and project.
  3. Click the icon in the upper left corner of the page and choose Databases > GaussDB.
  4. On the Instances page, click the name of the target instance to go to the Basic Information page.
  5. In the navigation pane on the left, choose Log Analysis.
  6. Click the Slow Query Logs tab. The system checks whether a slow query log task has been generated in the last 5 minutes and, if not, generates one. Click Download in the Operation column of the record whose status is Preparation completed.

    After the log is downloaded, you can analyze the log on the local PC.

    Figure 1 Downloading a slow query log

    Only logs generated in the last 12 hours are collected for slow query log analysis.

    Table 1 describes the fields contained in slow query logs.
    Table 1 Fields in slow query logs (field name, type, and description)

    • message_version (String): Log format version. The value is fixed at V1.
    • db_name (name): Database name.
    • schema_name (name): Schema name.
    • origin_node (integer): Node name.
    • user_name (name): Username.
    • application_name (text): Name of the application that sends the request.
    • client_addr (text): IP address of the client that sends the request.
    • client_port (integer): Port number of the client that sends the request.
    • unique_query_id (bigint): ID of the normalized SQL statement.
    • debug_query_id (bigint): ID of the unique SQL statement. Some statements are not unique. For example, the value of debug_query_id in the Parse packet, DCL statements, and TCL statements is 0.
    • query (text): Normalized SQL statement (available only on CNs). When track_stmt_parameter is enabled, complete SQL statements are displayed.
    • start_time (timestamp with time zone): Time when the statement starts.
    • finish_time (timestamp with time zone): Time when the statement ends.
    • slow_sql_threshold (bigint): Threshold for determining whether an SQL statement is slow.
    • transaction_id (bigint): Transaction ID.
    • thread_id (bigint): ID of the execution thread.
    • session_id (bigint): Session ID of a user.
    • n_soft_parse (bigint): Number of soft parses. The sum of n_soft_parse and n_hard_parse may be greater than n_calls because subqueries are not counted in n_calls.
    • n_hard_parse (bigint): Number of hard parses. The sum of n_soft_parse and n_hard_parse may be greater than n_calls because subqueries are not counted in n_calls.
    • query_plan (text): Statement execution plan.
    • n_returned_rows (bigint): Number of rows in the result set returned by a SELECT statement.
    • n_tuples_fetched (bigint): Number of rows randomly scanned.
    • n_tuples_returned (bigint): Number of rows sequentially scanned.
    • n_tuples_inserted (bigint): Number of rows inserted.
    • n_tuples_updated (bigint): Number of rows updated.
    • n_tuples_deleted (bigint): Number of rows deleted.
    • n_blocks_fetched (bigint): Number of buffer block accesses.
    • n_blocks_hit (bigint): Number of buffer block hits.
    • db_time (bigint): Valid DB time, accumulated across threads if multiple threads are involved (unit: microsecond).
    • cpu_time (bigint): CPU time (unit: microsecond).
    • execution_time (bigint): Execution time in the executor (unit: microsecond).
    • parse_time (bigint): SQL parsing time (unit: microsecond).
    • plan_time (bigint): SQL plan generation time (unit: microsecond).
    • rewrite_time (bigint): SQL rewriting time (unit: microsecond).
    • pl_execution_time (bigint): PL/pgSQL execution time (unit: microsecond).
    • pl_compilation_time (bigint): PL/pgSQL compilation time (unit: microsecond).
    • data_io_time (bigint): I/O time (unit: microsecond).
    • net_send_info (text): Network status of messages sent through a physical connection, including the time (in microseconds), number of calls, and throughput (in bytes). In a distributed database, CNs communicate with each other, with the client, and with DNs through physical connections. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}.
    • net_recv_info (text): Network status of messages received through a physical connection, including the time (in microseconds), number of calls, and throughput (in bytes). In a distributed database, CNs communicate with each other, with the client, and with DNs through physical connections. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}.
    • net_stream_send_info (text): Network status of messages sent through a logical connection, including the time (in microseconds), number of calls, and throughput (in bytes). In a distributed database, DNs of different shards communicate with each other through logical connections. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}.
    • net_stream_recv_info (text): Network status of messages received through a logical connection, including the time (in microseconds), number of calls, and throughput (in bytes). In a distributed database, DNs of different shards communicate with each other through logical connections. This column can be used to analyze the network overhead of SQL statements in a distributed system. Example: {"time":xxx, "n_calls":xxx, "size":xxx}.
    • lock_count (bigint): Number of locks.
    • lock_time (bigint): Time spent locking.
    • lock_wait_count (bigint): Number of lock waits.
    • lock_wait_time (bigint): Time spent waiting for locks.
    • lock_max_count (bigint): Maximum number of locks.
    • lwlock_count (bigint): Number of lightweight locks (reserved).
    • lwlock_wait_count (bigint): Number of lightweight lock waits.
    • lwlock_time (bigint): Time spent on lightweight locking (reserved).
    • details (bytea): List of wait events and statement lock events.

      When the record level is L0 or higher, the list of wait events is recorded, showing statistics about wait events on the current node. For details about key events, see Table: Waiting State List, Table: List of Wait Events Corresponding to Lightweight Locks, Table: List of I/O Wait Events, and Table: List of Wait Events Corresponding to Transaction Locks. You can also view the list of all events in the system in the wait_event_info view. For details about the impact of each transaction lock on services, see LOCK.

      When the record level is L2, the list of statement lock events is recorded in chronological order. The number of records is limited by the track_stmt_details_size parameter.

      This field is in binary format and must be parsed by using the pg_catalog.statement_detail_decode function. For details, see Table: statement_detail_decode Parameter Description.

      Events include:

      • Start locking.
      • Complete locking.
      • Start lock waiting.
      • Complete lock waiting.
      • Start unlocking.
      • Complete unlocking.
      • Start lightweight lock waiting.
      • Complete lightweight lock waiting.

    • is_slow_sql (boolean): Whether the SQL statement is a slow SQL statement.

      • t (true): yes.
      • f (false): no.

    • lwlock_wait_time (bigint): Time spent waiting for lightweight locks.

Error Logs

  1. Log in to the management console.
  2. Click the icon in the upper left corner and select a region and project.
  3. Click the icon in the upper left corner of the page and choose Databases > GaussDB.
  4. On the Instances page, click the name of the target instance to go to the Basic Information page.
  5. In the navigation pane on the left, choose Log Analysis.
  6. On the displayed page, click the Error Logs tab, enable Error Log Collection, and click Download in the Operation column of the record whose status is Preparation completed to download the error log file. After the log is downloaded, you can analyze the log on the local PC.

    Error logs are stored in the gs_log directory and are named in the format gaussdb-creation_time.log. The default format of each row of logs in the log file is as follows: Date+Time+Node name+Username+Database name+IP address+Session ID+Transaction ID+Application name+Log level+Log content
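    Given the default row format above, a downloaded error log line can be split into its fields locally. This is a minimal sketch under the assumption that the first ten fields are whitespace-separated and the log content is the remainder of the line; the sample line and its values are hypothetical, and real lines may vary, so treat this as a starting point rather than a definitive parser.

    ```python
    # Field order follows the default row format: Date, Time, Node name,
    # Username, Database name, IP address, Session ID, Transaction ID,
    # Application name, Log level, then the log content.
    FIELDS = ["date", "time", "node", "user", "database", "ip",
              "session_id", "transaction_id", "application", "level"]

    def parse_error_log_line(line):
        """Split one error log line into a dict; the tail is the log content."""
        parts = line.split(None, len(FIELDS))
        record = dict(zip(FIELDS, parts))
        record["content"] = parts[len(FIELDS)] if len(parts) > len(FIELDS) else ""
        return record

    # Hypothetical sample line matching the assumed format.
    line = ("2024-10-08 10:15:30.123 dn_6001 omm postgres 192.168.0.10 "
            "140218 0 gsql ERROR relation \"t1\" does not exist")
    rec = parse_error_log_line(line)
    print(rec["level"], "-", rec["content"])
    ```
    
    
    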
    Table 2 Parameters of error logs (field and description)

    • Date: Date when the log is generated. The format is yyyy-mm-dd.
    • Time: Time when the log is generated. The format is hh:mm:ss:ms.
    • Node name: Node on which the error is reported.
    • Username: Username of the database user who triggered log generation.
    • Database name: Name of the database that triggered log generation.
    • IP address: IP address of the client that triggered log generation.
    • Thread ID: Thread ID.
    • Session ID: ID of the session that triggered log generation.
    • Transaction ID: Transaction ID. The value 0 indicates that no transaction ID is assigned.
    • Thread name: Thread name.
    • Query ID: ID of a query initiated by a user, recorded in the background.
    • Module name: Module name.
    • Log level: Log level, such as FATAL, ERROR, or LOG. Different levels indicate different severities.
    • Log content: Log content.
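    The Log level field in Table 2 is useful for a quick health summary of a downloaded error log file. The sketch below tallies lines by level, assuming the default row format in which the log level is the tenth whitespace-separated field; the sample lines are hypothetical, and the field index should be adjusted if your log format differs.

    ```python
    from collections import Counter

    def count_by_level(lines, level_index=9):
        """Count error log lines by log level (FATAL, ERROR, LOG, ...)."""
        counts = Counter()
        for line in lines:
            parts = line.split()
            if len(parts) > level_index:
                counts[parts[level_index]] += 1
        return counts

    # Hypothetical sample lines matching the assumed default format.
    sample_lines = [
        "2024-10-08 10:15:30.123 dn_6001 omm postgres 192.168.0.10 140218 0 gsql ERROR syntax error",
        "2024-10-08 10:15:31.004 dn_6001 omm postgres 192.168.0.10 140218 0 gsql LOG connection received",
        "2024-10-08 10:16:02.990 cn_5001 omm postgres 192.168.0.11 140219 0 gsql FATAL database does not exist",
    ]
    print(count_by_level(sample_lines))
    ```

    A spike in FATAL or ERROR counts narrows down the time window to inspect in detail before reading individual log lines.
    
    
    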