
Write Performance Deteriorates Occasionally When a Large Number of Concurrent Updates Are Performed During Long Query Execution

Symptom

During a long query that performs a full table scan, a large number of concurrent updates occur on pages that have not been scanned yet. As a result, the write performance of some DML statements occasionally deteriorates.

Analysis

Consider a long query (for example, one running for more than two hours) that performs a full table scan. Before a page is reached by the scan, a large number of concurrent updates (for example, more than 100,000) may accumulate on it. When the page is eventually scanned, a large number of historical versions must be traversed to obtain the visible tuples under the MVCC mechanism. Because a page read lock is held for the duration of the single-page scan, any write to the page during this time is blocked until the read of the tuples on the page is complete.
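The following two-session sequence is a minimal sketch of the scenario described above; the table, column, and filter values are hypothetical, and real reproduction requires a Ustore table under sustained update pressure:

-- Ustore table (storage_type option per GaussDB; verify for your version).
CREATE TABLE t_orders (order_id int, amount int, note text) WITH (storage_type = ustore);

-- Session 1: a long query that performs a full table scan (names are hypothetical).
SELECT count(*) FROM t_orders WHERE note LIKE '%refund%';

-- Session 2, running concurrently and repeatedly: hot updates on a page
-- that session 1 has not scanned yet.
UPDATE t_orders SET amount = amount + 1 WHERE order_id = 42;

-- When session 1 eventually scans the page holding order_id = 42, it must
-- traverse the accumulated historical versions while holding the page read
-- lock, so session 2's next write to that page waits on the lock (observed
-- as a BufferContentLock wait event).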

Troubleshooting

  1. Based on slow SQL alarms and the statement_history view, check whether there are long queries and DML statements that were canceled due to timeout.
  2. For the canceled DML statements found in 1, use the statement_detail_decode system function to parse the details field of their statement_history records and obtain the wait events, as shown in the sketch after this list. If the top wait event is BufferContentLock, there is a high probability that this problem has occurred.
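The following query is a minimal sketch of steps 1 and 2 combined; it assumes statement_history is accessed through the dbe_perf schema, that the listed column names match your GaussDB version, and that statement_detail_decode takes the details field, an output format, and a pretty-print flag:

-- Decode wait events of recently finished long statements (the 1-hour
-- window and 20-row limit are arbitrary example values).
SELECT db_name, start_time, finish_time, query,
       statement_detail_decode(details, 'plaintext', true) AS wait_events
FROM dbe_perf.statement_history
WHERE finish_time > now() - interval '1 hour'
ORDER BY finish_time - start_time DESC
LIMIT 20;

If BufferContentLock appears as the top wait event in the decoded output for the canceled DML statements, this problem is the likely cause.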

Solution

Prevention: Do not run long full-table-scan queries on tables that are updated with high concurrency. You are advised to run such long queries on a standby node.

Handling: Use slow SQL alarms to check whether this scenario has been triggered. You can interrupt the long queries to prevent continuous impact on services, as sketched below.
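As a sketch of the handling step, the following locates sessions whose current query has been running for a long time and cancels one of them; the 2-hour threshold is an example value, and choosing pg_cancel_backend (cancel the statement) versus pg_terminate_backend (end the session) depends on your operational policy:

-- Find active sessions whose current query has run for more than 2 hours.
SELECT pid, usename, query_start, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '2 hours';

-- Cancel the long query in the chosen session (replace 12345 with the pid
-- returned by the query above).
SELECT pg_cancel_backend(12345);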