Table Bloat Causes Slow SQL Queries and Data Loading Failures on the GUI
Symptom
An SQL query runs in the database for a long time and does not return results within 20 seconds. As a result, the GUI request times out and charts cannot be displayed to users.
Possible Causes
- Data in a large number of tables is frequently inserted, deleted, and updated but is not cleared in a timely manner. As a result, a large amount of dirty data accumulates, the table data size bloats, and queries become slow.
- The memory parameters are configured improperly.
Cause Analysis
- Confirm with the customer which services are slow, obtain the slow SQL statements, and print their execution plans. Most of the execution time is spent on index scans, which points to possible I/O contention. However, after the I/O is monitored, no resource bottleneck is found.
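For example, an execution plan with run-time statistics can be printed as follows (the table name t_report and the filter column report_date are hypothetical placeholders used for illustration):
-- Print the actual execution plan of a suspected slow statement (hypothetical table and column).
EXPLAIN ANALYZE SELECT * FROM t_report WHERE report_date > '2023-01-01';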
- Query the active SQL statements. A large number of CREATE INDEX statements are found. Confirm with the customer whether this service behavior is expected.
SELECT * FROM pg_stat_activity WHERE state != 'idle' AND usename != 'Ruby';
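If needed, the running time of each active statement can also be checked. The query below is a sketch that assumes the PostgreSQL-style pg_stat_activity columns (query_start and query), which may differ between kernel versions:
-- List active statements ordered by how long they have been running.
SELECT usename, state, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY duration DESC;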
- According to the execution plan, statement execution takes a long time on some DNs, which suggests possible data skew. However, checking the suspect tables with the following function shows that no table skew occurs.
SELECT table_skewness('table name');
- Contact the O&M personnel to log in to the cluster instance and check the memory parameters. The parameters are configured improperly and need to be adjusted.
- The total memory size of a single node is 256 GB.
- The value of max_process_memory is 12 GB, which is too small.
- The value of shared_buffers is 32 MB, which is too small.
- The value of work_mem is 64 MB for both CNs and DNs.
- The value of max_active_statements is -1. (The number of concurrent statements is not limited.)
- Adjust the parameters as follows:
gs_guc set -Z coordinator -Z datanode -N all -I all -c "max_process_memory=25GB"
gs_guc set -Z coordinator -Z datanode -N all -I all -c "shared_buffers=8GB"
gs_guc set -Z coordinator -Z datanode -N all -I all -c "work_mem=128MB"
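The gs_guc set command writes the configuration, and a cluster restart is typically required before max_process_memory and shared_buffers take effect. Assuming these GUC names can be read back with SHOW, as in PostgreSQL, the new values can be verified from any database connection:
-- Verify the adjusted memory parameters after the restart.
SHOW max_process_memory;
SHOW shared_buffers;
SHOW work_mem;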
- It is found that the table data has bloated severely. After VACUUM FULL is performed on an 8 GB table that contains only 50,000 records, the table size decreases to 5.6 MB.
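As a sketch of how bloated tables can be located in advance, the dead-tuple statistics can be inspected, assuming the PostgreSQL-style pg_stat_user_tables view is available:
-- Tables with the most dead tuples are the most likely to be bloated.
SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;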
Handling Procedure
- Perform the VACUUM FULL operation on large tables to clear dirty data (see the example after this list).
- Configure GUC memory parameters.
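A minimal sketch of the cleanup step, using the hypothetical table name my_schema.big_table (VACUUM FULL takes an exclusive lock on the table, so run it in a maintenance window):
-- Rebuild the table to reclaim space occupied by dirty data, then refresh statistics.
VACUUM FULL my_schema.big_table;
ANALYZE my_schema.big_table;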