Case: Adjusting I/O Parameters to Reduce the Log Bloat Rate
- Parameter values before adjustment:
  - pagewriter_sleep=2000ms
  - bgwriter_delay=2000ms
  - max_io_capacity=500MB
- Parameter values after adjustment:
  - pagewriter_sleep=100ms
  - bgwriter_delay=1s
  - max_io_capacity=300MB
- max_io_capacity is set to a smaller value because the actual I/O usage never reached the previous maximum. This parameter caps the I/O usage of the backend write processes.
- Log recycling is triggered only after the number of retained Xlog segments reaches a threshold, calculated as wal_keep_segments + checkpoint_segments x 2 + 1. For example, if checkpoint_segments is set to 128 and wal_keep_segments is set to 128, the minimum retained log size is (128 + 128 x 2 + 1) x 16 MB ≈ 6 GB (see the sketch after this list).
- Before the adjustment, Xlogs bloat to varying, GB-level degrees during the TPC-C data export phase, depending on the data volume. The root cause is that dirty pages are not flushed to disk promptly, so the recovery point cannot advance and logs cannot be recycled in time. After the adjustment, the log bloat rate decreases significantly.
- Taking a 2000-warehouse TPC-C dataset as an example, the logs bloat by 10 GB during the data export phase before the adjustment. After the adjustment, the log size stays within the minimum Xlog size calculated from the parameter settings.
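The following is a minimal Python sketch (not part of the GaussDB tooling; the function name and the 16 MB segment-size constant are illustrative assumptions) that evaluates the recycling threshold formula described above:

```python
# Minimal sketch: compute the minimum retained Xlog size that accumulates
# before log recycling can start, using the formula from this case:
#   segments = wal_keep_segments + checkpoint_segments * 2 + 1
# The 16 MB segment size matches the value used in the calculation above.

XLOG_SEGMENT_MB = 16  # assumed Xlog segment size, as in the example

def min_retained_xlog_mb(wal_keep_segments: int, checkpoint_segments: int) -> int:
    """Return the minimum retained Xlog size in MB before recycling starts."""
    segments = wal_keep_segments + checkpoint_segments * 2 + 1
    return segments * XLOG_SEGMENT_MB

if __name__ == "__main__":
    size_mb = min_retained_xlog_mb(wal_keep_segments=128, checkpoint_segments=128)
    # (128 + 128 * 2 + 1) * 16 MB = 6160 MB, roughly 6 GB
    print(f"Minimum retained Xlog size: {size_mb} MB (~{size_mb / 1024:.1f} GB)")
```

Log growth well beyond this baseline during a load indicates that the recovery point is not advancing fast enough, which is the symptom the pagewriter_sleep and bgwriter_delay adjustments address.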