Slow SQL Queries After a Large Amount of Data Is Deleted from a Large Table
Scenario
After a large amount of wide-column data (each record about 1 GB long) is deleted at a time, a subsequent INSERT, DELETE, UPDATE, or SELECT operation on the same table takes an extended period of time. The problem disappears on its own after about 20 minutes.
Problem Reproduction
- Assume that max_allowed_packet is set to 1073741824 (1 GB) so that each record of about 1 GB can be inserted in a single statement.
- Create a table.
CREATE TABLE IF NOT EXISTS zstest1 ( id int PRIMARY KEY not null, c_longtext LONGTEXT );
- Insert data into the table.
insert into zstest1 values(1, repeat('a', 1073741800));
insert into zstest1 values(2, repeat('a', 1073741800));
insert into zstest1 values(3, repeat('a', 1073741800));
insert into zstest1 values(4, repeat('a', 1073741800));
insert into zstest1 values(5, repeat('a', 1073741800));
insert into zstest1 values(6, repeat('a', 1073741800));
insert into zstest1 values(7, repeat('a', 1073741800));
insert into zstest1 values(8, repeat('a', 1073741800));
insert into zstest1 values(9, repeat('a', 1073741800));
insert into zstest1 values(10, repeat('a', 1073741800));
- Delete data from the table.
delete from zstest1;
- Execute a query statement.
select id from zstest1;  -- The execution is slow; see the check below.
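While the SELECT statement is hanging, the wait can be confirmed from a second session. This diagnostic step is not part of the original reproduction, and the exact thread states and output depend on the MySQL version.
SHOW PROCESSLIST;            -- the SELECT shows up as a running statement, not as waiting for row or metadata locks
SHOW ENGINE INNODB STATUS\G  -- the SEMAPHORES section lists threads currently waiting on InnoDB rw-latches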
Possible Causes
After the DELETE statement finishes, the background purge thread physically removes the records that were only marked as deleted. Because a large amount of data has to be purged, the purge thread holds the SX lock on the root page of the index while it traverses and frees the affected pages. As a result, the subsequent SELECT statement cannot acquire the RW lock on the root page and keeps waiting until the purge completes.
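The purge progress can be observed while the problem lasts. The following checks are a diagnostic sketch assuming MySQL 5.7 or 8.0 with the default InnoDB metrics; the history list length shrinks as the purge thread catches up.
SHOW ENGINE INNODB STATUS\G  -- check "History list length" in the TRANSACTIONS section
SELECT * FROM information_schema.INNODB_METRICS WHERE NAME = 'trx_rseg_history_len';  -- the same counter, enabled by default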
Solution
- This phenomenon is expected. Once the background purge completes, performance returns to normal on its own.
- Scale up the instance specifications so that the purge finishes faster.
- Do not delete a large amount of data in a single statement; delete it in smaller batches instead. To remove all data from a table, use the TRUNCATE TABLE statement. See the sketch after this list.
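The following is a minimal sketch of the last recommendation, reusing the zstest1 table from the reproduction steps. The WHERE condition and batch size are placeholders, not values from this case.
-- Remove all rows: TRUNCATE drops and recreates the table instead of leaving every row to the purge thread.
TRUNCATE TABLE zstest1;
-- Remove only part of the data: delete it in small batches so that each purge pass stays short.
-- Repeat the statement (or loop over it in application code) until it affects 0 rows.
DELETE FROM zstest1 WHERE id <= 1000000 LIMIT 1000;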