High Table Fragmentation Rate
High table fragmentation rate is a common problem in TaurusDB instances. Table fragmentation means that table data and indexes are scattered across physical blocks that are discontinuous or contain free space, so the on-disk storage of table data and indexes is not optimal.
This problem is caused by operations (such as deletion, update, and insertion) on table data. If data rows in tables are continuously modified and moved, data segments in the tables become discontinuous.
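The effect described above can be reproduced with a minimal sketch (the table name and column definitions below are hypothetical, chosen only for illustration):

```sql
-- Hypothetical example: deleting rows leaves reusable free space
-- inside the tablespace instead of shrinking the file on disk.
CREATE TABLE demo_frag (id INT PRIMARY KEY, payload VARCHAR(200));
-- ... load a large number of rows ...
DELETE FROM demo_frag WHERE id % 2 = 0;  -- remove every other row
-- data_free for demo_frag now grows in information_schema.tables,
-- while data_length stays roughly the same: the deleted rows'
-- space is fragmented free space within the table's pages.
```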
Impact and Risk
- Tablespace bloat
High table fragmentation rate causes a large amount of unused space in an instance. It is a waste of space.
- Poor query optimization
If the table fragmentation rate is too high, the optimizer cannot correctly and effectively use indexes, affecting execution plan selection and degrading the query performance.
- Slow SQL execution
If the table fragmentation rate is too high, extra time is required for I/O scanning and defragmentation when SQL statements are executed. As a result, the query and update operations are slow and the response time is prolonged.
Checking Table Fragments
- Run the following SQL statement to check all tables where fragments have been generated:
SELECT table_schema, table_name, data_free, data_length FROM information_schema.tables WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys') AND data_free > 0;
- Run the following SQL statement to check the fragments of a specified table:
SELECT table_name, data_length, data_free FROM information_schema.tables WHERE table_schema = 'database_name' AND table_name = 'table_name';
Table 1 Parameters

Parameter       Description
table_schema    Database name
table_name      Table name
data_length     Size of data stored in the table, in bytes
data_free       Size of free space in the table (the size of fragments), in bytes
Frequently deleting or modifying table data can leave a lot of free space behind. Generally, you can estimate the fragmentation rate from the ratio of data_free to data_length.
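The ratio above can be computed directly in SQL. A sketch of such a query follows; the 30% cutoff is an illustrative assumption, not an official TaurusDB threshold:

```sql
-- List tables whose estimated fragmentation rate exceeds 30%.
-- The 30 cutoff is an illustrative value; tune it for your workload.
SELECT table_schema,
       table_name,
       data_free,
       data_length,
       ROUND(data_free / data_length * 100, 2) AS frag_pct
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
  AND data_length > 0
  AND data_free / data_length * 100 > 30
ORDER BY frag_pct DESC;
```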
Clearing Fragments (Reclaiming Space)
To solve the problem of high table fragmentation rate, you are advised to periodically analyze fragments of frequently accessed tables, clear the fragments, and optimize tablespaces to improve performance. Do not perform defragmentation too frequently (for example, every hour or every day). Generally, performing it once a week or once a month is sufficient, depending on site requirements.
The OPTIMIZE TABLE statement can be used to release tablespaces and reorganize physical pages of table data and indexes of TaurusDB instances. This results in reduced table space usage and improved read and write performance.
OPTIMIZE TABLE <table_name>;

- OPTIMIZE TABLE uses online DDL for regular and partitioned InnoDB tables, which reduces downtime for concurrent DML operations.
- OPTIMIZE TABLE using online DDL is not supported for InnoDB tables that contain FULLTEXT indexes. The table-copy algorithm is used instead.
- OPTIMIZE TABLE briefly locks the table. The overall execution time depends on the table size. Generally, the execution takes a long time and occupies many resources (reserve storage space of about 1.5 times the size of the table to be optimized). To avoid impact on your workloads, you are advised to optimize a table during off-peak hours.
- An online DDL operation may briefly require an exclusive metadata lock on the table during its execution phase. Consequently, a transaction holding a metadata lock can block an online DDL operation, whether the transaction acquired the lock before or during the operation. A long-running transaction, whether active or idle, can cause an online DDL operation to time out.
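A typical defragmentation session based on the steps above might look like the following sketch. The database and table names are hypothetical; for InnoDB tables, OPTIMIZE TABLE reports the note "Table does not support optimize, doing recreate + analyze instead", which is expected and still means the rebuild succeeded:

```sql
-- Check free space before reclaiming it (names are hypothetical).
SELECT data_length, data_free FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'demo_frag';

-- Rebuild the table and its indexes to reclaim fragmented space.
OPTIMIZE TABLE demo_frag;

-- data_free should now be close to 0 for the rebuilt table.
SELECT data_length, data_free FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'demo_frag';
```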