SQL Usage
Database SQL Query
- Optimize ORDER BY ... LIMIT statements with indexes to improve execution efficiency.
- If a statement contains ORDER BY, GROUP BY, or DISTINCT, ensure that the result set filtered by the WHERE condition contains at most 1,000 rows. Otherwise, the SQL statement executes slowly.
- For ORDER BY, GROUP BY, and DISTINCT statements, use indexes to retrieve data that is already sorted, for example, an index key(a,b) for WHERE a=1 ORDER BY b.
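For example, assuming a table t with a composite index on (a, b) (the names are illustrative), the sort can be satisfied directly from the index:
CREATE INDEX idx_a_b ON t (a, b);
SELECT a, b FROM t WHERE a = 1 ORDER BY b;   -- rows are read in index order, no extra sort step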
- When using JOIN, place the filter conditions in the WHERE clause on the same table so that a composite index on that table can be fully used.
select t1.a, t2.b from t1, t2 where t1.a = t2.a and t1.b = 123 and t2.c = 4;
Here the filter conditions t1.b=123 and t2.c=4 are on different tables, so only column b of the index (b,c) on t1 is used.
If t1.c and t2.c always hold the same value, for example because of field redundancy design (denormalization), changing t2.c=4 to t1.c=4 in the WHERE condition allows the complete index to be used.
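Rewritten query that uses the complete index (b,c) on t1:
select t1.a, t2.b from t1, t2 where t1.a = t2.a and t1.b = 123 and t1.c = 4;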
- If deduplication is not required, use UNION ALL instead of UNION.
Because UNION ALL does not deduplicate or sort the data, it runs faster than UNION. Prefer UNION ALL whenever deduplication is not required.
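For example, assuming two tables t1 and t2 with compatible columns (the names are illustrative):
SELECT id FROM t1 UNION ALL SELECT id FROM t2;   -- returns all rows from both tables without deduplication or sorting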
- When implementing pagination queries in code, return immediately if the total count is 0 instead of executing the subsequent pagination statements.
- Do not frequently execute COUNT on a table. COUNT on a table with a large amount of data takes a long time; the response time is generally measured in seconds. If you need to count rows of a table frequently, introduce a dedicated counting table.
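A minimal sketch of a dedicated counting table (table and column names are illustrative); the application updates the counter together with each insert or delete:
CREATE TABLE t_counter (name VARCHAR(64) PRIMARY KEY, cnt BIGINT NOT NULL DEFAULT 0);
UPDATE t_counter SET cnt = cnt + 1 WHERE name = 'orders';   -- maintained alongside the insert
SELECT cnt FROM t_counter WHERE name = 'orders';            -- fast replacement for COUNT(*)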
- If only one record is returned, use LIMIT 1. If the number of records in the result set is known, add LIMIT as early as possible.
- When evaluating the efficiency of DELETE and UPDATE statements, change them to equivalent SELECT statements and run EXPLAIN. A large number of slow statements will slow down the database, and write operations will lock tables.
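For example, to estimate the cost of DELETE FROM t WHERE status = 0 (the table and column names are illustrative), run the equivalent SELECT under EXPLAIN instead of executing the write:
EXPLAIN SELECT * FROM t WHERE status = 0;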
- TRUNCATE TABLE is faster and uses fewer system and log resources than DELETE. If the table does not have a trigger and all rows need to be deleted, TRUNCATE TABLE is recommended.
- TRUNCATE TABLE does not write deleted data to log files.
- A TRUNCATE TABLE statement has the same function as a DELETE statement without a WHERE clause.
- TRUNCATE TABLE statements cannot be written with other DML statements in the same transaction.
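For example, to empty a log table (the table name is illustrative):
TRUNCATE TABLE t_log;   -- recommended when all rows must be removed and the table has no trigger
-- DELETE FROM t_log;   -- same result, but deletes and logs row by row, much slower on large tables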
- Do not use negative queries; they cause full table scans. Negative queries use the following operators: NOT, !=, <>, NOT EXISTS, NOT IN, and NOT LIKE.
If a negative query is used, the index cannot be used to narrow down the search. Instead, the entire table must be scanned.
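For example, when the set of valid values is known (the column and values are illustrative), a negative condition can often be rewritten as a positive one that can use an index:
SELECT id FROM t WHERE status != 1;        -- negative query, full table scan
SELECT id FROM t WHERE status IN (0, 2);   -- equivalent positive query if status can only be 0, 1, or 2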
- Avoid using JOIN to join more than three tables. The data types of the fields to be joined must be the same.
- During multi-table join query, ensure that the associated fields have indexes. When joining multiple tables, select the table with a smaller result set as the driving table to join other tables. Pay attention to table indexes and SQL performance even if two tables are joined.
- To query ultra-large tables, you also need to comply with the following rules:
- To locate slow SQL statements, enable slow query logs.
- Do not perform operations on columns, for example, SELECT id FROM t WHERE age + 1 = 10. Any operation on a column, including database functions and calculation expressions, will cause table scans. Move the operation to the right of the equal sign (=), as shown below.
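For example (t is an illustrative table name):
SELECT id FROM t WHERE age + 1 = 10;   -- operation on the column, the index on age cannot be used
SELECT id FROM t WHERE age = 10 - 1;   -- operation moved to the right of =, the index on age can be used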
- Split larger statements into smaller and simpler statements to reduce lock time and avoid blocking the entire database.
- Do not use SELECT *.
- Change OR to IN. The complexity of OR is O(n), while the complexity of IN is O(log n). Keep the number of values in an IN list below 200.
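For example (t and num are illustrative names):
SELECT id FROM t WHERE num = 10 OR num = 20 OR num = 30;
-- rewritten with IN:
SELECT id FROM t WHERE num IN (10, 20, 30);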
- Avoid using stored procedures and triggers in applications.
- Avoid LIKE queries with a leading wildcard, such as LIKE '%xxx'; they cannot use indexes.
- Avoid using JOIN and try to query a single table whenever possible.
- Compare values of the same type, for example, '123' with '123' or 123 with 123.
- Avoid using the != or <> operators in the WHERE clause. Otherwise, the engine will not use indexes and instead scan the full table.
- For consecutive values, use BETWEEN instead of IN: SELECT id FROM t WHERE num BETWEEN 1 AND 5.
SQL Statement Development
- Split complex SQL statements into simple ones.
For example, in the OR condition f_phone='10000' or f_mobile='10000', each of the two fields has its own index, but only one of them can be used.
You can split the statement into two SQL statements or use UNION ALL, as shown below.
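For example, assuming the two columns are on a table t_user (the table name and the selected id column are illustrative), the OR condition can be rewritten with UNION ALL so that each branch uses its own index:
SELECT id FROM t_user WHERE f_phone = '10000'
UNION ALL
SELECT id FROM t_user WHERE f_mobile = '10000';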
- If possible, perform the complex SQL calculation or service logic at the service layer.
- Use a proper pagination method to improve pagination efficiency. Offset-based pagination that skips a large number of rows is not recommended.
Negative example: SELECT * FROM table1 ORDER BY ftime DESC LIMIT 10000,10;
This causes a large number of I/O operations because MySQL uses the read-ahead policy.
Positive example: SELECT * FROM table1 WHERE ftime < last_time ORDER BY ftime DESC LIMIT 10;
Recommended pagination method: pass the threshold value (the last ftime value of the previous page) to the next query.
- Execute UPDATE statements in transactions based on primary keys or unique keys. Otherwise, a gap lock is generated and the locked data range is expanded. As a result, system performance deteriorates and deadlocks may occur.
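For example (the table and column names are illustrative), updating by primary key inside a short transaction locks only the matched row:
START TRANSACTION;
UPDATE t_order SET status = 2 WHERE id = 12345;   -- primary-key condition; only the matched row is locked
COMMIT;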
- Do not use foreign keys and cascade operations. The problems of foreign keys can be solved at the application layer.
For example, if student_id is the primary key in the student table and a foreign key in the score table, updating student_id in the student table also updates student_id in the score table. This is a cascade update.
- Foreign keys and cascade updates are suitable for single-node clusters with low concurrency and are not suitable for distributed clusters with high concurrency.
- Cascade updates may cause severe blocking, and foreign keys affect INSERT operations.
- If possible, do not use IN. If it is required, ensure that the number of elements in the IN list is at most 500.
- To reduce interactions with the database, use batches of SQL statements, for example, INSERT INTO … VALUES (*),(*),(*)....(*);. Try to keep the number of * items below 100.
- Do not use stored procedures, which are difficult to debug, extend, and transplant.
- Do not use triggers, event schedulers, or views for service logic. The service logic must be processed at the service layer to avoid logical dependency on the database.
- Do not use implicit type conversion.
The conversion rules are as follows:
1. If at least one of the two parameters is NULL, the comparison result is also NULL. However, when <=> is used to compare two NULL values, 1 is returned.
2. If both parameters are character strings, they are compared as character strings.
3. If both parameters are integers, they are compared as integers.
4. If one parameter is a hexadecimal value and the other parameter is a non-numeric value, they are compared as binary strings.
5. If one parameter is a TIMESTAMP or DATETIME value and the other parameter is a CONSTANT value, they are compared as TIMESTAMP values.
6. If one parameter is a DECIMAL value and the other parameter is a DECIMAL or INTEGER value, they are compared as DECIMAL values. If the other parameter is a FLOATING POINT value, they are compared as FLOATING POINT values.
7. In all other cases, both parameters are compared as FLOATING POINT values.
- If one parameter is a character string and the other is an INT value, they are compared as FLOATING POINT values (see item 7).
For example, if the type of f_phone is varchar and f_phone in (098890) is used in the WHERE condition, the two parameters are compared as FLOATING POINT values. In this case, the index cannot be used, which affects database performance.
If the condition is written as f_user_id = '1234567', the quoted number is compared as a character string (see item 2).
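For example, quoting the value keeps the comparison between two character strings (item 2), so the index on f_phone can be used (the table name t is illustrative):
SELECT id FROM t WHERE f_phone IN ('098890');   -- string-to-string comparison, the index on f_phone is usable
-- avoid: WHERE f_phone IN (098890), which forces a FLOATING POINT comparison and a full table scan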
- Keep the number of SQL statements in a transaction as small as possible, no more than five. Long transactions lock data for a long time, generate many caches in MySQL, and occupy many connections.
- Do not use NATURAL JOIN.
NATURAL JOIN implicitly joins tables on columns with the same name, which is difficult to understand and may cause problems. NATURAL JOIN statements are also not portable.
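For example, the student and score tables mentioned above can be joined with an explicit condition instead of NATURAL JOIN (the name and score columns are illustrative):
SELECT s.name, c.score
FROM student s
JOIN score c ON s.student_id = c.student_id;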
- For tables with tens of millions or hundreds of millions of data records, you are advised to use the following methods to improve data write efficiency:
- Delete unnecessary indexes.
When data is updated, the index data is also updated. For tables with large amounts of data, avoid creating too many indexes as this can slow down the update process. Delete unnecessary indexes.
- Insert multiple data records in batches.
This is because batch insertion only requires a single remote request to the database.
Example:
insert into tb values(1,'value1');
insert into tb values(2,'value2');
insert into tb values(3,'value3');
After optimization:
insert into tb values(1,'value1'),(2,'value2'),(3,'value3');
- When inserting multiple data records, manually control transactions.
By manually controlling the transaction, multiple execution units can be merged into a single transaction, avoiding the overhead of multiple transactions while ensuring data integrity and consistency.
Example:
insert into table1 values(1,'value1'),(2,'value2'),(3,'value3');
insert into table2 values(4,'value1'),(5,'value2'),(6,'value3');
insert into table3 values(7,'value1'),(8,'value2'),(9,'value3');
After optimization:
start transaction;
insert into table1 values(1,'value1'),(2,'value2'),(3,'value3');
insert into table2 values(4,'value1'),(5,'value2'),(6,'value3');
insert into table3 values(7,'value1'),(8,'value2'),(9,'value3');
commit;
Having too many merged statements can lead to large transactions, which will lock the table for a long time. Evaluate service needs and control the number of statements in a transaction accordingly.
- When inserting data with primary keys, try to insert them in a sequential order of the primary keys. You can use AUTO_INCREMENT.
Inserting data in a random order of the primary keys can cause page splitting, which can negatively impact performance.
Example:
Inserting data in a random order of primary keys: 6 2 9 7 3
Inserting data in a sequential order of primary keys: 1 2 4 6 8
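A minimal sketch of a table whose primary key grows sequentially through AUTO_INCREMENT (table and column names are illustrative):
CREATE TABLE t_msg (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  content VARCHAR(255)
);
INSERT INTO t_msg (content) VALUES ('a'), ('b'), ('c');   -- ids 1, 2, 3 are assigned in order, no page splitting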
- Avoid using UUIDs or other natural keys, such as ID card numbers, as primary keys.
UUIDs generated each time are unordered, and inserting them as primary keys can cause page splitting, which can negatively impact performance.
- Avoid modifying primary keys during service operations.
Modifying primary keys requires modifying the index structure, which can be costly.
- Reduce the length of primary keys as much as possible.
- Do not use foreign keys to maintain relationships between tables. Maintain the relationships in application code instead.
- Separate read and write operations. Direct read requests to read replicas to avoid slow insertion caused by I/Os.