Usage Scenarios
Scenario Description
When you need to insert a large volume of data into a database, batch insertion is much more efficient than executing SQL statements one at a time.
This section illustrates various operations using the JDBC driver, including establishing database connections, utilizing transactions, executing batch insertion, and obtaining column information in the result set.
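As a minimal sketch of two of these operations, the snippet below establishes a connection and reads column information from a result set through ResultSetMetaData. The connection URL, user, and query are illustrative placeholders; replace them with the values for your own GaussDB instance and driver.

```java
import java.sql.*;

public class ColumnInfoDemo {
    // Hypothetical connection settings; substitute your own instance details.
    static final String URL = "jdbc:postgresql://127.0.0.1:8000/postgres";
    static final String USER = "dbuser";
    static final String PASSWORD = "password";

    // Pure helper that formats one column description; independent of JDBC.
    static String describeColumn(String name, String typeName, int size) {
        return name + " " + typeName + "(" + size + ")";
    }

    // Runs a query and prints name, type, and display size for each result column.
    public static void printColumns(Connection conn, String sql) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            ResultSetMetaData meta = rs.getMetaData();
            for (int i = 1; i <= meta.getColumnCount(); i++) {
                System.out.println(describeColumn(
                        meta.getColumnName(i),
                        meta.getColumnTypeName(i),
                        meta.getColumnDisplaySize(i)));
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD)) {
            printColumns(conn, "SELECT * FROM t1");
        }
    }
}
```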
Trigger Conditions
The JDBC driver adds SQL statements to a batch execution list through the addBatch API and then executes the entire batch through the executeBatch API.
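The addBatch/executeBatch flow can be sketched as follows. The table name `t1` and its `name` column are assumptions for illustration; executeBatch returns per-statement update counts, which are summed by a small helper.

```java
import java.sql.*;
import java.util.List;

public class BatchInsertDemo {
    // Pure helper: totals the per-statement update counts returned by executeBatch.
    static int sumUpdateCounts(int[] counts) {
        int total = 0;
        for (int n : counts) total += n;
        return total;
    }

    // Queues one INSERT per value with addBatch, then sends them all at once.
    // Table t1 (name varchar) is a hypothetical example.
    public static int insertAll(Connection conn, List<String> names) throws SQLException {
        String sql = "INSERT INTO t1 (name) VALUES (?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();                      // add to the batch execution list
            }
            return sumUpdateCounts(ps.executeBatch()); // execute the whole batch
        }
    }
}
```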
Impact on Services
- Lower network interaction costs
Combining multiple INSERT statements into one batch operation significantly reduces the number of round trips between the client and the database. This enhances the overall throughput and minimizes the impact of network congestion on performance.
- Higher data processing efficiency
In single-record insertion, the database must parse the syntax and generate an execution plan for each SQL statement. In contrast, batch insertion requires parsing the syntax and generating the plan only once, eliminating repetitive tasks and saving CPU cycles and memory allocation time.
- Reduced system resource usage and overhead
In single-record insertion, every record incurs at least one transaction commit or Xlog write. In contrast, batch insertion allows multiple records to be inserted within a single transaction, significantly reducing the frequency of transaction commits, the pressure on Xlog, and transaction management overhead. It also decreases the total amount of network packet processing, transaction management, log writing, and row format conversion work, which lowers the CPU load and temporary memory usage of the database server. This leaves more resources available for core query and computing operations.
- Higher memory usage
When large data sizes are involved, constructing SQL statements for batch insertion can significantly increase memory usage. This is particularly noticeable when you construct SQL statements through string concatenation, as it can lead to a sharp rise in memory consumption. Large-size batch processing may exceed the maximum SQL length limit of the database or driver, or trigger other parameter restrictions, potentially leading to errors or performance issues.
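To keep memory bounded while still getting the benefits above, a common approach is to run one transaction but flush the batch every fixed number of rows. This is a sketch under assumed names (table `t1`, integer column `v`); the flushCount helper shows how many executeBatch calls a given load produces.

```java
import java.sql.*;
import java.util.List;

public class ChunkedBatchDemo {
    // Pure helper: number of executeBatch flushes for `total` rows at `batchSize` per flush.
    static int flushCount(int total, int batchSize) {
        return (total + batchSize - 1) / batchSize;
    }

    // Inserts all values in one transaction, flushing every batchSize rows
    // so the pending batch never grows unboundedly on the client.
    public static void insertChunked(Connection conn, List<Integer> values, int batchSize)
            throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement("INSERT INTO t1 (v) VALUES (?)")) {
            int pending = 0;
            for (int v : values) {
                ps.setInt(1, v);
                ps.addBatch();
                if (++pending == batchSize) {       // flush before the batch gets too large
                    ps.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) ps.executeBatch();     // flush the remainder
            conn.commit();                          // single commit for the whole load
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```

Choosing the flush size is a trade-off: larger batches mean fewer round trips but more client memory and a higher risk of hitting SQL length or driver limits.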
Here is a detailed comparison between batch insertion and single-record insertion.
| Mode | Advantages | Disadvantages |
|---|---|---|
| Single-record insertion | Simple to implement; a failure affects only the current record; low client memory usage | Each record requires its own network round trip, syntax parsing, plan generation, and transaction commit, resulting in low throughput for large data volumes |
| Batch insertion | Fewer network round trips; SQL is parsed and planned only once; fewer transaction commits and less Xlog pressure; lower CPU load and memory overhead on the database server | Constructing large batches increases client memory usage; oversized batches may exceed the maximum SQL length or other limits of the database or driver |
Applicable Versions
This applies only to GaussDB 503.1.0 and later versions.