Optimizing the Parameters of a Job for Migrating Data from MySQL to Doris
Optimizing Source Parameters
Optimizing Data Extraction from MySQL
You can click Add Custom Attribute in the Configure Task area to add MySQL synchronization parameters.

The following tuning parameters are available.
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| scan.incremental.snapshot.backfill.skip | boolean | true | Whether to skip reading Binlog data during the snapshot backfill. The default value is true. Skipping Binlog reads effectively reduces memory usage, but provides only an at-least-once guarantee. |
| scan.incremental.snapshot.chunk.size | int | 50000 | Shard size, which determines the maximum number of records in a single shard and therefore the number of shards in the full migration phase. The larger the shard size, the more records per shard and the fewer shards overall. If a table has a very large number of records, the job divides it into many shards, which occupy too much memory; to avoid this, adjust the value based on the number of records in the table. If scan.incremental.snapshot.backfill.skip is false, the migration job caches the data of each shard, so a larger shard occupies more memory and can cause memory overflow; reduce the shard size in that case. |
| scan.snapshot.fetch.size | int | 1024 | Maximum number of records extracted from the MySQL database in a single request during full data extraction. Increasing this value reduces the number of requests sent to MySQL and improves performance. |
| debezium.max.queue.size | int | 8192 | Maximum number of records held in the data cache queue. The default value is 8192. If a single record in the source table is large (for example, 1 MB), caching too many records can cause memory overflow; reduce this value in that case. |
| debezium.max.queue.size.in.bytes | int | 0 | Maximum size, in bytes, of the data cache queue. The default value 0 means the queue is bounded by record count rather than byte size. If debezium.max.queue.size cannot effectively limit memory usage, set this parameter explicitly to cap the size of cached data. |
| jdbc.properties.socketTimeout | int | 300000 | Socket timeout, in milliseconds, for connections to the MySQL database in the full migration phase. The default value is 5 minutes. If the MySQL database is overloaded and the job reports a SocketTimeout exception, increase this value. |
| jdbc.properties.connectTimeout | int | 60000 | Connection timeout, in milliseconds, for connections to the MySQL database in the full migration phase. The default value is 1 minute. If the MySQL database is overloaded and the job reports a ConnectTimeout exception, increase this value. |
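As a rough illustration of how scan.incremental.snapshot.chunk.size drives the shard count described above, the sketch below estimates the number of shards a table produces in the full migration phase. This is a hypothetical Python helper for back-of-the-envelope sizing, not part of the migration tooling; `estimate_shards` and its signature are assumptions.

```python
def estimate_shards(total_rows: int, chunk_size: int = 50000) -> int:
    """Estimate how many shards a table is split into during full migration.

    Each shard holds at most chunk_size records, so the shard count is the
    ceiling of total_rows / chunk_size.
    """
    return -(-total_rows // chunk_size)  # ceiling division without math.ceil

# A 10-million-row table with the default chunk size yields 200 shards.
# Doubling the chunk size halves the shard count, but each shard then holds
# more rows and occupies more memory when backfill caching is enabled
# (scan.incremental.snapshot.backfill.skip = false).
print(estimate_shards(10_000_000))           # 200
print(estimate_shards(10_000_000, 100_000))  # 100
```

This makes the trade-off concrete: a larger chunk size means fewer shards to track, while a smaller one keeps per-shard memory low when shard data must be cached.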
Optimizing Destination Parameters
You can modify write parameters in the Doris destination configuration, or click View and Edit in the advanced configuration to add advanced attributes.

| Parameter | Type | Default Value | Unit | Description |
|---|---|---|---|---|
| sink.properties.format | string | json | - | Data format used by Stream Load. The value can be json or csv. Using the CSV format together with compression can improve the write rate. However, CSV is not recommended for the following Doris versions, where known open-source issues can cause write exceptions for special characters: 1.2, 2.0.x (x < 14), 2.1.x (x < 6), and 3.0.x (x < 1). |
| sink.properties.Content-Encoding | string | - | - | Compression format of the HTTP message body. Currently, only CSV payloads can be compressed, and only the gzip format is supported. |
| sink.properties.compress_type | string | - | - | File compression format. Currently, only CSV files can be compressed. The gz, lzo, bz2, lz4, lzop, and deflate formats are supported. |
| doris.sink.flush.tasks | int | 1 | - | Number of concurrent flush tasks per TaskManager. When resources are sufficient, increase this value to improve the write rate. |
| sink.batch.interval | string | 1s | h/min/s | Interval at which the asynchronous thread writes buffered data. If the source produces a large amount of data, increase this value (for example, to 30s) to reduce database I/O. |
| sink.batch.size | int | 20000 | - | Maximum number of rows written (inserted, updated, or deleted) in a single batch. If the source produces a large amount of data, increase this value (for example, to 50000) to reduce database I/O. |
| sink.batch.bytes | int | 10485760 | bytes | Maximum number of bytes written (inserted, updated, or deleted) in a single batch. If the source produces a large amount of data, increase this value (for example, to 50485760) to reduce database I/O. |
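The three sink.batch.* parameters act as complementary flush triggers. The sketch below models the common batching pattern in which a buffered write is flushed as soon as any one threshold (row count, byte size, or elapsed interval) is reached. This is an illustrative Python model under that assumption; `BatchConfig` and `should_flush` are hypothetical names, not connector APIs.

```python
from dataclasses import dataclass

@dataclass
class BatchConfig:
    """Defaults mirror the documented sink.batch.* values."""
    batch_size: int = 20000        # sink.batch.size (rows)
    batch_bytes: int = 10485760    # sink.batch.bytes (10 MiB)
    interval_s: float = 1.0        # sink.batch.interval (1s)

def should_flush(rows: int, buffered_bytes: int,
                 elapsed_s: float, cfg: BatchConfig) -> bool:
    """Flush when the first of the three thresholds is hit."""
    return (rows >= cfg.batch_size
            or buffered_bytes >= cfg.batch_bytes
            or elapsed_s >= cfg.interval_s)

cfg = BatchConfig()
print(should_flush(20000, 0, 0.0, cfg))      # True  (row limit reached)
print(should_flush(100, 1024, 0.5, cfg))     # False (no threshold reached)
print(should_flush(0, 10485760, 0.0, cfg))   # True  (byte limit reached)
```

Under this model, raising any single limit only reduces I/O if the other two limits are not hit first; for example, increasing sink.batch.size beyond what sink.batch.bytes allows per batch has no effect.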