
Optimizing the Parameters of a Job for Migrating Data from Oracle to Doris

Optimizing Source Parameters

Optimization of data extraction from Oracle

You can click Add Custom Attribute in the Configure Task area to add Oracle synchronization parameters.

Figure 1 Adding custom attributes

The following tuning parameters are available.

Table 1 Tuning parameters for full data synchronization

| Parameter | Type | Default Value | Description |
| --- | --- | --- | --- |
| scan.snapshot.fetch.size | int | 1024 | Maximum number of records fetched from the Oracle database in a single request during full data extraction. Increasing this value reduces the number of requests sent to the Oracle database and can improve performance. |
| debezium.max.queue.size | int | 8192 | Maximum number of records the data cache queue can hold. If individual records in the source table are large (for example, 1 MB), caching too many records can cause memory overflow. In that case, reduce this value. |
| debezium.max.queue.size.in.bytes | int | 0 | Maximum size, in bytes, of the data cache queue. The default value 0 means the queue is limited by record count rather than by data size. If debezium.max.queue.size alone cannot effectively limit memory usage, set this parameter explicitly to cap the total size of cached data. |
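For example, if full extraction is slow, the source rows are small, and TaskManager memory is sufficient, you might raise the fetch size. The value below is illustrative, not a recommendation for every workload; each attribute is entered as a separate name/value pair in the Configure Task area.

scan.snapshot.fetch.size = 10000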

Table 2 Tuning parameters for incremental data synchronization

| Parameter | Type | Default Value | Description |
| --- | --- | --- | --- |
| debezium.max.queue.size | int | 8192 | Maximum number of records the data cache queue can hold. If individual records in the source table are large (for example, 1 MB), caching too many records can cause memory overflow. In that case, reduce this value. |
| debezium.max.queue.size.in.bytes | int | 0 | Maximum size, in bytes, of the data cache queue. The default value 0 means the queue is limited by record count rather than by data size. If debezium.max.queue.size alone cannot effectively limit memory usage, set this parameter explicitly to cap the total size of cached data. |
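Conversely, if individual source records are large, you can cap the cache by bytes instead of record count. For example, records of about 1 MB each at the default queue size of 8192 could cache up to 8 GB; the illustrative values below limit the queue to 1024 records and roughly 512 MB, whichever limit is reached first.

debezium.max.queue.size = 1024
debezium.max.queue.size.in.bytes = 536870912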

Optimizing Destination Parameters

You can modify write parameters in the Doris destination configuration, or click View and Edit in the advanced configuration to add advanced attributes.

Figure 2 Adding advanced attributes
Table 3 Parameters for optimizing Doris writing

| Parameter | Type | Default Value | Unit | Description |
| --- | --- | --- | --- | --- |
| sink.properties.format | string | json | - | Data format used by Stream Load. The value can be json or csv. Using the CSV format together with the compression parameters can improve the write rate. However, CSV is not recommended for the following Doris versions: 1.2, 2.0.x (x < 14), 2.1.x (x < 6), and 3.0.x (x < 1). In these versions, known open-source issues may cause write exceptions when the data contains special characters. |
| sink.properties.Content-Encoding | string | - | - | Compression format declared in the HTTP header message body. Currently, only CSV data can be compressed, and only the gzip format is supported. |
| sink.properties.compress_type | string | - | - | File compression format. Currently, only CSV data can be compressed. The gz, lzo, bz2, lz4, lzop, and deflate compression formats are supported. |
| doris.sink.flush.tasks | int | 1 | - | Number of concurrent flushes per TaskManager. Increase this value to improve the write rate when resources are sufficient. |
| sink.batch.interval | string | 1s | h/min/s | Interval at which the asynchronous write thread flushes data. If the source produces a large volume of data, increase this value (for example, to 30s) to reduce database I/O. |
| sink.batch.size | int | 20000 | - | Maximum number of rows written (inserted, updated, or deleted) per batch. If the source produces a large volume of data, increase this value (for example, to 50000) to reduce database I/O. |
| sink.batch.bytes | int | 10485760 | bytes | Maximum number of bytes written (inserted, updated, or deleted) per batch; the default is 10 MB. If the source produces a large volume of data, increase this value (for example, to 50485760) to reduce database I/O. |
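For example, on a high-volume job you might batch writes more aggressively by combining these settings. The values are illustrative: doris.sink.flush.tasks = 4 assumes spare TaskManager resources, and CSV with compression should only be enabled on Doris versions other than those listed above.

sink.properties.format = csv
sink.properties.compress_type = gz
doris.sink.flush.tasks = 4
sink.batch.interval = 30s
sink.batch.size = 50000
sink.batch.bytes = 50485760

The last value is roughly 48 MB, up from the 10 MB default.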