Updated on 2025-09-18 GMT+08:00

Configuring a Job for Synchronizing Data from MySQL to MRS Doris

Supported Source and Destination Database Versions

Table 1 Supported database versions

  • Source database: MySQL 5.6, 5.7, and 8.x
  • Destination database: Doris 1.2 and 2.0

Database Account Permissions

Before you use DataArts Migration for data synchronization, ensure that the source and destination database accounts meet the requirements in the following table. The required account permissions vary depending on the synchronization task type.

Table 2 Database account permissions

  • Source database connection account: The account must have the minimum permissions required to run SQL statements: SELECT, SHOW DATABASES, REPLICATION SLAVE, and REPLICATION CLIENT. You can grant them as follows:

    GRANT SELECT, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'Username'@'%';

  • Destination database connection account: The account must have the LOAD, SELECT, CREATE, ALTER, and DROP permissions on each table in the destination database.

NOTE:

  • You are advised to create dedicated database accounts for DataArts Migration connections so that password changes made for other purposes do not cause task failures.
  • If you change the password of a source or destination database account, update the connection information in Management Center as soon as possible. Otherwise, automatic retries after a task failure may lock the account.
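For reference, the following is a minimal sketch of granting these permissions in Doris. The privilege names follow Apache Doris conventions, and the database name demo_db is hypothetical; verify the exact syntax against your MRS Doris version.

  GRANT LOAD_PRIV, SELECT_PRIV, CREATE_PRIV, ALTER_PRIV, DROP_PRIV ON demo_db.* TO 'Username'@'%';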

Supported Synchronization Objects

The following table lists the objects that can be synchronized using different links in DataArts Migration.

Table 3 Synchronization objects

  • The INSERT, UPDATE, and DELETE DML operations can be synchronized.
  • Views, foreign keys, stored procedures, triggers, functions, events, virtual columns, unique constraints, and unique indexes cannot be synchronized.
  • Unlogged tables, temporary tables, system schemas, and system tables cannot be synchronized.
  • Among DDL operations, only column addition and deletion can be synchronized, and only in single-concurrency scenarios. Do not perform other DDL operations on the source database. Otherwise, the job may become abnormal.
  • Automatic table creation synchronizes table structures and NULL/NOT NULL constraints, but not default values.

Notes

In addition to the constraints on supported data sources and versions, connection account permissions, and synchronization objects, you also need to pay attention to the notes in the following table.

Table 4 Notes

Database:

  • The names of source databases, tables, and fields cannot contain non-ASCII characters or the special characters .<'>/\" (you are advised to use common characters to avoid failures).
  • Object names in the destination database must meet the following requirements:
    • A table name can contain a maximum of 64 characters, must start with a letter, and can contain only letters, digits, underscores (_), and hyphens (-).
    • A field name can contain a maximum of 255 characters. You are advised to use common characters and avoid special or non-ASCII characters such as Chinese characters.

Usage:

General:

  • During real-time synchronization, the IP addresses, ports, accounts, and passwords of the databases cannot be changed.
  • Do not restore the source database during synchronization.
  • Retain MySQL binlogs for more than three days, and do not forcibly clear them. You can verify the binlog configuration with the check below.
  • When a job is resumed after an exception or a pause, the recorded binlog position may have expired, in which case the job cannot be resumed. Monitor how long a job remains abnormal or paused against the binlog retention period.
  • During real-time synchronization, do not upgrade the source MySQL database across major versions. Otherwise, data may become inconsistent or the task may fail (data, table structures, and keywords may become incompatible after a cross-version upgrade). If the source MySQL database is upgraded across major versions, you are advised to create a new synchronization task.
  • Doris tables that use the aggregate model cannot be created automatically. They must be created manually.
  • MRS Doris supports HTTP and HTTPS. CloudTable Doris supports only HTTP.
  • Data can be imported only in Stream Load mode.
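As a quick sanity check of the source MySQL configuration, you can run the following statements. This is a hedged sketch: binlog_expire_logs_seconds exists in MySQL 8.0 (earlier versions use expire_logs_days), and ROW-format binlogs are typically required for real-time capture.

  SHOW VARIABLES LIKE 'log_bin';                     -- expected: ON
  SHOW VARIABLES LIKE 'binlog_format';               -- expected: ROW
  SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- retention; 259200 seconds = 3 days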

Full synchronization phase:

  • During task startup and full data synchronization, do not perform DDL operations on the source database. Otherwise, the task may fail.

Incremental synchronization phase:

  • Do not change the primary key (or the unique key, if there is no primary key) of a source table. Otherwise, incremental data may become inconsistent or the task may become abnormal.

Troubleshooting:

  • If any problem occurs during task creation, startup, full synchronization, incremental synchronization, or completion, rectify the fault by referring to FAQs.

Other:

  • Doris cannot use a string as the primary key, even as one of the fields in a composite primary key. A manually created table must therefore use a non-string key, as in the sketch below.
  • DDL operations cannot be synchronized in multi-concurrency scenarios.
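For reference, a minimal sketch of manually creating a Doris table with a non-string key. The database, table, and column names are hypothetical, and the model, bucket count, and properties should be adjusted to your environment:

  CREATE TABLE IF NOT EXISTS demo_db.orders (
    order_id BIGINT NOT NULL,    -- non-string key column
    order_status VARCHAR(32),
    update_time DATETIME
  )
  UNIQUE KEY (order_id)
  DISTRIBUTED BY HASH(order_id) BUCKETS 10
  PROPERTIES ("replication_num" = "3");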

Procedure

This section uses real-time synchronization from RDS for MySQL to Doris as an example to describe how to configure a real-time data migration job. Before that, ensure that you have read the instructions described in Performing a Check Before Using a Real-Time Job and completed all the preparations.

  1. Select the data connection type. Select MySQL for Source and Doris for Destination.

    Figure 1 Selecting the data connection type

  2. Select a job type. The default migration type is Real-time. The migration scenarios include Entire DB and Database/Table shard.

    Figure 2 Setting the migration job type

    For details about synchronization scenarios, see Synchronization Scenarios.

  3. Configure network resources. Select the created MySQL and Doris data connections and the migration resource group for which the network connection has been configured.

    Figure 3 Selecting data connections and a migration resource group

    If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.

    If no migration resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.

  4. Configure source parameters.

    Select the databases and tables to be synchronized based on the following table.

    Table 5 Selecting the databases and tables to be synchronized

    Entire DB:

    • Select synchronization objects.
      • Table-level synchronization: Synchronize multiple tables in multiple databases of a MySQL instance.
      • Database-level synchronization: Synchronize all tables in multiple databases of a MySQL instance.
    • Select the MySQL databases and tables to be migrated.
      Figure 4 Selecting databases and tables

    Both databases and tables can be customized. You can select one database and one table, or multiple databases and tables.

    Database/Table shard:

    Add a logical table.

    • Logical Table Name: Enter the name of the table to be written to Doris.
    • Source Database Filter: Enter a regular expression that matches all the source database shards whose data is to be written to the destination Doris table.
    • Source Table Filter: Enter a regular expression that matches all the table shards (within the matched database shards) whose data is to be written to the destination Doris table.
      For example, if the source consists of database shards db_00 to db_09 that each contain table shards order_00 to order_99, you can set Source Database Filter to db_\d+ and Source Table Filter to order_\d+.
      Figure 5 Adding a logical table

    You can click Preview in the Operation column to preview an added logical table. The more source tables the logical table covers, the longer the preview takes.

    Figure 6 Previewing a logical table

  5. Configure destination parameters.

    • Set Database and Table Matching Policy.

      For details about the matching policy between source and destination databases and tables in each synchronization scenario, see the following table.

      Table 6 Database and table matching policy

      Entire DB:

      • Database Matching Policy
        • Same name as the source database: Data will be synchronized to the Doris database with the same name as the source MySQL database.
        • Custom: Data will be synchronized to the Doris database you specify.
      • Table Matching Policy
        • Same name as the source table: Data will be synchronized to the Doris table with the same name as the source MySQL table.
        • Custom: Data will be synchronized to the Doris table you specify.
          Figure 7 Database and table matching policy in the entire database migration scenario
          NOTE:

          When you customize a matching policy, you can use the built-in variables #{source_db_name} and #{source_table_name} to reference the source database name and table name. The table matching policy must contain #{source_table_name}. For example, the custom policy ods_#{source_table_name} maps source table orders to Doris table ods_orders.

      Database/Table shard:

      • Destination Database Name: Data will be synchronized to the specified Doris database.
      • Table Matching Policy: By default, the value is the same as the name of the logical table entered in the source configuration.
        Figure 8 Database and table matching policy in the sharding scenario
    • Set Doris parameters.

      You can configure the advanced parameters in the following table to enable some advanced functions.

      Table 7 Doris advanced parameters

      • doris.request.connect.timeout.ms (int; default: 30000; unit: ms): Doris connection timeout interval.
      • doris.request.read.timeout.ms (int; default: 30000; unit: ms): Doris read timeout interval.
      • doris.request.retries (int; default: 3): Number of retries upon a Doris request failure.
      • sink.max-retries (int; default: 3): Maximum number of retries upon a data write failure.
      • sink.batch.interval (string; default: 1s; unit: h/min/s): Interval at which the asynchronous thread writes data.
      • sink.enable-delete (boolean; default: true): Whether to synchronize deletions. If this function is disabled, data deleted from the source is not deleted from the destination.
      • sink.batch.size (int; default: 20000): Maximum number of rows that can be written (inserted, updated, or deleted) at a time.
      • sink.batch.bytes (long; default: 10485760; unit: bytes): Maximum number of bytes that can be written (inserted, updated, or deleted) at a time.
      • logical.delete.enabled (boolean; default: false): Whether to enable logical deletion. If it is enabled, the destination table must contain the deletion flag column. When data is deleted from the source database, the corresponding destination row is not deleted; instead, its deletion flag column is set to true, indicating that the row no longer exists at the source.
      • logical.delete.column (string; default: logical_is_deleted): Name of the logical deletion column. You can customize the value.
      • sink.keyby.mode (string; default: pk): Partitioning mode for concurrent writes to Doris. The default value pk partitions data by primary key. If the source is Kafka and has no primary key, set this parameter to table to partition data by table name.
      • doris.sink.flush.tasks (int; default: 1): Number of concurrent flushes of a single TaskManager.
      • sink.properties.format (string; default: json): Data format used by Stream Load. The value can be json or csv.
      • sink.properties.Content-Encoding (string): Compression format of the HTTP message body. Currently, only CSV files can be compressed, and only the gzip format is supported.
      • sink.properties.compress_type (string): File compression format. Currently, only CSV files can be compressed. The gz, lzo, bz2, lz4, lzop, and deflate formats are supported.
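For example, to flush larger batches every 10 seconds and keep deleted rows as flagged records instead of removing them, you might set the following values (illustrative only; tune them for your workload):

      sink.batch.interval = 10s
      sink.batch.size = 50000
      logical.delete.enabled = true
      logical.delete.column = logical_is_deleted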

  6. Refresh and check the mapping between the source and destination tables. In addition, you can add additional fields and use the automatic table creation capability to create tables in the destination Doris database.

    Figure 9 Mapping between source and destination tables
    • Edit additional fields: Click Additional Field in the Operation column to add custom fields to the destination Doris table. For a new table, you can add additional fields to the existing fields in the source table. You can customize the field name, select the field type, and enter the field value.
      • Field Name: name of the new field in the destination Doris table
      • Field Type: type of the new field in the destination Doris table
      • Field Value: value source of the new field in the destination Doris table
        Table 8 Additional field value obtaining modes

        • Constant: any character string.
        • Built-in variable:
          • Source host IP address: source.host
          • Source schema name: source.schema
          • Source table name: source.table
          • Destination schema name: target.schema
          • Destination table name: target.table
          • Source data change time: source.event.timestamp
        • Source table field: any field in the source table.
          Do not change the name of the source table field while the job is running. Otherwise, the job may become abnormal.
        • UDF (see the example expressions after this step):
          • substring(#col, pos[, len]): obtains a substring of the specified length from source column #col. The substring range is [pos, pos+len).
          • date_format(#col, time_format[, src_tz, dst_tz]): formats source column #col using the specified time format. The src_tz and dst_tz arguments can be used to convert between time zones.
          • now([tz]): obtains the current time in the specified time zone.
          • if(cond_exp, str1, str2): returns str1 if the condition expression cond_exp is met, and str2 otherwise.
          • concat(#col[, #str, ...]): concatenates multiple parameters, which can be source columns and strings.
          • from_unixtime(#col[, time_format]): formats a Unix timestamp using the specified time format.
          • unix_timestamp(#col[, precision, time_format]): converts a time into a Unix timestamp of the specified time format and precision.
    • Automatic table creation: Click Auto Table Creation to automatically create tables in the destination database based on the configured mapping policy. After the tables are created, Existing table is displayed for them.

      • DataArts Migration can create tables automatically, but not databases or schemas. You need to manually create the destination databases and schemas before using this function.
      • For details about the field type mapping for automatic table creation, see Field Type Mapping.
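For illustration, a few hedged examples of additional-field UDF expressions built from the signatures in Table 8. The column names (order_no, create_time, province, city) are hypothetical:

      substring(#order_no, 0, 8)                 -- first eight characters of source column order_no
      date_format(#create_time, 'yyyy-MM-dd')    -- format source column create_time as a date string
      now('Asia/Shanghai')                       -- current time in the Asia/Shanghai time zone
      concat(#province, '-', #city)              -- concatenate two source columns with a hyphen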

  7. Submit and run the job.

    After configuring the job, click Submit in the upper left corner to submit the job.

    Figure 10 Submitting the job

    After submitting the job, click Start in the upper left corner. In the displayed dialog box, set required parameters and click OK.

    Figure 11 Starting the job
    Table 9 Parameters for starting the job

    • Synchronous Mode:
      • Incremental synchronization: Incremental data synchronization starts from a specified time point.
      • Full and incremental synchronization: All existing data is synchronized first, and then incremental data is synchronized in real time.
    • Time: This parameter is required for incremental synchronization and specifies the start time of incremental synchronization.
      NOTE:
      If you set a time earlier than the earliest binlog time, the latest log time is used.