
Configuring a Job for Synchronizing Data from Oracle to DMS for Kafka

Supported Source and Destination Database Versions

Table 1 Supported database versions

  • Source database: Oracle database (versions 10, 11, 12, and 19)
  • Destination database: Kafka cluster (versions 2.7 and 3.x)

Database Account Permissions

Before you use DataArts Migration for data synchronization, ensure that the source and destination database accounts meet the requirements in the following table. The required account permissions vary depending on the synchronization task type.

Table 2 Database account permissions

  • Source database connection account: The account must have the permissions to archive logs, query tables, and parse logs of the Oracle database. For details about how to grant the permissions, see How Do I Grant the Log Archiving, Query, and Parsing Permissions of an Oracle Data Source?
  • Destination database connection account: The MRS user must have read and write permissions on the corresponding Kafka topics, that is, the user must belong to the kafka/kafkaadmin/kafkasuperuser user group.

    NOTE:
    A common Kafka user can access a topic only after being granted the read and write permissions on the topic by the Kafka administrator.
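
Before configuring the job, you can quickly confirm that the destination account really has read and write access to a topic by producing and consuming a test record. The following is a minimal sketch, assuming the kafka-python client and placeholder broker and topic values; adapt the authentication settings to the security mode of your MRS Kafka instance.

```python
# Minimal Kafka read/write permission check (placeholder broker and topic values).
from kafka import KafkaProducer, KafkaConsumer

BOOTSTRAP = "kafka-broker-1:9092"   # placeholder: MRS Kafka broker address
TOPIC = "oracle_sync_test_topic"    # placeholder: a topic the account should access

# Write check: send one record and wait for the broker acknowledgement.
producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send(TOPIC, value=b"permission-check").get(timeout=10)
producer.close()

# Read check: consume from the beginning and stop after a short idle timeout.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
records = [msg.value for msg in consumer]
consumer.close()
print(f"Read {len(records)} record(s); read and write permissions look OK.")
```

If either call fails with an authorization error, ask the Kafka administrator to grant the read and write permissions described above.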

Supported Synchronization Objects

The following table lists the objects that can be synchronized using different links in DataArts Migration.

Table 3 Synchronization objects

  • The following DML operations can be synchronized: INSERT, UPDATE, and DELETE.
  • The DDL operation of adding columns can be synchronized.
  • Only primary key tables can be synchronized.
  • Views, foreign keys, stored procedures, triggers, functions, events, virtual columns, unique constraints, and unique indexes cannot be synchronized.
  • Table structures, common indexes, constraints (primary key, null, and non-null), and comments can be synchronized during automatic table creation.

Important Notes

In addition to the constraints on supported data sources and versions, connection account permissions, and synchronization objects, you also need to pay attention to the notes in the following table.

Table 4 Important notes

Database

  • The names of the source databases, tables, and fields cannot contain periods (.), hyphens (-), or non-ASCII characters. You are advised to use common characters to avoid job failures.

Usage

General:

  • During real-time synchronization, the IP addresses, ports, accounts, and passwords cannot be changed.
  • The archived logs of the Oracle database should be retained for more than three days (see the pre-check sketch after this table).
  • RESETLOGS operations cannot be performed on the source Oracle database. Otherwise, data cannot be synchronized and the job cannot be recovered.
  • The username (schema name) of the source Oracle database cannot be changed, neither by modifying the USER$ dictionary table in versions earlier than 11.2.0.2 nor by running the ALTER USER username RENAME TO new_username command in 11.2.0.2 and later versions.
  • If the source is an Oracle database, CLOB, NCLOB, and BLOB data cannot be migrated.
  • Oracle RAC clusters cannot be used as the source.
  • If the source is an Oracle database, it can connect to the standby database of a single-instance Oracle deployment but not to a standby database in a RAC cluster. Only archived logs can be read from the standby database; standby redo logs cannot. When connecting to a standby database, you are advised to configure scheduled archiving to reduce the data synchronization latency.
  • It is recommended that the network bandwidth be greater than 100 Mbit/s.

Full synchronization phase:

During task startup and full data synchronization, do not perform DDL operations on the source database. Otherwise, the task may fail.

Incremental synchronization phase:

  • DML operations INSERT, UPDATE, and DELETE can be synchronized.
  • The DDL operation of adding columns can be synchronized.
  • Mixed partitioned tables are not supported. When data in the external partition of a mixed partitioned table changes, no DML log is generated. As a result, the change information cannot be obtained during incremental data synchronization, which may cause data inconsistency.
  • Oracle LogMiner is used to read Oracle logs, so table names and column names can contain a maximum of 30 characters. For details, see Using LogMiner to Analyze Redo Log Files.
  • When starting a job in incremental mode, you select a start time point. Ensure that the Oracle database and the host it runs on are in the same time zone so that the incremental start point is accurate.
  • Distributed transactions (XA transactions) and PARALLEL DML on Oracle cannot be incrementally synchronized.
  • During incremental synchronization, Oracle extended characters are not supported because the standard character set cannot parse Oracle custom extended characters.
  • During incremental synchronization, triggers cannot be synchronized or migrated. You must disable the triggers on the destination database.
  • Data with foreign key constraints cannot be incrementally synchronized or migrated.
  • Data imported to the source database using Oracle Data Pump cannot be incrementally synchronized or migrated.

Troubleshooting:

If any problem occurs during task creation, startup, full synchronization, incremental synchronization, or completion, rectify the fault by referring to FAQs.

Other

  • During automatic table creation, the length of the char, varchar, nvarchar, enum, and set characters in the source database is automatically increased by byte in the destination GaussDB(DWS) database.
  • For a full and incremental job or an incremental job that migrates data from an Oracle database, if you want to synchronize tables in the PDB database, you need to enter the username and password of the CDB database in the Oracle connection. This is because Oracle logs are stored in the CDB database and Oracle LogMiner can run only in the CDB database.
  • To delete a table using DDL, run the drop table test_table_name purge command.

    By default, deleting a table in the Oracle database is a high-risk operation. When the drop table test_table_name command is executed, the Oracle database converts it to rename table test_table_name as xxxxx, that is, the table is renamed and kept in the recycle bin as a to-be-processed table instead of being deleted, and DataArts Migration ignores this syntax by default. The statement drop table test_table_name purge completely deletes the table, and DataArts Migration automatically identifies it and delivers the table deletion to the downstream.

  • Full and incremental synchronization does not support hidden columns (UNUSED, INVISIBLE).
  • During incremental synchronization from the PDB database, all PDBs must be enabled due to restrictions of Oracle LogMiner.
  • Tables with no primary keys are not supported.
  • Data control language (DCL) operations are not supported.
  • Consecutive RENAME TABLE operations cannot be synchronized or migrated. Otherwise, the task may fail.
  • Global temporary tables cannot be synchronized or migrated (the task continues to run properly).
  • Tables that contain default value functions cannot be synchronized or migrated. Otherwise, data will be inconsistent.
  • Tables with default values that contain expressions cannot be synchronized or migrated.
  • Foreign tables cannot be synchronized or migrated.
  • Computed columns and encrypted columns cannot be synchronized or migrated.
  • Virtual private databases (VPDs) cannot be synchronized or migrated.
  • Jobs created by dbms_scheduler and dbms_job cannot be synchronized or migrated.
  • Schema name changes cannot be synchronized or migrated.
  • Nested tables cannot be synchronized or migrated. Otherwise, an error will occur.
  • Materialized views cannot be synchronized or migrated.
  • DDL operations with attribute names that contain keywords or special characters cannot be synchronized or migrated.
  • ROWID change operations (such as split partition, table move, table shrink, and move partition key) are not supported. Otherwise, data inconsistency or task failures may occur.
  • The Secure Sockets Layer (SSL) encryption transmission mode is not supported.
  • The Oracle Label Security mode is not supported.
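
Several of the source-side notes above (ARCHIVELOG mode, an archived-log window of more than three days, and the naming restrictions) can be verified before the job is created. The following is a rough pre-check sketch, assuming the python-oracledb driver and placeholder connection details and schema name; it queries only standard Oracle dictionary views (v$database, v$archived_log, and all_tables).

```python
# Rough source-side pre-check sketch (placeholder connection details).
import datetime
import re

import oracledb

conn = oracledb.connect(user="sync_user", password="***",
                        dsn="oracle-host:1521/ORCLCDB")  # placeholder DSN
cur = conn.cursor()

# 1. The database must run in ARCHIVELOG mode so that logs can be parsed.
cur.execute("SELECT log_mode FROM v$database")
print("ARCHIVELOG mode:", cur.fetchone()[0] == "ARCHIVELOG")

# 2. Archived logs should cover more than three days (see the notes above).
cur.execute("SELECT MIN(first_time) FROM v$archived_log WHERE deleted = 'NO'")
oldest = cur.fetchone()[0]
if oldest is not None:
    print("Archived-log window (days):", (datetime.datetime.now() - oldest).days)

# 3. Table names should not contain periods, hyphens, or non-ASCII characters.
cur.execute("SELECT table_name FROM all_tables WHERE owner = :owner",
            owner="SYNC_SCHEMA")  # placeholder schema name
risky = [name for (name,) in cur if not re.fullmatch(r"[A-Za-z0-9_$#]+", name)]
print("Tables with risky names:", risky)

cur.close()
conn.close()
```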

Procedure

This section uses real-time synchronization from Oracle to DMS for Kafka as an example to describe how to configure a real-time data migration job. Before that, ensure that you have read the instructions described in Check Before Use and completed all the preparations.

  1. Create a real-time migration job by following the instructions in Creating a Real-Time Migration Job and go to the job configuration page.
  2. Select the data connection type. Select Oracle for Source and DMS for Kafka for Destination.

    Figure 1 Selecting the data connection type

  3. Select a job type. The default migration type is Real-time. The migration scenario is Entire DB.

    Figure 2 Setting the migration job type

    For details about synchronization scenarios, see Synchronization Scenarios.

  4. Configure network resources. Select the created Oracle and DMS for Kafka data connections and the migration resource group for which the network connection has been configured.

    Figure 3 Selecting data connections and a migration resource group

    If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.

    If no migration resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.

  5. Check the network connectivity. After the data connections and migration resource group are configured, test the connectivity between the data sources and the migration resource group.

  6. Configure source parameters.

    Select the databases and tables to be synchronized based on the following table.

    Table 5 Selecting the databases and tables to be synchronized

    • Entire DB: Select the Oracle databases and tables to be migrated. Both databases and tables can be customized. You can select one database and one table, or multiple databases and tables.

      Figure 4 Selecting databases and tables

  7. Configure destination parameters.

    Figure 5 Kafka destination parameters
    • Destination Topic Name Rule

      Configure the rule for mapping source Oracle database tables to destination Kafka topics.

    • Kafka Partition Synchronization Policy

      The following three policies are available for synchronizing source data to specified partitions of destination Kafka topics:

      • To partition 0
      • To different partitions based on the hash values of database names/table names
      • To different partitions based on the hash values of table primary keys

        If a source table has no primary key, its data is synchronized to partition 0 at the destination by default. You can verify how records are distributed across partitions with a consumer, as shown in the sketch after this list.

    • Partitions of New Topic

      If the destination Kafka instance does not have the corresponding topic, DataArts Migration automatically creates the topic with three partitions.

    • Destination Kafka Attributes

      You can add Kafka configuration items with the properties. prefix. The job automatically removes the prefix and passes the configuration items to the underlying Kafka client. For details about these parameters, see the configuration descriptions in the Kafka documentation.
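
    To confirm that the partition synchronization policy behaves as expected, you can check which partition each synchronized record lands in after the job has run. Below is a minimal sketch, assuming the kafka-python client and placeholder broker and destination topic names.

```python
# Inspect how synchronized records are distributed across topic partitions
# (placeholder broker and topic names).
from collections import Counter

from kafka import KafkaConsumer

BOOTSTRAP = "kafka-broker-1:9092"   # placeholder: MRS Kafka broker address
TOPIC = "ORACLE_SCHEMA.TABLE_A"     # placeholder: destination topic name

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,      # stop after 10 s without new records
)

per_partition = Counter()
for record in consumer:
    per_partition[record.partition] += 1
consumer.close()

# With the "To partition 0" policy only partition 0 should appear here; with a
# hash-based policy, records should spread across the topic's partitions.
for partition, count in sorted(per_partition.items()):
    print(f"partition {partition}: {count} record(s)")
```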

  8. Update the mapping between the source table and destination table and check whether the mapping is correct.

    Figure 6 Mapping between source and destination tables

  9. Configure task parameters.

    Table 6 Task parameters

    • Execution Memory: Memory allocated for job execution, which automatically changes with the number of CPU cores. Default value: 8 GB.
    • CPU Cores: Value range: 2 to 32. For each CPU core added, 4 GB of execution memory is added and the maximum concurrency increases by 1 (see the sizing sketch after this table). Default value: 2.
    • Maximum Concurrent Requests: Maximum number of jobs that can be concurrently executed. This parameter does not need to be configured; it automatically changes with the number of CPU cores. Default value: 1.
    • Auto Retry: Whether to enable automatic retry upon a job failure. Default value: No.
    • Maximum Retries: Displayed when Auto Retry is set to Yes. Default value: 1.
    • Retry Interval (Seconds): Displayed when Auto Retry is set to Yes. Default value: 120.
    • Write Dirty Data: Whether to record dirty data. By default, dirty data is not recorded. A large amount of dirty data affects the synchronization speed of the task. Default value: No.
      • No: Dirty data is not recorded. This is the default value. Dirty data is not allowed; if dirty data is generated during the synchronization, the task fails and exits.
      • Yes: Dirty data is allowed, that is, dirty data does not affect task execution. When dirty data is allowed and a threshold is set:
        • If the generated dirty data is within the threshold, the synchronization task ignores it (that is, the dirty data is not written to the destination) and continues normally.
        • If the generated dirty data exceeds the threshold, the synchronization task fails and exits.

      NOTE:
      Criteria for determining dirty data: dirty data is data that is meaningless to services, is in an invalid format, or is generated when the synchronization task encounters an error. If an exception occurs when a piece of data is written to the destination, that piece of data is dirty data; therefore, any data that fails to be written is classified as dirty data.
      For example, if data of the VARCHAR type at the source is written to a destination column of the INT type, the data cannot be written to the destination due to an invalid conversion. When configuring a synchronization task, you can specify whether to write dirty data during the synchronization and set the number of dirty data records allowed (the maximum number of error records allowed in a single partition) to keep the task running. When the number of dirty data records exceeds the threshold, the task fails and exits.

    • Dirty Data Policy: Displayed when Write Dirty Data is set to Yes. Default value: Do not archive. The following policies are supported:
      • Do not archive: Dirty data is only recorded in job logs, but not stored.
      • Archive to OBS: Dirty data is stored in OBS and printed in job logs.
    • Write Dirty Data Link: Displayed when Dirty Data Policy is set to Archive to OBS. Only links to OBS support dirty data writes.
    • Dirty Data Directory: OBS directory to which dirty data will be written.
    • Dirty Data Threshold: Displayed when Write Dirty Data is set to Yes. Set the dirty data threshold as required. Default value: 100.

      NOTE:
      • The dirty data threshold takes effect for each concurrency. For example, if the threshold is 100 and the concurrency is 3, the job allows a maximum of 300 dirty data records.
      • The value -1 indicates that the number of dirty data records is not limited.

    • Custom attributes: You can add custom attributes to modify some job parameters and enable some advanced functions. For details, see Job Performance Optimization.
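
    The scaling rules above can be expressed as a small sizing helper. The formulas below are inferred from the stated defaults (2 cores, 8 GB of memory, concurrency 1), the rule that each extra CPU core adds 4 GB of memory and 1 concurrency, and the per-concurrency dirty data example; treat this as an illustrative sketch rather than guaranteed product behavior.

```python
# Illustrative sizing helper inferred from the rules in Table 6:
#   defaults: 2 cores -> 8 GB execution memory, maximum concurrency 1
#   each extra core adds 4 GB of memory and 1 to the maximum concurrency
#   the dirty data threshold applies per concurrency (-1 means unlimited)
def estimate_job_resources(cpu_cores: int, dirty_threshold: int = 100):
    if not 2 <= cpu_cores <= 32:
        raise ValueError("CPU cores must be in the range 2 to 32")
    extra_cores = cpu_cores - 2
    memory_gb = 8 + 4 * extra_cores        # execution memory
    concurrency = 1 + extra_cores          # maximum concurrent requests
    max_dirty = None if dirty_threshold == -1 else dirty_threshold * concurrency
    return memory_gb, concurrency, max_dirty

# Example: 4 cores with the default threshold of 100 -> (16, 3, 300),
# matching the "threshold 100 x concurrency 3 = 300" note above.
print(estimate_job_resources(4))
```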

  10. Submit and run the job.

    After configuring the job, click Submit in the upper left corner to submit the job.

    Figure 7 Submitting the job

    After submitting the job, click Start on the job development page. In the displayed dialog box, set required parameters and click OK.

    Figure 8 Starting the job
    Table 7 Parameters for starting the job

    • Synchronous Mode:
      • Incremental synchronization: Incremental data synchronization starts from a specified time point.
      • Full and incremental synchronization: All data is synchronized first, and then incremental data is synchronized in real time.
    • Time: This parameter must be set for incremental synchronization. It specifies the start time of incremental synchronization.

      NOTE:
      If you set a time earlier than the earliest log time available in the source database, the latest log time is used.

  11. Monitor the job.

    On the job development page, click Monitor to go to the Job Monitoring page. You can view the status and log of the job, and configure alarm rules for the job. For details, see Real-Time Migration Job O&M.

    Figure 9 Monitoring the job

Performance Optimization

If the synchronization speed is too slow, rectify the fault by referring to Job Performance Optimization.