
Configuring a Job for Synchronizing Data from PostgreSQL to Kafka

Supported Source and Destination Database Versions

Table 1 Supported database versions

Source Database: PostgreSQL database of version 9.4, 9.5, 9.6, 10, 11, 12, 13, 14, 15, or 16

Destination Database: Kafka cluster (2.7 and 3.x)

Database Account Permissions

Before you use DataArts Migration for data synchronization, ensure that the source and destination database accounts meet the requirements in the following table. The required account permissions vary depending on the synchronization task type.

Table 2 Database account permissions

Source database connection account

Required permissions: database CONNECT permission, schema USAGE permission, table SELECT permission, sequence SELECT permission, and REPLICATION connection permission

NOTE:

To add the permission to create replication connections, perform the following steps:

  • Add host replication <src_user_name> <migration_instance_ip>/32 <authentication mode> before all other configuration lines in the pg_hba.conf file of the source database. For details about authentication modes, see the pg_hba.conf documentation. A common authentication mode is scram-sha-256.
  • Run select pg_reload_conf(); in the source database as a user with the SUPERUSER permission, or restart the DB instance to apply the change. An example follows.
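For example, assuming the migration user is migration_user and the migration instance IP address is 10.10.10.10 (both illustrative), the pg_hba.conf entry might look as follows:

    host replication migration_user 10.10.10.10/32 scram-sha-256

Then run the following statement in the source database as a user with the SUPERUSER permission:

    select pg_reload_conf();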

Destination database connection account

Required permissions: The MRS user must have read and write permissions on the corresponding Kafka topics, that is, the user must belong to the kafka, kafkaadmin, or kafkasuperuser user group.

NOTE:

A common Kafka user can access a topic only after being granted the read and write permissions on the topic by the Kafka administrator.
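If ACLs are enabled in the Kafka cluster, the administrator can grant these permissions with the standard Kafka ACL tool. The following command is a generic sketch (the principal, topic name, and broker address are illustrative, and the exact procedure on MRS may differ):

    kafka-acls.sh --bootstrap-server broker1:9092 --add --allow-principal User:migration_user --operation Read --operation Write --topic my_topic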

Supported Synchronization Objects

The following table lists the objects that can be synchronized using different links in DataArts Migration.

Table 3 Synchronization objects

Synchronization objects

  • DML and DDL can be synchronized.
  • Only tables with primary keys can be synchronized (a query for finding tables without one is shown after this list).
  • Views, foreign keys, stored procedures, triggers, functions, events, virtual columns, unique constraints, and unique indexes cannot be synchronized.
  • Unlogged tables, temporary tables, system schemas, and system tables cannot be synchronized.
  • Table structures, common indexes, constraints (primary key, null, and non-null), and comments cannot be synchronized during automatic table creation.
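Because only tables with primary keys can be synchronized, it helps to identify unqualified tables before configuring the job. The following query is an illustrative sketch (adjust the schema filter as needed) that lists ordinary tables in the public schema without a primary key:

    select c.relname
    from pg_class c
    join pg_namespace n on n.oid = c.relnamespace
    where c.relkind = 'r'
      and n.nspname = 'public'
      and not exists (
        select 1 from pg_constraint con
        where con.conrelid = c.oid and con.contype = 'p'
      );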

Important Notes

In addition to the constraints on supported data sources and versions, connection account permissions, and synchronization objects, you also need to pay attention to the notes in the following table.

Table 4 Important notes

Database

  • The database name cannot contain +"%'\<>. The schema name and table name cannot contain ".'<>. The column name cannot contain " or '. The column name cannot be CTID, XMIN, CMIN, XMAX, CMAX, TABLEOID, XC_NODE_ID, TID, or any other field forbidden by GaussDB(DWS). You are advised to use common characters to avoid task failures.
  • The name of an object in the destination database must contain 1 to 63 characters, start with a letter or underscore (_), and can contain letters, digits, underscores (_), and dollar signs ($).
  • Triggers on partitioned tables in the source database cannot be set to disabled.
  • If incremental synchronization is required, the pg_hba.conf file of the source database must contain the following configuration:
    host replication all 0.0.0.0/0 md5

Usage

General:

  • During real-time synchronization, the IP addresses, ports, accounts, and passwords cannot be changed.
  • The WAL logs of the PostgreSQL database should be retained for more than three days.
  • Source tables must be published in dbz_publication. Run select * from pg_publication_tables; to query the PUBLICATION of all tables, and run ALTER PUBLICATION name ADD TABLE [ ONLY ] table_name [ * ] [, ...] to add tables to the publication, as shown in the example below.
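For example, to check which tables are currently published and add a missing one (the table name public.orders is illustrative):

    -- List tables currently included in publications:
    select * from pg_publication_tables;
    -- Add a table to the dbz_publication publication:
    ALTER PUBLICATION dbz_publication ADD TABLE public.orders;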

Full synchronization phase:

During task startup and full data synchronization, do not perform DDL operations on the source database. Otherwise, the task may fail.

Incremental synchronization phase:

  • Do not change the primary key or unique key (if the primary key does not exist) of the source database table. Otherwise, incremental data may be inconsistent or the task may fail.
  • Do not modify the replica identity attribute of tables in the source database. Otherwise, incremental data may be inconsistent or the task may fail.
  • When the number of replication slots of the PostgreSQL data source has reached the upper limit, new jobs cannot be executed. You can raise the upper limit by increasing max_replication_slots, or manually delete replication slots that are no longer needed (they are not deleted automatically), as shown in the example below.
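The following statements are an illustrative sketch for checking and removing replication slots in the source database (slot names vary by job):

    -- View existing replication slots and whether they are active:
    select slot_name, active from pg_replication_slots;
    -- Check the configured upper limit:
    show max_replication_slots;
    -- Drop an inactive slot that is no longer needed:
    select pg_drop_replication_slot('slot_name');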

Troubleshooting:

If any problem occurs during task creation, startup, full synchronization, incremental synchronization, or completion, rectify the fault by referring to .

Other

  • Before starting the job, ensure that no long transaction is open in the source database. An open long transaction blocks the creation of the logical replication slot, causing the task to fail. (A query for spotting long-running transactions is shown below.)
  • After the job is started, an active/standby switchover is not allowed for the source database.
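The following query is an illustrative way to spot sessions with long-open transactions (what counts as "long" depends on your workload):

    -- List sessions with open transactions, oldest first:
    select pid, usename, xact_start, state, query
    from pg_stat_activity
    where xact_start is not null
    order by xact_start;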

Procedure

This section uses real-time synchronization from PostgreSQL to MRS Kafka as an example to describe how to configure a real-time data migration job. Before that, ensure that you have read the instructions described in Check Before Use and completed all the preparations.

  1. Create a real-time migration job by following the instructions in Creating a Real-Time Migration Job and go to the job configuration page.
  2. Select the data connection type. Select PostgreSQL for Source and MRS Kafka for Destination.

    Figure 1 Selecting the data connection type

  3. Select a job type. The default migration type is Real-time. The migration scenario is Entire DB.

    Figure 2 Setting the migration job type

    For details about synchronization scenarios, see Synchronization Scenarios.

  4. Configure network resources. Select the created PostgreSQL and MRS Kafka data connections and the migration resource group for which the network connection has been configured.

    Figure 3 Selecting data connections and a migration resource group

    If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.

  5. Configure source parameters.

    Select the databases and tables to be synchronized based on the following table.

    Table 5 Selecting the databases and tables to be synchronized

    Synchronization Scenario: Entire DB

    Configuration Method: Select the PostgreSQL databases and tables to be migrated.
    Figure 4 Selecting databases and tables

    Both databases and tables can be customized. You can select one database and one table, or multiple databases and tables.

  6. Configure destination parameters.

    Figure 5 Kafka destination parameters
    • Destination Topic Name Rule

      Configure the rule for mapping source database tables to destination Kafka topics. You can specify a fixed topic or use built-in variables to synchronize data from source tables to destination topics.

      The following built-in variables are available (an example follows the list):

      • Source database name: #{source_db_name}
      • Source table name: #{source_table_name}
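      For example, a rule such as topic_#{source_db_name}_#{source_table_name} (an illustrative pattern) would write data from table orders in database sales to the destination topic topic_sales_orders.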
    • Kafka Partition Synchronization Policy

      The following three policies are available for synchronizing source data to specified partitions of destination Kafka topics:

      • To Partition 0
      • To different partitions based on the hash values of database names/table names
      • To different partitions based on the hash values of table primary keys

        If the source has no primary key, data is synchronized to partition 0 at the destination by default.

    • Partitions of New Topic

      If the destination Kafka does not have the corresponding topic, the topic automatically created by DataArts Migration has three partitions.

    • Destination Kafka Attributes

      You can set Kafka attributes by adding the properties. prefix to the attribute names. The job automatically removes the prefix and passes the attributes to the Kafka client, as in the example below. For details about the parameters, see the configuration descriptions in the Kafka documentation.
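      For example (both attribute names come from the standard Kafka producer configuration; the values are illustrative):

        properties.acks=all
        properties.batch.size=32768

      With the prefix removed, acks=all and batch.size=32768 are passed to the Kafka producer client.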

    • Advanced settings

      You can add custom attributes in the Configure Task area to enable some advanced functions. For details about the parameters, see the following table.

      Figure 6 Adding custom attributes
      Table 6 Advanced parameters of the job for migrating data from PostgreSQL to Kafka

      Parameter: sink.delivery-guarantee

      Type: string

      Default Value: at-least-once

      Unit: N/A

      Description: Delivery guarantee used when Flink writes data to Kafka:

      • at-least-once: At a checkpoint, the system waits for all data in the Kafka buffer to be confirmed by the Kafka producer. No message will be lost due to events that occur on the Kafka broker. However, duplicate messages may be generated when Flink is restarted because Flink processes old data again.
      • exactly-once: In this mode, the Kafka sink writes all data through the transactions submitted at a checkpoint. Therefore, if the consumer reads only submitted data, duplicate data will not be generated when Flink is restarted. However, data is visible only when a checkpoint is complete, so you need to adjust the checkpoint interval as needed.
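      Note that for exactly-once semantics to hold end to end, downstream consumers generally must also set the standard Kafka consumer property isolation.level=read_committed; otherwise they may read messages from transactions that are later aborted.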

  7. Update the mapping between the source table and destination table and check whether the mapping is correct.

    Figure 7 Mapping between source and destination tables

  8. Configure task parameters.

    Table 7 Task parameters

    Execution Memory: Memory allocated for job execution, which automatically changes with the number of CPU cores. Default value: 8 GB.

    CPU Cores: Value range: 2 to 32. For each CPU core added, 4 GB of execution memory and one degree of concurrency are added automatically (see the example after this table). Default value: 2.

    Maximum Concurrent Requests: Maximum number of jobs that can be executed concurrently. This parameter does not need to be configured; it changes automatically with the number of CPU cores. Default value: 1.

    Custom attributes: You can add custom attributes to modify some job parameters and enable advanced functions. For details, see Job Performance Optimization. Default value: N/A.
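    For example, under these rules a job configured with 4 CPU cores (two more than the default) would get 16 GB of execution memory and a maximum concurrency of 3.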

  9. Submit and run the job.

    After configuring the job, click Submit in the upper left corner to submit the job.

    Figure 8 Submitting the job

    After submitting the job, click Start on the job development page. In the displayed dialog box, set required parameters and click OK.

    Figure 9 Starting the job
    Table 8 Parameters for starting the job

    Synchronous Mode:
    • Incremental synchronization: Incremental data synchronization starts from a specified time point.
    • Full and incremental synchronization: All data is synchronized first, and then incremental data is synchronized in real time.

  10. Monitor the job.

    On the job development page, click Monitor to go to the Job Monitoring page. You can view the status and log of the job, and configure alarm rules for the job. For details, see Real-Time Migration Job O&M.

    Figure 10 Monitoring the job

Performance Optimization

If the synchronization speed is too slow, rectify the fault by referring to Job Performance Optimization.