Updated on 2025-09-09 GMT+08:00

Configuring a Job for Synchronizing Data from PostgreSQL to DWS

Supported Source and Destination Database Versions

Table 1 Supported database versions

Source Database: PostgreSQL database of version 9.4, 9.5, 9.6, 10, 11, 12, 13, 14, 15, or 16

Destination Database: DWS cluster of version 8.1.3, 8.2.0, or a later version (DWS 3.0 is not supported)

Database Account Permissions

Before you use DataArts Migration for data synchronization, ensure that the source and destination database accounts meet the requirements in the following table. The required account permissions vary depending on the synchronization task type.

Table 2 Database account permissions

Type

Required Permissions

Source database account

  • Database CONNECT permission, schema USAGE permission, table SELECT permission, sequence SELECT permission, REPLICATION connection permission, and the permission to execute the pg_ls_waldir() function. For details, see How Do I Add Additional Permissions for PostgreSQL and GaussDB Data Sources? For an example set of GRANT statements, see the SQL sketch after this table.
    NOTE:

    To add the permission to create replication connections, perform the following steps:

    • Add host replication <src_user_name> <drs_instance_ip>/32 <Authentication mode> before all other entries in the pg_hba.conf file of the source database. For details about authentication modes, see the pg_hba.conf documentation. A common authentication mode is scram-sha-256.
    • Run select pg_reload_conf(); in the source database as a user with the SUPERUSER permission, or restart the DB instance, to apply the change.

Destination database account

The destination database account must have the INSERT, SELECT, UPDATE, and DELETE permissions on each destination table, as well as the CONNECT and CREATE permissions on the destination database.

  • You are advised to create independent database accounts for DataArts Migration task connections to prevent task failures caused by password changes.
  • After changing the password of the source or destination database account, update the connection information in Management Center as soon as possible. Otherwise, automatic retries after a task failure may lock the database account.
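
The following is a minimal SQL sketch of the grants described above. All account and object names (db1, schema1, drs_user, dws_db, dws_schema, dws_user) are hypothetical, and DWS syntax is assumed to follow PostgreSQL; verify the statements against your database versions.

    -- Source PostgreSQL account (hypothetical names)
    GRANT CONNECT ON DATABASE db1 TO drs_user;
    GRANT USAGE ON SCHEMA schema1 TO drs_user;
    GRANT SELECT ON ALL TABLES IN SCHEMA schema1 TO drs_user;
    GRANT SELECT ON ALL SEQUENCES IN SCHEMA schema1 TO drs_user;
    ALTER ROLE drs_user REPLICATION;                          -- allow replication connections
    GRANT EXECUTE ON FUNCTION pg_ls_waldir() TO drs_user;     -- PostgreSQL 10 or later
    SELECT pg_reload_conf();                                  -- reload after editing pg_hba.conf

    -- Destination DWS account (hypothetical names)
    GRANT CONNECT, CREATE ON DATABASE dws_db TO dws_user;
    GRANT USAGE, CREATE ON SCHEMA dws_schema TO dws_user;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA dws_schema TO dws_user;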

Supported Synchronization Objects

The following table lists the objects that can be synchronized using different links in DataArts Migration.

Table 3 Synchronization objects

Type

Note

Synchronization objects

  • DML operations INSERT, UPDATE, and DELETE can be synchronized.
  • DDL operations cannot be synchronized.
  • Only primary key tables can be synchronized.
  • Views, foreign keys, stored procedures, triggers, functions, events, virtual columns, unique constraints, and unique indexes cannot be synchronized.
  • Unlogged tables, temporary tables, system schemas, and system tables cannot be synchronized.
  • Table structures, common indexes, constraints (primary key, NULL, and NOT NULL), and comments can be synchronized during automatic table creation.

Important Notes

In addition to the constraints on supported data sources and versions, connection account permissions, and synchronization objects, you also need to pay attention to the notes in the following table.

Table 4 Important notes

Type

Restriction

Database

  • The database name cannot contain +"%'\<>. The schema name and table name cannot contain ".'<>. The column name cannot contain " or '. The column name cannot be CTID, XMIN, CMIN, XMAX, CMAX, TABLEOID, XC_NODE_ID, TID, or any other field forbidden by DWS. You are advised to use common characters to avoid task failures.
  • The name of an object in the destination database must contain 1 to 63 characters, start with a letter or underscore (_), and can contain letters, digits, underscores (_), and dollar signs ($).
  • Partition table triggers in the source database cannot be set to disabled.
  • If incremental synchronization is required, the pg_hba.conf file of the source database must contain the following configuration:
    host replication all 0.0.0.0/0 md5

Usage

General:

  • During real-time synchronization, the IP addresses, ports, accounts, and passwords cannot be changed.
  • When a job is started for the first time, no start point can be specified. Logs are retained for six hours by default. That is, after the job is started for the first time and runs for x hours, you can stop it and specify a start point within the previous x hours (x cannot exceed the log retention period). You can change the log retention period through the slot.flush.delay.time parameter, which you can add by clicking Add Custom Attribute in Configure Task. The value of slot.flush.delay.time can be in formats such as 1d, 1day, 2days, 1h, 1hour, 2hours, 1m, 1minute, or 2minutes.
  • By clicking Add Custom Attribute in Configure Task, you can add the custom.slot.name parameter to specify an inactive replication slot for incremental synchronization.
  • Only logical replication slots with the pgoutput plug-in are supported. The slot creation command is SELECT * FROM pg_create_logical_replication_slot('your_slot_name', 'pgoutput');.
  • Source tables must be published in dbz_publication (see the SQL sketch below). Run "select * from pg_publication_tables;" to query the publications that tables belong to, and run "ALTER PUBLICATION name ADD TABLE [ ONLY ] table_name [ * ] [, ...]" to add tables to the publication.
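
A minimal SQL sketch of these operations, run in the source database. The table name schema1.table1 is hypothetical; replace it with your own source table.

    SELECT * FROM pg_create_logical_replication_slot('your_slot_name', 'pgoutput');  -- create an inactive pgoutput slot
    SELECT * FROM pg_publication_tables;                                              -- check which tables are already published
    CREATE PUBLICATION dbz_publication;                                               -- only if the publication does not exist yet
    ALTER PUBLICATION dbz_publication ADD TABLE schema1.table1;                       -- add the source table to the publication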

Full synchronization phase:

During task startup and full data synchronization, do not perform DDL operations on the source database. Otherwise, the task may fail.

Incremental synchronization phase:

  • Do not change the primary key or unique key (if the primary key does not exist) of the source database table. Otherwise, incremental data may be inconsistent or the task may fail.
  • Do not modify the replica identity attribute of tables in the source database. Otherwise, incremental data may be inconsistent or the task may fail.
  • When the number of replication slots of the PostgreSQL data source reaches the upper limit, new jobs cannot be executed. You can increase the upper limit by adjusting max_replication_slots, or manually delete replication slots that are no longer needed (automatic deletion is not supported); see the sketch below. For details about how to delete replication slots, see How Do I Manually Delete Replication Slots from a PostgreSQL Data Source?
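
A minimal SQL sketch for checking replication slot usage in the source database. The slot name obsolete_slot_name is hypothetical; only drop slots that are not used by any job. Changing max_replication_slots requires editing postgresql.conf and restarting the instance.

    SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;  -- list existing slots and whether they are active
    SHOW max_replication_slots;                                             -- current upper limit on the number of slots
    SELECT pg_drop_replication_slot('obsolete_slot_name');                  -- manually drop a slot that is no longer needed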

Troubleshooting:

If any problem occurs during task creation, startup, full synchronization, incremental synchronization, or completion, rectify the fault by referring to FAQs.

Other

  • The block_size value of the destination database must be greater than that of the source database.
  • Before starting the job, ensure that no long transaction is running in the source database. A long transaction blocks the creation of the logical replication slot and causes the task to fail (see the query sketch after this list).
  • After the job is started, the active/standby switchover is not allowed for the source database.
  • Tables in the destination database can contain more columns than the corresponding source tables. However, the following failure cases must be avoided:
    • If the extra columns in the destination database cannot be null and have no default values, newly inserted records synchronized from the source leave those columns empty, which violates the destination constraints and causes the task to fail.
    • If the extra columns in the destination database have fixed default values and a unique constraint, newly inserted records synchronized from the source all receive the same default value, which violates the unique constraint.
  • During automatic table creation, the length of char, varchar, nvarchar, enum, and set fields in the source database is automatically increased (the length is measured in bytes) in the destination DWS database.
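
A minimal sketch of the long-transaction check mentioned above. The 10-minute threshold is only an example; adjust it as needed.

    SELECT pid, usename, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND now() - xact_start > interval '10 minutes'
    ORDER BY xact_start;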

Procedure

This section uses real-time synchronization from PostgreSQL to DWS as an example to describe how to configure a real-time data migration job. Before that, ensure that you have read the instructions described in Performing a Check Before Using a Real-Time Job and completed all the preparations.

  1. Create a real-time migration job by following the instructions in Creating a Real-Time Migration Job and go to the job configuration page.
  2. Select the data connection type. Select PostgreSQL for Source and DWS for Destination.

    Figure 1 Selecting the data connection type

  3. Select a job type. The default migration type is Real-time. The migration scenarios include Entire DB and Database/Table shard.

    Figure 2 Setting the migration job type

    For details about synchronization scenarios, see Synchronization Scenarios.

  4. Configure network resources. Select the created PostgreSQL and DWS data connections and the migration resource group for which the network connection has been configured.

    Figure 3 Selecting data connections and a migration resource group

    If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.

    If no migration resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.

  5. Check the network connectivity. After the data connections and migration resource group are configured, verify that the migration resource group can connect to both the source and destination data sources.

  6. Configure source parameters.

    Select the databases and tables to be synchronized based on the following table.

    Table 5 Selecting the databases and tables to be synchronized

    Synchronization Scenario

    Configuration Method

    Entire DB

    Select the PostgreSQL databases and tables to be migrated.
    Figure 4 Selecting databases and tables

    Both databases and tables can be customized. You can select one database and one table, or multiple databases and tables.

    Database/Table shard

    Add a logical table.
    • Logical Table Name: Enter the name of the table to be written to DWS.
    • Source Database Filter: Enter a regular expression to match the database shards in the source PostgreSQL instance whose data will be written to the destination DWS aggregation table.
    • Source Table Filter: Enter a regular expression to match the table shards in the source database shards whose data will be written to the destination DWS aggregation table.
    Figure 5 Adding a logical table

    You can click Preview in the Operation column to preview an added logical table. The more source tables a logical table maps to, the longer the preview takes.

    Figure 6 Previewing the logical table

  7. Configure destination parameters.

    • Set Database and Table Matching Policy.

      For details about the matching policy between source and destination databases and tables in each synchronization scenario, see the following table.

      Table 6 Database and table matching policy

      Synchronization Scenario

      Configuration Method

      Entire DB

      • Schema Matching Policy
        • Same name as the source database: Data will be synchronized to the DWS schema with the same name as the source PostgreSQL database.
        • Custom: Data will be synchronized to the DWS schema you specify.
      • Table Matching Policy
        • Same name as the source table: Data will be synchronized to the DWS table with the same name as the source PostgreSQL table.
        • Custom: Data will be synchronized to the DWS table you specify.
          Figure 7 Database and table matching policy in the entire database migration scenario
          NOTE:

          When you customize a matching policy, you can use the built-in variables #{source_db_name} and #{source_table_name} to reference the source database name and table name. The table matching policy must contain #{source_table_name}. For example, with a custom table matching policy of dws_#{source_table_name}, data from source table orders is written to destination table dws_orders.

      Database/Table shard

      • Destination Database Name: Data will be synchronized to the specified DWS schema.
      • Table Matching Policy: By default, the value is the same as the name of the logical table entered in the source configuration.
        Figure 8 Database and table matching policy in the sharding scenario
    • Configure DWS parameters.

      For details, see the following table.

      Figure 9 DWS parameters
      Table 7 DWS parameters

      • Write Mode (default: UPSERT)
        • UPSERT MODE: batch update
        • COPY MODE: DWS-dedicated high-performance batch import
      • Maximum Data Volume for Batch Write (default: 50000; unit: count)
        Number of data records written to DWS in a batch. You can adjust the value based on the table data size and job memory usage.
      • Scheduled Batch Write Interval (default: 3; unit: second)
        Interval at which data is written to DWS.
      • Advanced Settings
        Some advanced functions can be configured using parameters. For details, see DWS advanced parameters (Table 8).

      Table 8 DWS advanced parameters

      • sink.buffer-flush.max-size (type: int; default: 512; unit: MB)
        Maximum amount of data written to DWS in each batch. You can adjust the value based on the memory and data size configured for the job.
      • sink.keyby.enable (type: boolean; default: true)
        Whether to enable data distribution. If this function is enabled in multi-concurrency scenarios, data can be distributed to different processes based on specific rules and written to the destination, which improves the write performance.
      • sink.keyby.mode (type: string; default: table)
        Data distribution mode. The following modes are available:
        • pk: Data is distributed by primary key value.
        • table: Data is distributed by table name.
        NOTE:
        • In multi-concurrency scenarios, if DDL is enabled, data can be distributed only by table name. Otherwise, data may be inconsistent.
        • If there is no DDL, you can select pk, which improves the write performance in multi-concurrency scenarios.
      • sink.field.name.case-sensitive (type: boolean; default: true)
        Whether to enable case sensitivity for data synchronization. If this function is enabled, database names, table names, and field names are case sensitive during data synchronization.
      • sink.verify.column-number (type: boolean; default: false)
        Whether to verify the number of data columns. By default, the link synchronizes data in the same-name mapping mode and does not check whether all columns are synchronized.
        If this function is enabled and the number of columns at the source differs from that at the destination, the system determines that the data is inconsistent, and the job becomes abnormal.
      • sink.server.timezone (type: string; default: local time zone)
        Session time zone specified for connecting to the destination database. The standard time zone format is supported, for example, UTC+08:00.
      • logical.delete.enabled (type: boolean; default: false)
        Whether to enable logical deletion. If this function is enabled, the destination must contain the deletion flag column. When data is deleted from the source database, the corresponding data in the destination database is not deleted. Instead, the deletion flag column is set to true, indicating that the record has been deleted from the source.
      • logical.delete.column (type: string; default: logical_is_deleted)
        Name of the logical deletion column. The default value is logical_is_deleted. You can customize the value.

  8. Refresh and check the mapping between the source and destination tables. In addition, you can modify table attributes, add additional fields, and use the automatic table creation capability to create tables in the destination DWS database.

    Figure 10 Mapping between source and destination tables
    • Edit additional fields: Click Additional Field in the Operation column to add custom fields to the destination DWS table. For a new table, you can add additional fields to the existing fields in the source table. You can customize the field name, select the field type, and enter the field value.
      • Field Name: name of the new field in the destination DWS table
      • Field Type: type of the new field in the destination DWS table
      • (Optional) Field Type Length: length of the new field type in the destination DWS table
      • Field Value: Value source of the new field in the destination DWS table
        Table 9 Additional field value obtaining mode

        Type

        Example

        Constant

        Digits, letters, and special characters are supported. Color emoticons may cause a job submission failure.

        Built-in variable

        • Source host IP address: source.host
        • Source schema name: source.schema
        • Source table name: source.table
        • Destination schema name: target.schema
        • Destination table name: target.table

        Source table field

        Any field in the source table

        Do not change the name of the source table field when the job is running. Otherwise, the job may be abnormal.

        UDF

        • substring(#col, pos[, len]): obtains a substring of a specified length from the source column #col. The substring range is [pos, pos+len).
        • date_format(#col, time_format[, src_tz, dst_tz]): formats the source column #col based on a specified time format. The time zone can be converted using src_tz and dst_tz.
        • now([tz]): obtains the current time in a specified time zone.
        • if(cond_exp, str1, str2): returns str1 if the condition expression cond_exp is met and returns str2 otherwise.
        • concat(#col[, #str, ...]): concatenates multiple parameters, including source columns and strings.
        • from_unixtime(#col[, time_format]): formats a Unix timestamp based on a specified time format.
        • unix_timestamp(#col[, precision, time_format]): converts a time into a Unix timestamp of a specified time format and precision. time_format must be the same as that in the source data.
    • Automatic table creation: Click Auto Table Creation to automatically create tables in the destination database based on the configured mapping policy. After the tables are created, Existing table is displayed for them.
      Figure 11 Automatic table creation
      • DataArts Migration can create tables automatically, but not databases or schemas. You need to manually create the destination databases and schemas before using this function.
      • For details about the field type mapping for automatic table creation, see Field Type Mapping.

  9. Configure task parameters.

    Table 10 Task parameters

    • Execution Memory (default: 8 GB)
      Memory allocated for job execution, which automatically changes with the number of CPU cores.
    • CPU Cores (default: 2)
      Value range: 2 to 32. For each CPU core added, 4 GB of execution memory and one concurrent request are automatically added.
    • Maximum Concurrent Requests (default: 1)
      Maximum number of jobs that can be concurrently executed. This parameter does not need to be configured and automatically changes with the number of CPU cores.
    • Auto Retry (default: No)
      Whether to enable automatic retry upon a job failure.
    • Maximum Retries (default: 1)
      This parameter is displayed when Auto Retry is set to Yes.
    • Retry Interval (Seconds) (default: 120s)
      This parameter is displayed when Auto Retry is set to Yes.
    • Write Dirty Data (default: No)
      Whether to record dirty data. By default, dirty data is not recorded. A large amount of dirty data slows down the synchronization task.
      • No: Dirty data is not recorded. This is the default value.
        Dirty data is not allowed. If dirty data is generated during the synchronization, the task fails and exits.
      • Yes: Dirty data is allowed, that is, dirty data does not affect task execution.
        When dirty data is allowed and its threshold is set:
        • If the generated dirty data is within the threshold, the synchronization task ignores the dirty data (that is, the dirty data is not written to the destination) and runs normally.
        • If the generated dirty data exceeds the threshold, the synchronization task fails and exits.
        NOTE:
        Criteria for determining dirty data: Dirty data is data that is meaningless to services, is in an invalid format, or is generated when the synchronization task encounters an error. If an exception occurs when a piece of data is written to the destination, that piece of data is dirty data. Therefore, any data that fails to be written is classified as dirty data.
        For example, if data of the VARCHAR type at the source is written to a destination column of the INT type, the data cannot be written due to an invalid conversion. When configuring a synchronization task, you can choose whether to write dirty data during the synchronization and set the maximum number of dirty data records allowed in a single partition to keep the task running. If the number of dirty data records exceeds the threshold, the task fails and exits.
    • Dirty Data Policy (default: Do not archive)
      This parameter is displayed when Write Dirty Data is set to Yes. The following policies are supported:
      • Do not archive: Dirty data is only recorded in job logs, but not stored.
      • Archive to OBS: Dirty data is stored in OBS and printed in job logs.
    • Write Dirty Data Link
      This parameter is displayed when Dirty Data Policy is set to Archive to OBS. Only links to OBS support dirty data writes.
    • Dirty Data Directory
      OBS directory to which dirty data will be written.
    • Dirty Data Threshold (default: 100)
      This parameter is displayed only when Write Dirty Data is set to Yes. You can set the dirty data threshold as required.
      NOTE:
      • The dirty data threshold takes effect for each concurrency. For example, if the threshold is 100 and the concurrency is 3, the maximum number of dirty data records allowed by the job is 300.
      • The value -1 indicates that the number of dirty data records is not limited.
    • Custom attributes
      You can add custom attributes to modify some job parameters and enable some advanced functions. For details, see Job Performance Optimization.

  10. Submit and run the job.

    After configuring the job, click Submit in the upper left corner to submit the job.

    Figure 12 Submitting the job

    After submitting the job, click Start on the job development page. In the displayed dialog box, set required parameters and click OK.

    Figure 13 Starting the job
    Table 11 Parameters for starting the job

    Parameter

    Description

    Offset Parameter

    • Incremental synchronization: Incremental data synchronization starts from a specified time point.
    • Full and incremental synchronization: All data is synchronized first, and then incremental data is synchronized in real time.

  11. Monitor the job.

    On the job development page, click Monitor to go to the Job Monitoring page. You can view the status and log of the job, and configure alarm rules for the job. For details, see Real-Time Migration Job O&M.

    Figure 14 Monitoring the job

Performance Optimization

If the synchronization speed is too slow, rectify the fault by referring to Job Performance Optimization.