
Configuring a Job for Migrating Data from a Centralized/Distributed GaussDB to GaussDB(DWS)

This section describes how to configure a job for migrating data from a centralized/distributed GaussDB to GaussDB(DWS).

Supported Source and Destination Database Versions

Table 1 Supported database versions

Source Database: Centralized/Distributed GaussDB (kernel engine versions 505.1.0, 505.1.0.SPC0100, and 505.2.0)

Destination Database: GaussDB(DWS) cluster of version 8.1.3, 8.2.0, or later (GaussDB(DWS) 3.0 is not supported)

Database Account Permissions

Before you use DataArts Migration for data synchronization, ensure that the source and destination database accounts meet the requirements in the following table. The required account permissions vary depending on the synchronization task type.

Table 2 Database account permissions

Type

Required Permissions

Source database connection account

  1. REPLICATION permission, permissions inherited from the gs_role_replication built-in role, database CONNECT permission, schema USAGE permission, table SELECT permission, sequence SELECT permission, and the permission to execute the pg_ls_waldir() function. For details, see How Do I Add Additional Permissions for PostgreSQL and GaussDB Data Sources?
  2. Remote connection permission

    To add the remote connection permission, perform the following steps:

    • Centralized: Add host all <user> 0.0.0.0/0 sha256 and host replication <user> 0.0.0.0/0 sha256 to the gs_hba.conf configuration file of the source database.
    • Distributed: Add host all <user> 0.0.0.0/0 sha256 to the gs_hba.conf configuration file of the CN node of the source database. Add host all <user> 0.0.0.0/0 sha256 and host replication <user> 0.0.0.0/0 sha256 to the gs_hba.conf configuration file of the DN node of the source database.
    • Run select pg_reload_conf(); in the source database as the system administrator, or restart the DB instance to apply the changes.
  3. Ensure that the corresponding connection ports are enabled in the security group and firewall policies.

    In the security group and firewall policies, both the common connection port and the replication connection port must be enabled for database connections.

    Centralized: The replication connection port number is the common connection port number plus 1.

    Distributed: The replication connection port number is the CN/DN connection port number plus 1. You can run the select * from pgxc_node statement in the source database to obtain the DN connection port.
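    For a distributed instance, the following query sketch shows one way to list the CN and DN ports so that the matching replication ports (port number plus 1) can be opened. The column names reflect the usual pgxc_node layout and may vary with the GaussDB version.

      -- List the node names, types, hosts, and ports of the source instance (sketch)
      SELECT node_name, node_type, node_host, node_port FROM pgxc_node;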

Destination database connection account

The account must have the following permissions for each table in the destination database: INSERT, SELECT, UPDATE, DELETE, CONNECT, and CREATE.

  • You are advised to create independent database accounts for DataArts Migration task connections to prevent task failures caused by password changes.
  • After changing the account password for the source or destination database, update the connection information in Management Center as soon as possible. Otherwise, automatic retries after a task failure may lock the database account.
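The following SQL sketch illustrates how the permissions listed above might be granted. The account, database, and schema names (migration_user, src_db, src_schema, dest_db, and dest_schema) are placeholders, and the exact syntax can vary with the GaussDB and GaussDB(DWS) versions, so verify it against the linked FAQ before use.

  -- Source GaussDB: replication, connect, usage, and query permissions (sketch)
  ALTER USER migration_user REPLICATION;
  GRANT gs_role_replication TO migration_user;
  GRANT CONNECT ON DATABASE src_db TO migration_user;
  GRANT USAGE ON SCHEMA src_schema TO migration_user;
  GRANT SELECT ON ALL TABLES IN SCHEMA src_schema TO migration_user;
  GRANT SELECT ON ALL SEQUENCES IN SCHEMA src_schema TO migration_user;
  GRANT EXECUTE ON FUNCTION pg_ls_waldir() TO migration_user;

  -- Destination GaussDB(DWS): connect, create, and DML permissions (sketch)
  GRANT CONNECT ON DATABASE dest_db TO migration_user;
  GRANT CREATE, USAGE ON SCHEMA dest_schema TO migration_user;
  GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA dest_schema TO migration_user;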

Supported Synchronization Objects

The following table lists the objects that can be synchronized using different links in DataArts Migration.

Table 3 Synchronization objects

Type

Note

Synchronization objects

  • DML operations INSERT, UPDATE, and DELETE can be synchronized.
  • DDL operations cannot be synchronized.
  • Only primary key tables can be synchronized.
  • Only partitioned tables whose partition keys are primary keys can be synchronized.
  • Adding new tables cannot be suspended.
  • Views, foreign keys, stored procedures, triggers, functions, events, virtual columns, unique constraints, and unique indexes cannot be synchronized.
  • Unlogged tables, temporary tables, system schemas, and system tables cannot be synchronized.
  • Table structures, common indexes, constraints (primary key, null, and non-null), and comments can be synchronized during automatic table creation.

Important Notes

In addition to the constraints on supported data sources and versions, connection account permissions, and synchronization objects, you also need to pay attention to the notes in the following table.

Table 4 Important notes

Type

Restriction

Database

  • Source database parameter requirements (you can check the current values with the query sketch after this list):
    • Set the password_encryption_type parameter to 1. After changing this parameter, reset the account password for the change to take effect.
    • Set the wal_level parameter to logical.
    • Set max_replication_slots to a value greater than the number of replication slots already in use. You can determine the value based on the number of real-time jobs to be created.
  • The names of the source databases, schemas, and tables to be synchronized cannot contain /<.>\\'`|\? or !. Column names cannot be CTID, XMIN, CMIN, XMAX, CMAX, TABLEOID, XC_NODE_ID, or TID, which are forbidden by GaussDB(DWS). You are advised to use common characters to avoid task failures.
  • The name of an object in the destination database must contain 1 to 63 characters, start with a letter or underscore (_), and can contain letters, digits, underscores (_), and dollar signs ($).
  • Triggers on partitioned tables in the source database cannot be set to disable.
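  The queries below are a minimal sketch for checking the source parameter requirements listed above. They use the standard GaussDB/PostgreSQL system interfaces; the parameters themselves are changed in the instance configuration (for example, on the management console), not through these queries.

    -- Check the current values of the required parameters (sketch)
    SELECT name, setting FROM pg_settings
    WHERE name IN ('wal_level', 'max_replication_slots', 'password_encryption_type');

    -- Count the replication slots already in use
    SELECT count(*) FROM pg_replication_slots;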

Usage

General:

  • During real-time synchronization, the IP addresses, ports, accounts, and passwords cannot be changed.
  • When a job is started for the first time, the start point cannot be specified. By default, logs are retained for six hours. That is, after the job has been started for the first time and has run for x hours, you can stop it and specify a start point within the previous x hours (x cannot exceed the log retention period). You can change the log retention period using the slot.flush.delay.time parameter, which you can add by clicking Add Custom Attribute in Configure Task. Valid values of slot.flush.delay.time include 1d, 1day, 2days, 1h, 1hour, 2hours, 1m, 1minute, and 2minutes.
  • By clicking Add Custom Attribute in Configure Task, you can add the custom.slot.name parameter to specify an inactive replication slot for incremental synchronization.
  • Logical replication slots cannot be automatically deleted after a job is stopped.

Full synchronization phase:

During task startup and full data synchronization, do not perform DDL operations on the source database. Otherwise, the task may fail.

Incremental synchronization phase:

  • Do not change the primary key or unique key (if the primary key does not exist) of the source database table. Otherwise, incremental data may be inconsistent or the task may fail.
  • Do not modify the replica identity attribute of tables in the source database. Otherwise, incremental data may be inconsistent or the task may fail.
  • If the number of replication slots in the GaussDB data source has reached the upper limit, new jobs cannot be executed. You can increase the upper limit by setting max_replication_slots, or manually delete unused replication slots (automatic deletion is not supported). For details about how to delete replication slots, see How Do I Manually Delete Replication Slots from a PostgreSQL Data Source?
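  The sketch below shows how the replication slots in use might be inspected and how an unused, inactive slot could be dropped manually. The functions are the standard PostgreSQL-family interfaces that GaussDB also provides; the slot name is a placeholder, and the linked FAQ remains the authoritative procedure.

    -- List existing replication slots and whether they are active (sketch)
    SELECT slot_name, plugin, active FROM pg_replication_slots;

    -- Drop a slot only if it is inactive and no longer needed by any job
    SELECT pg_drop_replication_slot('unused_slot_name');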

Troubleshooting:

If any problem occurs during task creation, startup, full synchronization, incremental synchronization, or completion, rectify the fault by referring to FAQs.

Other

  • The block_size value of the destination database must be greater than that of the source database.
  • Before starting the job, ensure that no long-running transaction is open in the source database. A long-running transaction blocks the creation of the logical replication slot and causes the task to fail (see the query sketch after this list).
  • After the job is started, the active/standby switchover is not allowed for the source database.
  • Tables in the destination database can contain more columns than those in the source database. However, the following failures must be avoided:
    • If the extra columns in the destination database cannot be null and have no default values, newly inserted records synchronized from the source leave those columns empty, which violates the destination constraints and causes the task to fail.
    • If the extra columns in the destination database are fixed at a default value and carry a unique constraint, newly inserted records synchronized from the source all receive the same default value, which violates the unique constraint and causes the task to fail.
  • During automatic table creation, the lengths of char, varchar, nvarchar, enum, and set columns in the source database are automatically increased (in bytes) in the destination GaussDB(DWS) database.
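  Because a long-running transaction blocks the creation of the logical replication slot, it helps to check for one before starting the job. The following query is only a sketch based on the standard pg_stat_activity view; the one-hour threshold is an arbitrary example.

    -- Find transactions that have been open for more than one hour (sketch)
    SELECT pid, usename, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND xact_start < now() - interval '1 hour'
    ORDER BY xact_start;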

Procedure

This section uses real-time synchronization from GaussDB to GaussDB(DWS) as an example to describe how to configure a real-time data migration job. Before that, ensure that you have read the instructions described in Check Before Use and completed all the preparations.

  1. Create a real-time migration job by following the instructions in Creating a Real-Time Migration Job and go to the job configuration page.
  2. Select the data connection type. Select GaussDB for Source and DWS for Destination.

    Figure 1 Selecting the data connection type

  3. Select a job type. The default migration type is Real-time. The migration scenario is Entire DB.

    Figure 2 Setting the migration job type

    For details about synchronization scenarios, see Synchronization Scenarios.

  4. Configure network resources. Select the created GaussDB and GaussDB(DWS) data connections and a migration resource group for which network connectivity has been configured.

    Figure 3 Selecting data connections and a migration resource group

    If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.

    If no migration resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.

  5. Check the network connectivity. After configuring the data connections and the migration resource group, verify the connectivity between the data sources and the migration resource group.

  6. Configure source parameters.

    Select the databases and tables to be synchronized based on the following table.

    Table 5 Selecting the databases and tables to be synchronized

    Synchronization Scenario

    Configuration Method

    Entire DB

    Select the GaussDB databases and tables to be migrated.
    Figure 4 Selecting databases and tables

    Both databases and tables can be customized. You can select one database and one table, or multiple databases and tables.

  7. Configure destination parameters.

    • Set Database and Table Matching Policy.

      For details about the matching policy between source and destination databases and tables in each synchronization scenario, see the following table.

      Table 6 Database and table matching policy

      Synchronization Scenario

      Configuration Method

      Entire DB

      • Schema Matching Policy
        • Same name as the source database: Data will be synchronized to the GaussDB(DWS) schema with the same name as the source GaussDB.
        • Custom: Data will be synchronized to the GaussDB(DWS) schema you specify.
      • Table Matching Policy
        • Same name as the source table: Data will be synchronized to the GaussDB(DWS) table with the same name as the source GaussDB table.
        • Custom: Data will be synchronized to the GaussDB(DWS) table you specify.
          Figure 5 Database and table matching policy in the entire database migration scenario
          NOTE:

          When you customize a matching policy, you can use the built-in variables #{source_db_name} and #{source_table_name} to reference the source database name and table name. The table matching policy must contain #{source_table_name}. For example, the custom table matching policy dws_#{source_table_name} maps a source table named orders to a destination table named dws_orders.

    • Configure GaussDB(DWS) parameters.

      For details, see the following table.

      Figure 6 GaussDB(DWS) parameters
      Table 7 GaussDB(DWS) parameters

      Write Mode (default: UPSERT MODE)
      • UPSERT MODE: batch update
      • COPY MODE: DWS-dedicated high-performance batch import

      Max. data volume for batch writing (default: 50,000 records)
      Number of data records written to GaussDB(DWS) in a batch. You can adjust the value based on the table data size and job memory usage.

      Scheduled Batch Write Interval (default: 3 seconds)
      Interval at which data is written to GaussDB(DWS).

      Advanced Settings
      Some advanced functions can be configured using parameters. For details, see GaussDB(DWS) advanced parameters (Table 8).

      Table 8 GaussDB(DWS) advanced parameters

      Parameter

      Type

      Default Value

      Unit

      Description

      sink.buffer-flush.max-size

      int

      512

      MB

      Maximum number of bytes in each batch of data written to GaussDB(DWS). You can adjust the value based on the memory and data size configured for the job.

      sink.keyby.enable

      boolean

      true

      N/A

      Whether to enable data distribution. If this function is enabled in multi-concurrency scenarios, data can be distributed to different processes based on specific rules and written to the destination, which improves the write performance.

      sink.keyby.mode

      string

      table

      N/A

      Data distribution mode. The following modes are available:

      • pk: Data is distributed by primary key value.
      • table: Data is distributed by table name.
        NOTE:
        • In multi-concurrency scenarios, if DDL is enabled, data can be distributed only by table name. Otherwise, data may be inconsistent.
        • If there is no DDL, you can select pk, which improves the write performance in multi-concurrency scenarios.

      sink.field.name.case-sensitive

      boolean

      true

      N/A

      Whether to enable case sensitivity for data synchronization. If this function is enabled, the database names, table names, and field names are case sensitive during data synchronization.

      sink.verify.column-number

      boolean

      false

      N/A

      Whether to verify the number of data columns. By default, data is synchronized from GaussDB to GaussDB(DWS) in the same-name mapping mode. The system does not check whether all columns are synchronized.

      If this function is enabled and the number of columns at the source is different from that at the destination, the system determines that data is inconsistent. As a result, the job is abnormal.

      sink.server.timezone

      string

      Local time zone

      N/A

      Session time zone specified for connecting to the destination database. The standard time zone format is supported, for example, UTC+08:00.

      logical.delete.enabled

      boolean

      false

      N/A

      Whether to enable logical deletion. If this function is enabled, the destination must contain the deletion flag column. When data is deleted from the source database, the corresponding data in the destination database will not be deleted. Instead, the deletion flag column is set to true, indicating that the data is not contained at the source.

      logical.delete.column

      string

      logical_is_deleted

      N/A

      Name of the logical deletion column. The default value is logical_is_deleted. You can customize the value.
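      To illustrate logical.delete.enabled and logical.delete.column: when logical deletion is enabled, the destination table must already contain the deletion flag column. The DDL below is only a sketch of what such a GaussDB(DWS) table might look like; the table name, columns, and distribution key are assumptions for illustration.

        -- Hypothetical destination table with a logical deletion flag column (sketch)
        CREATE TABLE dest_schema.orders (
          order_id            BIGINT PRIMARY KEY,
          order_status        VARCHAR(32),
          logical_is_deleted  BOOLEAN DEFAULT false  -- column named by logical.delete.column
        )
        DISTRIBUTE BY HASH(order_id);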

  8. Refresh and check the mapping between the source and destination tables. In addition, you can modify table attributes, add additional fields, and use the automatic table creation capability to create tables in the destination GaussDB(DWS) database.

    Figure 7 Mapping between source and destination tables
    • Edit additional fields: Click Additional Field in the Operation column to add custom fields to the destination GaussDB(DWS) table. For a new table, you can add additional fields to the existing fields in the source table. You can customize the field name, select the field type, and enter the field value.
      • Field Name: name of the new field in the destination GaussDB(DWS) table
      • Field Type: type of the new field in the destination GaussDB(DWS) table
      • (Optional) Field Type Length: length of the new field type in the destination GaussDB(DWS) table
      • Field Value: Value source of the new field in the destination GaussDB(DWS) table
        Table 9 Additional field value obtaining mode

        Type

        Example

        Constant

        Digits, letters, and special characters are supported. Emoji characters may cause the job submission to fail.

        Built-in variable

        • Source host IP address: source.host
        • Source schema name: source.schema
        • Source table name: source.table
        • Destination schema name: target.schema
        • Destination table name: target.table

        Source table field

        Any field in the source table

        Do not change the name of the source table field when the job is running. Otherwise, the job may be abnormal.

        UDF

        • substring(#col, pos[, len]): obtains a substring of a specified length from the value of the source column #col. The substring range is [pos, pos+len).
        • date_format(#col, time_format[, src_tz, dst_tz]): formats the value of the source column #col based on a specified time format. The time zone can be converted using src_tz and dst_tz.
        • now([tz]): obtains the current time in a specified time zone.
        • if(cond_exp, str1, str2): returns str1 if the condition expression cond_exp is met and returns str2 otherwise.
        • concat(#col[, #str, ...]): concatenates multiple parameters, including source columns and strings.
        • from_unixtime(#col[, time_format]): formats a Unix timestamp based on a specified time format.
        • unix_timestamp(#col[, precision, time_format]): converts a time into a Unix timestamp of a specified time format and precision. time_format must be the same as that in the source data.
    • Automatic table creation: Click Auto Table Creation to automatically create tables in the destination database based on the configured mapping policy. After the tables are created, Existing table is displayed for them.
      Figure 8 Automatic table creation
      • DataArts Migration can automatically create tables only; it does not create databases or schemas. You need to manually create databases and schemas at the destination before using this function (see the sketch after this list).
      • For details about the field type mapping for automatic table creation, see Field Type Mapping.
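      Since automatic table creation does not create databases or schemas, the destination database and schema must exist before you use this function. The statements below are a minimal GaussDB(DWS) sketch, assuming the placeholder names dest_db, dest_schema, and migration_user.

        -- Create the destination database (run while connected to an existing database), then connect to it
        CREATE DATABASE dest_db;

        -- Create the destination schema owned by the migration account
        CREATE SCHEMA dest_schema AUTHORIZATION migration_user;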

  9. Configure task parameters.

    Table 10 Task parameters

    Execution Memory (default: 8 GB)
    Memory allocated for job execution, which automatically changes with the number of CPU cores.

    CPU Cores (default: 2)
    Value range: 2 to 32. For each CPU core added, 4 GB of execution memory and one concurrency are automatically added.

    Maximum Concurrent Requests (default: 1)
    Maximum number of jobs that can be concurrently executed. This parameter does not need to be configured and automatically changes with the number of CPU cores.

    Auto Retry (default: No)
    Whether to enable automatic retry upon a job failure.

    Maximum Retries (default: 1)
    This parameter is displayed when Auto Retry is set to Yes.

    Retry Interval (Seconds) (default: 120s)
    This parameter is displayed when Auto Retry is set to Yes.

    Write Dirty Data (default: No)
    Whether to record dirty data. By default, dirty data is not recorded. If there is a large amount of dirty data, the synchronization speed of the task is affected.
    • No: Dirty data is not recorded. This is the default value.
      Dirty data is not allowed. If dirty data is generated during synchronization, the task fails and exits.
    • Yes: Dirty data is allowed, that is, dirty data does not affect task execution.
      When dirty data is allowed and a threshold is set:
      • If the amount of dirty data is within the threshold, the synchronization task ignores the dirty data (that is, the dirty data is not written to the destination) and runs normally.
      • If the amount of dirty data exceeds the threshold, the synchronization task fails and exits.
        NOTE:
        Criteria for determining dirty data: Dirty data is data that is meaningless to services, is in an invalid format, or causes an error during synchronization. If an exception occurs when a record is written to the destination, that record is classified as dirty data, so any data that fails to be written counts as dirty data.
        For example, if a VARCHAR value at the source is written to an INT column at the destination, the record cannot be written due to the invalid conversion and is treated as dirty data. When configuring a synchronization task, you can choose whether to write dirty data during synchronization and set the maximum number of error records allowed in a single partition so that the task can keep running. If the number of dirty data records exceeds the threshold, the task fails and exits.

    Dirty Data Policy (default: Do not archive)
    This parameter is displayed when Write Dirty Data is set to Yes. The following policies are supported:
    • Do not archive: Dirty data is only recorded in job logs, but not stored.
    • Archive to OBS: Dirty data is stored in OBS and printed in job logs.

    Write Dirty Data Link (default: -)
    This parameter is displayed when Dirty Data Policy is set to Archive to OBS. Only links to OBS support dirty data writes.

    Dirty Data Directory (default: -)
    OBS directory to which dirty data will be written.

    Dirty Data Threshold (default: 100)
    This parameter is displayed only when Write Dirty Data is set to Yes. You can set the dirty data threshold as required.
    NOTE:
    • The dirty data threshold takes effect for each concurrency. For example, if the threshold is 100 and the concurrency is 3, the maximum number of dirty data records allowed by the job is 300.
    • The value -1 indicates that the number of dirty data records is not limited.

    Custom attributes (default: -)
    You can add custom attributes to modify some job parameters and enable some advanced functions. For details, see Job Performance Optimization.

  10. Submit and run the job.

    After configuring the job, click Submit in the upper left corner to submit the job.

    Figure 9 Submitting the job

    After submitting the job, click Start on the job development page. In the displayed dialog box, set required parameters and click OK.

    Figure 10 Starting the job
    Table 11 Parameters for starting the job

    Parameter

    Description

    Synchronous Mode

    • Incremental Synchronization: Incremental data synchronization starts from a specified time point.
    • Full and incremental synchronization: All data is synchronized first, and then incremental data is synchronized in real time.

  11. Monitor the job.

    On the job development page, click Monitor to go to the Job Monitoring page. You can view the status and log of the job, and configure alarm rules for the job. For details, see Real-Time Migration Job O&M.

    Figure 11 Monitoring the job

Performance Optimization

If the synchronization speed is too slow, rectify the fault by referring to Job Performance Optimization.