From a Relational Database

When the source link of a job is one of the relational databases listed in Link to Relational Databases (also listed here), configure the source job parameters based on Table 1. If the source link is a sharded link, configure the parameters based on Table 2.

  • Data Warehouse Service
  • RDS for MySQL
  • RDS for SQL Server
  • RDS for PostgreSQL
  • Dameng database
  • FusionInsight LibrA
  • Derecho (GaussDB)
  • MySQL
  • PostgreSQL
  • Oracle
  • IBM Db2
  • Microsoft SQL Server
Table 1 Parameter description

Category

Parameter

Description

Example Value

Basic parameters

Use SQL Statement

Whether to use SQL statements to export data from the relational database

No

SQL Statement

When Use SQL Statement is set to Yes, enter an SQL statement here. CDM exports data based on the SQL statement.

select id,name from sqoop.user;

Schema/Tablespace

Name of the schema or tablespace from which data will be extracted. This parameter is displayed when Use SQL Statement is set to No. Click the icon next to the text box to open the selection page, or directly enter a schema or tablespace name.

If the desired schema or tablespace is not displayed, check whether the login account has the permissions required to query metadata.

NOTE:
The parameter value can contain wildcard characters (*), which are used to export all databases whose names start with a certain prefix or end with a certain suffix. The examples are as follows:
  • SCHEMA* indicates that all databases whose names start with SCHEMA are exported.
  • *SCHEMA indicates that all databases whose names end with SCHEMA are exported.
  • *SCHEMA* indicates that all databases whose names contain SCHEMA are exported.

SCHEMA_E
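The wildcard semantics above can be sketched in Python. In this sketch only the asterisk is treated as a wildcard, matching any sequence of characters; the schema names are hypothetical and serve only to illustrate the three patterns.

```python
import fnmatch

# Hypothetical schema names used only for illustration.
schemas = ["SCHEMA_A", "SCHEMA_B", "MY_SCHEMA", "MY_SCHEMA_X", "OTHER"]

def match(pattern, names):
    """Return the names selected by a wildcard pattern in which
    * matches any sequence of characters."""
    return [n for n in names if fnmatch.fnmatchcase(n, pattern)]

print(match("SCHEMA*", schemas))   # ['SCHEMA_A', 'SCHEMA_B']
print(match("*SCHEMA", schemas))   # ['MY_SCHEMA']
print(match("*SCHEMA*", schemas))  # ['SCHEMA_A', 'SCHEMA_B', 'MY_SCHEMA', 'MY_SCHEMA_X']
```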

Table Name

Name of the table from which data will be extracted. This parameter is displayed when Use SQL Statement is set to No. Click the icon next to the text box to open the selection page, or directly enter a table name.

If the desired table is not displayed, confirm that the table exists or that the login account has the permissions required to query metadata.

This parameter can be configured as a macro variable of date and time, and a table name can contain multiple macro variables. When the macro variable of date and time works with a scheduled job, incremental data can be synchronized periodically. For details, see Incremental Synchronization Using the Macro Variables of Date and Time.

NOTE:
The table name can contain wildcard characters (*), which are used to export all tables whose names start with a certain prefix or end with a certain suffix. The number and types of fields in the matched tables must be the same. The examples are as follows:
  • table* indicates that all tables whose names start with table are exported.
  • *table indicates that all tables whose names end with table are exported.
  • *table* indicates that all tables whose names contain table are exported.

table

Advanced attributes

Partition Column

This parameter is displayed when Use SQL Statement is set to No. It specifies the field used to split data during extraction. CDM splits the job into multiple tasks based on this field and executes the tasks concurrently. Choose a field whose values are evenly distributed, such as a sequential number field.

Click the icon next to the text box to go to the page for selecting a field or directly enter a field.

id

Where Clause

WHERE clause used to specify the data extraction range. This parameter is displayed when Use SQL Statement is set to No. If this parameter is not set, the entire table is extracted.

This parameter can be configured as a macro variable of date and time to extract data generated at a specific date. For details, see WHERE Clause.

DS='${dateformat(yyyy-MM-dd,-1,DAY)}'
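As a rough illustration of what the ${dateformat(yyyy-MM-dd,-1,DAY)} macro resolves to (shift the job's run time by the given offset, then format it), here is a minimal Python sketch. The actual macro is evaluated by CDM itself and supports more units and pattern letters than shown here.

```python
from datetime import datetime, timedelta

def dateformat(pattern, offset, unit, now=None):
    """Sketch of the ${dateformat(pattern, offset, unit)} macro:
    shift the current time by `offset` units, then format it.
    Only the DAY unit and the yyyy/MM/dd pattern letters are handled."""
    now = now or datetime.now()
    if unit != "DAY":
        raise NotImplementedError("only DAY is sketched here")
    shifted = now + timedelta(days=offset)
    strftime_pattern = (pattern.replace("yyyy", "%Y")
                               .replace("MM", "%m")
                               .replace("dd", "%d"))
    return shifted.strftime(strftime_pattern)

# For a job run on 2024-03-15, DS='${dateformat(yyyy-MM-dd,-1,DAY)}'
# resolves to yesterday's date:
print(dateformat("yyyy-MM-dd", -1, "DAY", datetime(2024, 3, 15)))  # 2024-03-14
```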

Null in Partition Column

Whether the partition column can contain null values

Yes

Extract by Partition

When data is exported from an Oracle database, it can be extracted from each partition of a partitioned table. If this function is enabled, you can set Table Partition to specify the Oracle table partitions from which data is extracted.
  • This function is not available for non-partitioned tables.
  • The database user must have the SELECT permission on the dba_tab_partitions and dba_tab_subpartitions system views.

No

Table Partition

Oracle table partition from which data is migrated. Separate multiple partitions with ampersands (&). If you do not set this parameter, all partitions will be migrated.

If there is a subpartition, enter the partition in the Partition.Subpartition format, for example, P2.SUBP1.

P0&P1&P2.SUBP1&P2.SUBP3
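The Partition.Subpartition syntax above can be parsed as follows. This is an illustrative Python sketch of the value format only, not CDM code.

```python
def parse_table_partition(spec):
    """Parse a Table Partition value: partitions are separated by '&',
    and a subpartition is written as Partition.Subpartition."""
    parsed = []
    for entry in spec.split("&"):
        partition, _, subpartition = entry.partition(".")
        parsed.append((partition, subpartition or None))
    return parsed

print(parse_table_partition("P0&P1&P2.SUBP1&P2.SUBP3"))
# [('P0', None), ('P1', None), ('P2', 'SUBP1'), ('P2', 'SUBP3')]
```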

Split Job

If this parameter is set to Yes, the job is split into multiple subjobs based on the value of Job Split Field, and the subjobs are executed concurrently.

Yes

Job Split Field

Used to split a job into multiple subjobs for concurrent execution.

-

Minimum Split Field Value

Specifies the minimum value of Job Split Field during data extraction.

-

Maximum Split Field Value

Specifies the maximum value of Job Split Field during data extraction.

-

Number of Subjobs

Specifies the number of subjobs split from a job based on the data range specified by the minimum and maximum values of Job Split Field.

-
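The interplay of the split-field minimum, maximum, and subjob count can be sketched as follows: the value range of Job Split Field is divided into contiguous sub-ranges, one per subjob. This is an illustrative sketch of the splitting idea, not CDM's internal algorithm.

```python
def split_ranges(min_value, max_value, subjobs):
    """Divide the closed range [min_value, max_value] of the split field
    into `subjobs` contiguous ranges; each range would back one
    concurrently executed subjob."""
    step = (max_value - min_value + 1) // subjobs
    ranges, low = [], min_value
    for i in range(subjobs):
        high = max_value if i == subjobs - 1 else low + step - 1
        ranges.append((low, high))
        low = high + 1
    return ranges

# Four subjobs over ids 1..1000, i.e. WHERE id BETWEEN 1 AND 250, and so on.
print(split_ranges(1, 1000, 4))  # [(1, 250), (251, 500), (501, 750), (751, 1000)]
```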

  • When an Oracle database is the migration source, if Partition Column or Extract by Partition is not configured, CDM automatically uses ROWID to partition data.
  • In a migration from MySQL to DWS, the constraints on the incremental data migration function in MySQL Binlog mode are as follows:
    • A single cluster supports only one incremental migration job in MySQL Binlog mode in the current version.
    • In the current version, you are not allowed to delete or update more than 10,000 data records at a time.
    • Entire DB migration is not supported.
    • Data Definition Language (DDL) operations are not supported.
    • Event migration is not supported.
    • If you set Migrate Incremental Data to Yes, binlog_format in the source MySQL database must be set to ROW.
    • If you set Migrate Incremental Data to Yes and binlog file ID disorder occurs on the source MySQL instance due to cross-machine migration or rebuilding during incremental data migration, incremental data may be lost.
    • If a primary key exists in the destination table and incremental data is generated during the restart of the CDM cluster or full migration, duplicate data may exist in the primary key. As a result, the migration fails.
    • If the destination DWS database is restarted, the migration will fail. In this case, restart the CDM cluster and the migration job.
  • The recommended MySQL configuration is as follows:
    # Enable the binlog function.
    log-bin=mysql-bin
    # Set the binlog format to ROW.
    binlog-format=ROW
    # Enable GTID mode (recommended for MySQL 5.6.10 or later).
    gtid-mode=ON
    enforce_gtid_consistency=ON
Table 2 Parameter description

Category

Parameter

Description

Example Value

Basic parameters

Schema/Tablespace

Name of the schema or tablespace from which data is to be extracted. Click the icon next to the text box to open the selection page, or directly enter a schema or tablespace name. For a sharded link job, the tablespace of the first backend link is displayed by default.

If the desired schema or tablespace is not displayed, check whether the login account has the permissions required to query metadata.

NOTE:
The parameter value can contain wildcard characters (*), which are used to export all databases whose names start with a certain prefix or end with a certain suffix. The examples are as follows:
  • SCHEMA* indicates that all databases whose names start with SCHEMA are exported.
  • *SCHEMA indicates that all databases whose names end with SCHEMA are exported.
  • *SCHEMA* indicates that all databases whose names contain SCHEMA are exported.

SCHEMA_E

Table Name

Name of the table from which data is to be extracted. Click the icon next to the text box to open the selection page, or directly enter a table name.

If the desired table is not displayed, confirm that the table exists or that the login account has the permissions required to query metadata.

This parameter can be configured as a macro variable of date and time, and a table name can contain multiple macro variables. When the macro variable of date and time works with a scheduled job, incremental data can be synchronized periodically. For details, see Incremental Synchronization Using the Macro Variables of Date and Time.

NOTE:
The table name can contain wildcard characters (*), which are used to export all tables whose names start with a certain prefix or end with a certain suffix. The number and types of fields in the matched tables must be the same. The examples are as follows:
  • table* indicates that all tables whose names start with table are exported.
  • *table indicates that all tables whose names end with table are exported.
  • *table* indicates that all tables whose names contain table are exported.

table

Advanced Attributes

Where Clause

Specifies the data extraction range. If this parameter is not set, the entire table is extracted.

This parameter can be configured as a macro variable of date and time to extract data generated at a specific date. For details, see WHERE Clause.

DS='${dateformat(yyyy-MM-dd,-1,DAY)}'

  • If Source Link Name is set to a backend link of a sharded link, the job runs as a common MySQL job.
  • When creating a job whose source is a sharded link, you can add a custom field to the source fields during field mapping, for example with the sample value ${custom(host)}. After data from multiple tables across databases is migrated into the same destination table, this field identifies the source of each row. The following sample values are supported:
    • ${custom(host)}
    • ${custom(database)}
    • ${custom(fromLinkName)}
    • ${custom(schemaName)}
    • ${custom(tableName)}
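To illustrate what these custom fields add, the sketch below annotates rows from two hypothetical shards with their source host, database, and table, mimicking the effect of ${custom(host)}, ${custom(database)}, and ${custom(tableName)}. CDM itself resolves the macros during migration; the host and table names here are invented for the example.

```python
def annotate(rows, host, database, table_name):
    """Append source-identifying columns to each row, mimicking the
    effect of the ${custom(host)}, ${custom(database)} and
    ${custom(tableName)} sample values."""
    return [dict(row, host=host, database=database, tableName=table_name)
            for row in rows]

# Rows with the same id coming from two hypothetical shards remain
# distinguishable after being merged into one destination table.
merged = (annotate([{"id": 1}], "10.0.0.1", "db_0", "user_0")
          + annotate([{"id": 1}], "10.0.0.2", "db_1", "user_1"))
print(merged[0]["host"], merged[1]["host"])  # 10.0.0.1 10.0.0.2
```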