From a Relational Database
Sample JSON File
"from-config-values": { "configs": [ { "inputs": [ { "name": "fromJobConfig.useSql", "value": "false" }, { "name": "fromJobConfig.schemaName", "value": "rf_database" }, { "name": "fromJobConfig.tableName", "value": "rf_from" }, { "name": "fromJobConfig.columnList", "value": "AA&BB" }, { "name": "fromJobConfig.incrMigration", "value": "false" }, { "name": "fromJobConfig.createOutTable", "value": "false" } ], "name": "fromJobConfig" } ] }
Parameter Description
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| fromJobConfig.useSql | Yes | Boolean | Whether to use a customized SQL statement to export data from the relational database. |
| fromJobConfig.sql | No | String | Customized SQL statement. CDM executes this statement to export the data. |
| fromJobConfig.schemaName | Yes | String | Database schema or tablespace, for example, public. The value can contain the wildcard character (*) to export all schemas whose names start with a given prefix or end with a given suffix. |
| fromJobConfig.tableName | Yes | String | Table name, for example, TBL_EXAMPLE. The value can contain the wildcard character (*) to export all tables whose names start with a given prefix or end with a given suffix; the matched tables must have the same number and types of fields. |
| fromJobConfig.whereClause | No | String | WHERE clause that specifies the data to be extracted, for example, age > 18 and age <= 60. If no WHERE clause is configured, the entire table is extracted. |
| fromJobConfig.columnList | No | String | List of fields to be extracted, separated by ampersands (&), for example, id&gid&name. |
| fromJobConfig.partitionColumn | No | String | Field by which the job is split into multiple sub-jobs that are executed concurrently, for example, id. |
| fromJobConfig.usePartition | No | Boolean | Whether to extract data partition by partition when exporting a partitioned table from an Oracle database. When this function is enabled, use fromJobConfig.partitionList to specify the Oracle table partitions. Non-partitioned tables are not supported. |
| fromJobConfig.partitionList | No | String | Oracle table partitions to be migrated, separated by ampersands (&). If this parameter is not set, all partitions are migrated. |
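As an alternative to the schema/table/column parameters, the table above shows that extraction can be driven by a customized statement: set fromJobConfig.useSql to true and supply the query in fromJobConfig.sql. The fragment below is a hypothetical sketch; the query text and object names are illustrative only. Note that, as in the documented sample, Boolean values are passed as strings.

```json
"from-config-values": {
  "configs": [
    {
      "name": "fromJobConfig",
      "inputs": [
        { "name": "fromJobConfig.useSql", "value": "true" },
        { "name": "fromJobConfig.sql", "value": "SELECT id, name FROM rf_from WHERE age > 18 AND age <= 60" }
      ]
    }
  ]
}
```

When extracting through the table-based path instead, setting fromJobConfig.partitionColumn to a suitable field such as id lets CDM split the extraction into concurrent sub-jobs.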