To Hive
If the destination link of a job is a Hive link, configure the destination job parameters based on Table 1.
Table 1 Destination job parameters for Hive

| Parameter | Description | Example Value |
| --- | --- | --- |
| Database Name | Name of the destination database. Click the icon next to the text box to open the dialog box for selecting a database. | default |
| Table Name | Name of the destination table. Click the icon next to the text box to open the dialog box for selecting a table. This parameter can be set to a macro variable of date and time, and a path name can contain multiple macro variables. When a macro variable of date and time is used together with a scheduled job, incremental data can be synchronized periodically. For details, see Incremental Synchronization Using the Macro Variables of Date and Time. NOTE: If you have configured a macro variable of date and time and schedule the CDM job through DataArts Studio DataArts Factory, the system replaces the macro variable with (Planned start time of the data development job – Offset) rather than (Actual start time of the CDM job – Offset). | TBL_X |
| Auto Table Creation | Whether CDM automatically creates the destination table. This parameter is displayed only when the source is a relational database. | Non-auto creation |
| Clear Data Before Import | Whether the data in the destination table is cleared before the data is imported. | Yes |
| Partition to Clear | This parameter is available when Clear Data Before Import is set to Yes. The data in the partitions you specify here is cleared before the import. | Single partition: year=2020,location=sun; multiple partitions: ['year=2020,location=sun', 'year=2021,location=earth'] |
| Executing Analyze Statements | Whether the ANALYZE TABLE statement is executed asynchronously after all data is written, to accelerate queries on the Hive table (a sketch of the statement follows this table). NOTE: This parameter applies only to the migration of a single table. | Yes |
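As a minimal sketch of the statistics statement that Executing Analyze Statements triggers, assuming the standard HiveQL form and using the placeholder table name TBL_X:

    -- Collect table-level statistics so the Hive optimizer can plan queries against TBL_X faster.
    ANALYZE TABLE TBL_X COMPUTE STATISTICS;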
- When Hive serves as the destination, the automatically created table uses the ORC storage format.
- Due to file format restrictions, complex data can be written only in ORC or Parquet format.
- If the source Hive table contains both array and map data, the destination table format can only be an ORC or Parquet complex type (an ORC example is sketched after this list). If the destination table format is RC or TEXT, the source data will be processed and can still be written successfully.
- Because the map type is an unordered data structure, the data type may change after migration.
- If Hive serves as the migration destination and the storage format is Textfile, delimiters must be explicitly specified in the statement for creating Hive tables. The following is an example:
    CREATE TABLE csv_tbl(
      smallint_value smallint,
      tinyint_value tinyint,
      int_value int,
      bigint_value bigint,
      float_value float,
      double_value double,
      decimal_value decimal(9, 7),
      timestamp_value timestamp,
      date_value date,
      varchar_value varchar(100),
      string_value string,
      char_value char(20),
      boolean_value boolean,
      binary_value binary,
      varchar_null varchar(100),
      string_null string,
      char_null char(20),
      int_null int
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    WITH SERDEPROPERTIES (
      "separatorChar" = "\t",
      "quoteChar" = "'",
      "escapeChar" = "\\"
    )
    STORED AS TEXTFILE;
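As a hedged illustration of the complex-type note above, a destination table that holds array and map columns could be declared in ORC format as follows (the table and column names are placeholders, not part of the original example):

    -- Placeholder table: array and map columns require a complex-type-capable
    -- storage format such as ORC at the Hive destination.
    CREATE TABLE complex_tbl(
      id int,
      tags array<string>,
      attributes map<string,string>
    )
    STORED AS ORC;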