Step 3: DataArts Migration

Updated on 2024-12-12 GMT+08:00

This topic describes how to use DataArts Studio DataArts Migration to migrate source data to the cloud in batches.

Creating a Cluster

A DataArts Migration (CDM) cluster migrates source data to the cloud and integrates it into the data lake. It provides wizard-based configuration and management and can integrate data from a single table or an entire database, either incrementally or periodically. The DataArts Studio basic package includes one CDM cluster. If that cluster cannot meet your requirements, you can buy a CDM incremental package.

For details about how to buy a CDM incremental package, see Buying a DataArts Migration Incremental Package.
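If you prefer checking cluster availability from a script rather than the console, the following minimal sketch lists CDM clusters over the REST API. The endpoint host, API path, project ID, and token handling shown here are assumptions for illustration; verify them against the CDM API Reference for your region.

    import requests

    # Placeholders -- replace with values for your account and region (assumptions).
    IAM_TOKEN = "<your-IAM-token>"          # obtained from IAM in advance
    PROJECT_ID = "<your-project-id>"
    CDM_ENDPOINT = "https://cdm.myregion.mycloud.com"  # placeholder region endpoint

    # List CDM clusters and print their names and states.
    resp = requests.get(
        f"{CDM_ENDPOINT}/v1.1/{PROJECT_ID}/clusters",
        headers={"X-Auth-Token": IAM_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    for cluster in resp.json().get("clusters", []):
        print(cluster.get("name"), cluster.get("status"))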

Creating Source and Destination Links for Data Migration

  1. Log in to the CDM console and choose Cluster Management in the left navigation pane.

    Alternatively, log in to the DataArts Studio console by following the instructions in Accessing the DataArts Studio Instance Console. On the DataArts Studio console, locate the target workspace and click DataArts Migration to access the CDM console.

    Figure 1 Cluster list
    NOTE:

    The Source column is displayed only when you access the DataArts Migration page from the DataArts Studio console.

  2. In the left navigation pane, choose Cluster Management. In the cluster list, locate the required cluster and click Job Management.

    Figure 2 Cluster management

  3. On the Job Management page, click Links.

    Figure 3 Links

  4. Create two links: one to OBS for reading the source data stored there, and one to MRS Hive for writing the data to the MRS Hive database.

    Click Create Link. On the page displayed, select Object Storage Service (OBS) and click Next. Then, set the link parameters described in Table 1 and click Save. (An optional connectivity check is sketched after Table 1.)
    Figure 4 Creating an OBS link
    Table 1 Parameter description

    Parameter

    Description

    Example Value

    Name

    Link name. Define it based on the data source type so that it is easy to tell what the link is for.

    obs_link

    OBS Endpoint

    An endpoint is the request address for calling an API. Endpoints vary depending on the service and region. To obtain the endpoint of an OBS bucket, go to the OBS console and click the bucket name to open its details page.

    NOTE:
    • If the CDM cluster and OBS bucket are not in the same region, the CDM cluster cannot access the OBS bucket.
    • Do not change the password or user when the job is running. If you do so, the password will not take effect immediately and the job will fail.

    obs.myregion.mycloud.com

    Port

    Data transmission port. The HTTPS port number is 443 and the HTTP port number is 80.

    443

    OBS Bucket Type

    Select a value from the drop-down list. Generally, select Object Storage.

    Object Storage

    AK

    The AK and SK are used to authenticate access to the OBS server.

    You need to create an access key for the current account and obtain an AK/SK pair.

    To obtain an access key, perform the following steps:
    1. Log in to the management console, move the cursor to the username in the upper right corner, and select My Credentials from the drop-down list.
    2. On the My Credentials page, choose Access Keys, and click Create Access Key. See Figure 5.
      Figure 5 Clicking Create Access Key
    3. Click OK and save the access key file as prompted. The access key file will be saved to your browser's configured download location. Open the credentials.csv file to view Access Key Id and Secret Access Key.
      NOTE:
      • Only two access keys can be added for each user.
      • For security purposes, the access key file is automatically downloaded only when the key is first generated; it cannot be obtained from the management console later. Keep the file secure.

    -

    SK

    -

    Link Attributes

    (Optional) Displayed when you click Show Advanced Attributes.

    You can click Add to add custom attributes for the link.

    Only connectionTimeout, socketTimeout, and idleConnectionTime are supported.

    The following are some examples:

    • socketTimeout: timeout interval for data transmission at the socket layer, in milliseconds
    • connectionTimeout: timeout interval for establishing an HTTP/HTTPS connection, in milliseconds

    -
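    Before saving the OBS link, you can optionally confirm that the AK/SK pair and endpoint actually reach the bucket. The following is a hedged sketch using the OBS Python SDK (pip install esdk-obs-python); the credentials.csv column names, endpoint, and bucket name follow the examples above and are assumptions to adjust for your environment.

      import csv
      from obs import ObsClient

      # Read the AK/SK pair from the downloaded access key file (column names assumed).
      with open("credentials.csv", newline="") as f:
          row = next(csv.DictReader(f))
          ak, sk = row["Access Key Id"], row["Secret Access Key"]

      # Endpoint from Table 1; bucket prepared in "Preparing a Data Source".
      client = ObsClient(access_key_id=ak, secret_access_key=sk,
                         server="https://obs.myregion.mycloud.com")
      resp = client.headBucket("fast-demo")
      print("bucket reachable" if resp.status < 300 else f"check failed: {resp.status}")
      client.close()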

    On the Links tab page, click Create Link again. On the page displayed, select MRS Hive and click Next. Then, set the link parameters described in Table 2 and click Save. (An optional Hive connectivity check is sketched after Table 2.)
    Figure 6 Creating an MRS Hive link
    Table 2 MRS Hive link parameters

    Parameter

    Description

    Example Value

    Name

    Link name. Define it based on the data source type so that it is easy to tell what the link is for.

    hivelink

    Manager IP

    Floating IP address of MRS Manager. Click Select next to the Manager IP text box to select an MRS cluster. CDM automatically fills in the authentication information.
    NOTE:

    DataArts Studio does not support MRS clusters whose Kerberos encryption type is aes256-sha2,aes128-sha2. It supports only MRS clusters whose Kerberos encryption type is aes256-sha1,aes128-sha1.

    127.0.0.1

    Authentication Method

    Authentication method used for accessing MRS
    • SIMPLE: Select this for non-security mode.
    • KERBEROS: Select this for security mode.

    SIMPLE

    HIVE Version

    Set this to the Hive version on the server.

    HIVE_3_X

    Username

    If Authentication Method is set to KERBEROS, you must provide the username and password used for logging in to MRS Manager. If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS.

    To create a data connection for an MRS security cluster, do not use user admin. The admin user is the default management page user and cannot be used as the authentication user of a security cluster. Instead, create an MRS user and set Username and Password to that user's credentials when creating the MRS data connection.
    NOTE:
    • If the CDM cluster version is 2.9.0 or later and the MRS cluster version is 3.1.0 or later, the created user must have the permissions of the Manager_viewer role to create links on CDM. To perform operations on databases, tables, and columns of an MRS component, you also need to add the database, table, and column permissions of the MRS component to the user by following the instructions in the MRS documentation.
    • If the CDM cluster version is earlier than 2.9.0 or the MRS cluster version is earlier than 3.1.0, the created user must have the permissions of Manager_administrator or System_administrator to create links on CDM.
    • A user with only the Manager_tenant or Manager_auditor permission cannot create connections.

    cdm

    Password

    Password used for logging in to MRS Manager

    -

    Enable ldap

    This parameter is available when Proxy connection is selected for Connection Type.

    If LDAP authentication is enabled for an external LDAP server connected to MRS Hive, the LDAP username and password are required for authenticating the connection to MRS Hive. In this case, this option must be enabled. Otherwise, the connection will fail.

    No

    ldapUsername

    This parameter is mandatory when Enable ldap is enabled.

    Enter the username configured when LDAP authentication was enabled for MRS Hive.

    -

    ldapPassword

    This parameter is mandatory when Enable ldap is enabled.

    Enter the password configured when LDAP authentication was enabled for MRS Hive.

    -

    OBS storage support

    The server must support OBS storage. When creating a Hive table, you can store the table in OBS.

    No

    AK

    This parameter is mandatory when OBS storage support is enabled. The account corresponding to the AK/SK pair must have the OBS Buckets Viewer permission. Otherwise, OBS cannot be accessed and the "403 AccessDenied" error is reported.

    You need to create an access key for the current account and obtain an AK/SK pair.

    1. Log in to the management console, move the cursor to the username in the upper right corner, and select My Credentials from the drop-down list.
    2. On the My Credentials page, choose Access Keys, and click Create Access Key. See Figure 7.
      Figure 7 Clicking Create Access Key
    3. Click OK and save the access key file as prompted. The access key file will be saved to your browser's configured download location. Open the credentials.csv file to view Access Key Id and Secret Access Key.
      NOTE:
      • Only two access keys can be added for each user.
      • For security purposes, the access key file is automatically downloaded only when the key is first generated; it cannot be obtained from the management console later. Keep the file secure.

    -

    SK

    -

    Run Mode

    This parameter is used only when the Hive version is HIVE_3_X. Possible values are:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • Standalone: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) using both Kerberos and Simple authentication modes, select Standalone.
      NOTE:

      The STANDALONE mode is used to resolve version conflicts. If the source and destination connectors of a link use different versions, a JAR file conflict occurs. In this case, run the source or destination end in a STANDALONE process to prevent the migration from failing due to the conflict.

    EMBEDDED

    Check Hive JDBC Connectivity

    Whether to check the Hive JDBC connectivity

    No

    Use Cluster Config

    You can use the cluster configuration to simplify parameter settings for the Hadoop connection.

    No

    Cluster Config Name

    This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created.

    For details about how to configure a cluster, see Managing Cluster Configurations.

    hive_01
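    Outside CDM, a quick way to confirm that the MRS Hive service behind the link is reachable in non-security (SIMPLE) mode is to query it directly. The sketch below uses PyHive (pip install pyhive thrift sasl); the host, port 10000, user, and authentication mode are assumptions to match to your MRS cluster, and a Kerberos-enabled cluster needs a different setup.

      from pyhive import hive

      # Connect to HiveServer2 in non-security mode (values are placeholders).
      conn = hive.Connection(host="127.0.0.1", port=10000,
                             username="cdm", database="demo_sdi_db")
      cur = conn.cursor()
      cur.execute("SHOW TABLES")
      print(cur.fetchall())   # expect sdi_taxi_trip_data once the destination table exists
      cur.close()
      conn.close()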

Creating a Table/File Migration Job

  1. On the DataArts Migration console, click Cluster Management in the left navigation pane, locate the required cluster in the cluster list, and click Job Management.
  2. On the Job Management page, click Table/File Migration and click Create Job.

    Figure 8 Table/File Migration

  3. Set job parameters:

    1. Configure the job name, source job parameters, and destination job parameters, and click Next. See Figure 9.
      • Job Name: source-sdi
      • Source Job Configuration
        • Source Link Name: obs-link
        • Bucket Name: fast-demo
        • Source Directory/File: /2017_Yellow_Taxi_Trip_Data.csv
        • File Format: CSV
        • Show Advanced Attributes: Click Show Advanced Attributes. The system provides default values for advanced attributes. Set parameters based on the actual data format.
          Pay attention to the following parameters, which depend on the sample data format described in Preparing a Data Source. For other parameters, retain the default values. (A sketch after Figure 9 shows how to confirm these values from the raw file.)
          • Field Delimiter: Retain the default value (,) in this example.
          • First N Rows As Header: Set this parameter to Yes because the first row is the title row in this example.
          • The Number of Header Rows: Enter 1.
          • Encode Type: Retain the default value UTF-8 in this example.
      • Destination Job Configuration
        • Destination Link Name: mrs-link
        • Database Name: demo_sdi_db
        • Table Name: sdi_taxi_trip_data
        • Clear Data Before Import
          NOTE:

          In this example, Clear Data Before Import is set to Yes, indicating that data will be cleared before being imported each time a job is executed. In actual services, set this parameter based on the site requirements to prevent data loss.

        Figure 9 Configuring basic job information
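      To confirm the delimiter, header row, and encoding before setting the advanced attributes above, you can peek at the raw CSV locally. A minimal sketch with the Python standard library, assuming a local copy of the sample file from Preparing a Data Source:

        import csv

        # Inspect the first two rows of the sample file (local copy assumed).
        with open("2017_Yellow_Taxi_Trip_Data.csv", encoding="utf-8", newline="") as f:
            reader = csv.reader(f, delimiter=",")
            header = next(reader)        # title row -> First N Rows As Header = Yes, 1 header row
            first_record = next(reader)
        print(header)
        print(first_record)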
    2. In the Map Field step, configure field mappings and the time format of date fields, as shown in Figure 10. After the configuration is complete, click Next.
      • Field Mapping: In this example, the field sequence in the destination table is the same as that of source data. Therefore, you do not need to adjust the field mapping sequence.

        If the field sequence in the destination table differs from that of the source data, map each source field to the destination field with the same meaning: move the cursor to the start point of a field's arrow, and when the cursor changes to a plus sign (+), press and hold the mouse button, drag the arrow to the matching destination field, and release the button.

      • Time Format: The second and third fields in the sample data are time fields in the format 02/14/2017 04:08:11 PM. Therefore, set Time Format to MM/dd/yyyy hh:mm:ss a for these two fields. You can also enter this format manually in the text box. (A quick verification sketch follows Figure 10.)

        Select the time format based on the actual data format. For example:

        yyyy/MM/dd HH:mm:ss indicates that the time is converted to the 24-hour format, for example, 2019/08/18 15:35:45.

        yyyy/MM/dd hh:mm:ss a indicates that the time is converted to the 12-hour format, for example, 2019/06/27 03:24:21 PM.

      Figure 10 Mapping fields
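      As a quick sanity check, the sample timestamp can be parsed locally to confirm it matches the MM/dd/yyyy hh:mm:ss a pattern chosen above (Python's equivalent directives are %m/%d/%Y %I:%M:%S %p):

        from datetime import datetime

        # Parse one sample value with the Python equivalent of MM/dd/yyyy hh:mm:ss a.
        sample = "02/14/2017 04:08:11 PM"
        parsed = datetime.strptime(sample, "%m/%d/%Y %I:%M:%S %p")
        print(parsed.isoformat())   # 2017-02-14T16:08:11 -- the 12-hour value parsed correctly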
    3. Set Retry if failed and Schedule Execution of the task as required.
      Figure 11 Configuring the task

      Click Show Advanced Attributes and set Concurrent Extractors and Write Dirty Data, as shown in Figure 12.

      • Concurrent Extractors: Set this parameter based on the service volume. If the data source is of the file type and there are multiple files, you can increase the value of Concurrent Extractors to improve the extraction speed.
      • Write Dirty Data: You are advised to set this parameter to Yes and set the related parameters by referring to Figure 12. Dirty data is data that does not match the fields at the migration destination; such data is recorded to a specified OBS bucket. Once dirty data writing is configured, normal data is still written to the destination, and migration jobs are not interrupted by dirty data. In this example, set OBS Bucket to the fast-demo bucket created in Preparing a Data Source. Go to the OBS console, click Create Folder to create a directory (for example, error-data) in the fast-demo bucket, and set the dirty data directory in Figure 12 to that directory. (A scripted alternative is sketched after Figure 12.)
      Figure 12 Advanced attributes
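      If you would rather script the dirty-data directory than click Create Folder on the OBS console, an empty object whose key ends with a slash acts as a folder in OBS. A hedged sketch using the OBS Python SDK (pip install esdk-obs-python), with placeholder credentials and the endpoint and bucket from the earlier examples:

        from obs import ObsClient

        # Create an "error-data/" folder object in the fast-demo bucket (placeholder AK/SK).
        client = ObsClient(access_key_id="<AK>", secret_access_key="<SK>",
                           server="https://obs.myregion.mycloud.com")
        resp = client.putContent("fast-demo", "error-data/", content="")
        print("folder created" if resp.status < 300 else f"failed: {resp.status}")
        client.close()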

  4. Click Save.

    On the Table/File Migration tab page, you can view the created job in the job list.

    Figure 13 Execution result of the migration task
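    Saved jobs can also be triggered outside the console. The request path and method below follow the CDM job-start API as an assumption; check the CDM API Reference for your cluster version before relying on this sketch.

      import requests

      # Placeholders -- replace with your own values (assumptions for illustration).
      IAM_TOKEN = "<your-IAM-token>"
      PROJECT_ID = "<your-project-id>"
      CLUSTER_ID = "<your-cdm-cluster-id>"
      CDM_ENDPOINT = "https://cdm.myregion.mycloud.com"   # placeholder region endpoint

      # Start the job created above ("source-sdi"); verify path and HTTP method in the API Reference.
      resp = requests.put(
          f"{CDM_ENDPOINT}/v1.1/{PROJECT_ID}/clusters/{CLUSTER_ID}/cdm/job/source-sdi/start",
          headers={"X-Auth-Token": IAM_TOKEN},
          timeout=30,
      )
      print(resp.status_code, resp.text)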
