Step 3: DataArts Migration
This topic describes how to use DataArts Studio DataArts Migration to migrate source data to the cloud in batches.
Creating a Cluster
CDM clusters can migrate data to the cloud and integrate data into the data lake. CDM provides wizard-based configuration and management and can integrate data from a single table or an entire database, incrementally or periodically. The DataArts Studio basic package contains one CDM cluster. If that cluster cannot meet your requirements, you can buy a DataArts Migration incremental package.
For details about how to buy DataArts Studio incremental packages, see Buying a DataArts Studio Incremental Package.
Creating Source and Destination Links for Data Migration
- Log in to the DataArts Studio console. Locate an instance and click Access. On the displayed page, locate a workspace and click DataArts Migration.
Figure 1 DataArts Migration
- In the left navigation pane, choose Cluster Management. In the cluster list, locate the required cluster and click Job Management.
Figure 2 Cluster management
- On the Job Management page, click Links.
Figure 3 Links
- Create two links, one connecting to OBS to read source data stored on OBS, and the other connecting to MRS Hive to write data to the MRS Hive database.
Click Create Link. On the page displayed, select Object Storage Service (OBS) and click Next. Then, set the link parameters and click Save.
Figure 4 Creating an OBS link
Table 1 OBS link parameters
- Name: Link name. Name the link after its data source type so its purpose is easy to remember. Example value: obs_link
- OBS Endpoint: An endpoint is the request address for calling an API. Endpoints vary depending on the service and region. To obtain the endpoint of an OBS bucket, go to the OBS console and click the bucket name to open its details page.
- Port: Data transmission port. The HTTPS port number is 443 and the HTTP port number is 80. Example value: 443
- OBS Bucket Type: Select a value from the drop-down list, generally Object Storage. Example value: Object Storage
- AK and SK: Used to log in to the OBS server. You need to create an access key for the current account and obtain an AK/SK pair. To obtain an access key, perform the following steps:
  - Log in to the management console, move the cursor to the username in the upper right corner, and select My Credentials from the drop-down list.
  - On the My Credentials page, choose Access Keys and click Create Access Key. See Figure 5.
  - Click OK and save the access key file as prompted. The file is saved to your browser's configured download location. Open the credentials.csv file to view the Access Key Id and Secret Access Key.
  NOTE:
  - Only two access keys can be added for each user.
  - For security, an access key can be downloaded only when it is first generated and cannot be obtained from the management console later. Keep it secure.
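Once downloaded, the credentials.csv file described above can be read programmatically rather than by hand. The sketch below is a minimal example, assuming the file uses the Access Key Id and Secret Access Key column headers mentioned in the steps; the sample content and extra columns are hypothetical and may differ by console version.

```python
import csv
import io

# Hypothetical sample mirroring the credentials.csv layout described above.
sample = """User Name,Access Key Id,Secret Access Key
cdm_user,EXAMPLEACCESSKEYID,ExampleSecretAccessKey123
"""

def read_ak_sk(csv_text):
    """Return the (AK, SK) pair from the first key row of a credentials.csv export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    row = next(reader)  # the export contains a single key row
    return row["Access Key Id"], row["Secret Access Key"]

ak, sk = read_ak_sk(sample)
print(ak)  # EXAMPLEACCESSKEYID
```

In practice you would pass the contents of the downloaded file (for example, `open("credentials.csv").read()`) instead of the inline sample.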
On the Links tab page, click Create Link again. On the page displayed, select MRS Hive and click Next. Then, set the link parameters and click Save.
Figure 6 Creating an MRS Hive link
Table 2 MRS Hive link parameters
- Name: Link name. Name the link after its data source type so its purpose is easy to remember. Example value: hivelink
- Manager IP: Floating IP address of MRS Manager. Click Select next to the Manager IP text box to select an MRS cluster. CDM then automatically fills in the authentication information. Example value: 127.0.0.1
- Authentication Method: Authentication method used for accessing MRS:
  - SIMPLE: Select this for non-security mode.
  - KERBEROS: Select this for security mode.
  Example value: SIMPLE
- HIVE Version: Set this to the Hive version on the server. Example value: HIVE_3_X
- Username: If Authentication Method is set to KERBEROS, you must provide the username and password used for logging in to MRS Manager. If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS.
  Do not use user admin to create a data connection for an MRS security cluster. The admin user is the default management page user and cannot be used as the authentication user of a security cluster. Instead, create an MRS user and set Username and Password to that user's credentials when creating the MRS data connection.
  NOTE:
  - If the CDM cluster version is 2.9.0 or later and the MRS cluster version is 3.1.0 or later, the created user must have the permissions of the Manager_viewer role to create links on CDM. To perform operations on databases, tables, and columns of an MRS component, you also need to grant the user the database, table, and column permissions of that component by following the instructions in the MRS documentation.
  - If the CDM cluster version is earlier than 2.9.0 or the MRS cluster version is earlier than 3.1.0, the created user must have the Manager_administrator or System_administrator permissions to create links on CDM.
  - A user with only the Manager_tenant or Manager_auditor permission cannot create links.
  Example value: cdm
- Password: Password used for logging in to MRS Manager.
- Enable ldap: Available when Proxy connection is selected for Connection Type. If LDAP authentication is enabled on an external LDAP server connected to MRS Hive, the LDAP username and password are required for authenticating the connection, and this option must be enabled; otherwise, the connection will fail. Example value: No
- ldapUsername: Mandatory when Enable ldap is enabled. Enter the username configured when LDAP authentication was enabled for MRS Hive.
- ldapPassword: Mandatory when Enable ldap is enabled. Enter the password configured when LDAP authentication was enabled for MRS Hive.
- OBS storage support: The server must support OBS storage. When creating a Hive table, you can store the table in OBS. Example value: No
- AK and SK: Mandatory when OBS storage support is enabled. The account corresponding to the AK/SK pair must have the OBS Buckets Viewer permission; otherwise, OBS cannot be accessed and a "403 AccessDenied" error is reported. You need to create an access key for the current account and obtain an AK/SK pair:
  - Log in to the management console, move the cursor to the username in the upper right corner, and select My Credentials from the drop-down list.
  - On the My Credentials page, choose Access Keys and click Create Access Key. See Figure 7.
  - Click OK and save the access key file as prompted. The file is saved to your browser's configured download location. Open the credentials.csv file to view the Access Key Id and Secret Access Key.
  NOTE:
  - Only two access keys can be added for each user.
  - For security, an access key can be downloaded only when it is first generated and cannot be obtained from the management console later. Keep it secure.
- Run Mode: Used only when the Hive version is HIVE_3_X. Possible values:
  - EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
  - STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) with both Kerberos and Simple authentication modes, use STANDALONE.
  NOTE: The STANDALONE mode solves version conflicts. If the connector versions at the source and destination ends of the same link differ, a JAR file conflict occurs. In this case, place the source or destination end in the STANDALONE process to prevent migration failures caused by the conflict.
  Example value: EMBEDDED
- Check Hive JDBC Connectivity: Whether to check the Hive JDBC connectivity. Example value: No
- Use Cluster Config: You can use a cluster configuration to simplify parameter settings for the Hadoop connection. Example value: No
- Cluster Config Name: Valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created. For details about how to configure a cluster, see Managing Cluster Configurations. Example value: hive_01
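Before saving the link, it can help to confirm that the MRS endpoint is reachable from the network at all, since an unreachable host and a misconfigured credential produce similar connection failures. The sketch below is a generic TCP reachability check, not part of CDM; the host and port shown are placeholders (10000 is a common HiveServer2 default, but your cluster may use a different port).

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values: substitute your cluster's actual Manager IP and port.
print(port_reachable("127.0.0.1", 10000, timeout=1.0))
```

A False result points to a network or security-group issue rather than a link-parameter issue.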
Creating a Table/File Migration Job
- On the DataArts Migration console, click Cluster Management in the left navigation pane, locate the required cluster in the cluster list, and click Job Management.
- On the Job Management page, click Table/File Migration and click Create Job.
Figure 8 Table/File Migration
- Set job parameters:
- Configure the job name, source job parameters, and destination job parameters, and click Next. See Figure 9.
- Job Name: source-sdi
- Source Job Configuration
- Source Link Name: obs-link
- Bucket Name: fast-demo
- Source Directory/File: /2017_Yellow_Taxi_Trip_Data.csv
- File Format: CSV
- Show Advanced Attributes: Click Show Advanced Attributes. The system provides default values for advanced attributes. Set parameters based on the actual data format.
Pay attention to the settings of the following parameters based on the sample data format in Preparing a Data Source. For other parameters, retain the default values.
- Field Delimiter: Retain the default value (,) in this example.
- First N Rows As Header: Set this parameter to Yes because the first row is the title row in this example.
- The Number of Header Rows: Enter 1.
- Encode Type: Retain the default value UTF-8 in this example.
- Destination Job Configuration
- Destination Link Name: mrs-link
- Database Name: demo_sdi_db
- Table Name: sdi_taxi_trip_data
- Clear Data Before Import
In this example, Clear Data Before Import is set to Yes, indicating that data will be cleared before being imported each time a job is executed. In actual services, set this parameter based on the site requirements to prevent data loss.
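The source attributes configured above (comma delimiter, one header row, UTF-8 encoding) can be sanity-checked against the file before running the job. The sketch below parses a hypothetical two-line excerpt shaped like 2017_Yellow_Taxi_Trip_Data.csv; the column names are illustrative, not the file's full schema.

```python
import csv
import io

# Hypothetical excerpt in the shape of the source file (illustrative columns).
sample = """VendorID,tpep_pickup_datetime,tpep_dropoff_datetime,trip_distance
2,02/14/2017 04:08:11 PM,02/14/2017 04:21:53 PM,2.10
"""

reader = csv.reader(io.StringIO(sample), delimiter=",")  # Field Delimiter: ,
header = next(reader)   # First N Rows As Header: Yes, with 1 header row
rows = list(reader)     # remaining lines are data records

print(header)
print(len(rows))  # 1
```

If the header or row count looks wrong here, adjust the advanced attributes in the job rather than the data.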
- In the Map Field step, configure field mappings and the time format of date fields, as shown in Figure 10. After the configuration is complete, click Next.
- Field Mapping: In this example, the field sequence in the destination table is the same as that of source data. Therefore, you do not need to adjust the field mapping sequence.
If the field sequence in the destination table is different from that of source data, map the source fields one by one to the destination fields with the same meaning. Move the cursor to the start point of the arrow of a field. When the cursor is displayed as a plus sign (+), press and hold the mouse button, point the arrow to the destination field with the same meaning, and then release the button.
- Time Format: The second and third fields in the sample data are time fields. The data format is 02/14/2017 04:08:11 PM. Therefore, set Time Format to MM/dd/yyyy hh:mm:ss a for these two fields. You can also manually enter this format in the text box.
Select the time format based on the actual data format. For example:
yyyy/MM/dd HH:mm:ss indicates that the time is converted to the 24-hour format, for example, 2019/08/18 15:35:45.
yyyy/MM/dd hh:mm:ss a indicates that the time is converted to the 12-hour format, for example, 2019/06/27 03:24:21 PM.
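The pattern letters above follow the Java-style convention used by CDM. As a quick way to verify that a chosen pattern matches your data, the equivalent parse can be sketched in Python, where MM/dd/yyyy hh:mm:ss a corresponds to the strptime directives %m/%d/%Y %I:%M:%S %p:

```python
from datetime import datetime

# Parse the sample value from the source data using the 12-hour pattern.
ts = datetime.strptime("02/14/2017 04:08:11 PM", "%m/%d/%Y %I:%M:%S %p")

# Re-emit in the 24-hour layout yyyy/MM/dd HH:mm:ss:
print(ts.strftime("%Y/%m/%d %H:%M:%S"))  # 2017/02/14 16:08:11
```

If strptime raises a ValueError on a sample value, the pattern does not match the data and the job's Time Format needs adjusting.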
- Set Retry if failed and Schedule Execution of the task as required.
Figure 11 Configuring the task
Click Show Advanced Attributes and set Concurrent Extractors and Write Dirty Data, as shown in Figure 12.
- Concurrent Extractors: Set this parameter based on the service volume. If the data source is of the file type and there are multiple files, you can increase the value of Concurrent Extractors to improve the extraction speed.
- Write Dirty Data: You are advised to set this parameter to Yes and set the related parameters by referring to Figure 12. Dirty data is data that does not match the fields at the migration destination; it is recorded to a specified OBS bucket. Once dirty data writing is configured, normal data is written to the destination and migration jobs are not interrupted by dirty data. In this example, set OBS Bucket to the fast-demo bucket created in Preparing a Data Source. On the OBS console, click Create Folder to create a directory, for example error-data, in the fast-demo bucket, and set the dirty data directory in Figure 12 to that directory.
- Click Save.
On the Table/File Migration tab page, you can view the created job in the job list.
Figure 13 Execution result of the migration task