Configuring a Real-Time Processing Migration Job
After configuring data connections, networks, and resource groups, you can create a real-time synchronization job that combines input and output data sources into a synchronization link and synchronizes incremental data of a single table or an entire database in real time. This section describes how to create such a job and how to view its status after it is created.
Prerequisites
- A data connection has been created, and DataArts Migration has been selected for the connection. For details, see Creating a DataArts Studio Data Connection.
- You have purchased a resource group. For details, see Buying a DataArts Migration Resource Group Incremental Package.
- The DataArts Migration resource group can communicate with the data source network. For details, see Configuring Real-Time Network Connections.
- A real-time processing migration job has been created. For details, see Creating a Real-Time Processing Migration Job.
Procedure
- Create a real-time processing migration job by referring to Creating a Real-Time Processing Migration Job.
- Configure types.
- Select a data connection type.
Select the data type of the source and that of the destination. For details about the supported source and destination data types, see Supported Data Sources.
Figure 1 Selecting the data connection type
- Set the migration job type.
- Migration Type: The default value is Real-time and cannot be changed.
- Migration Scenario: Select Database/Table partition or Entire DB. For details about the supported data sources, see Supported Data Sources.
Table 1 lists the scenarios.
Table 1 Synchronization scenario parameters
- Entire DB: Migrate multiple database tables from the source to the destination.
- Database/Table partition: Migrate table partitions of databases from multiple sources to one destination table. The mapping between source databases/tables and the destination table can be flexibly configured.
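To make the Database/Table partition scenario concrete, the following minimal sketch shows how several sharded source tables might all map to a single destination table. The database, table, and shard names are hypothetical; the actual mapping is configured in the job UI, not in code.

```python
# Hypothetical many-to-one mapping for the "Database/Table partition" scenario:
# several sharded source tables are routed into one destination table.
shard_mapping = {
    ("shard_db_00", "orders_000"): ("dw_db", "orders_all"),
    ("shard_db_00", "orders_001"): ("dw_db", "orders_all"),
    ("shard_db_01", "orders_000"): ("dw_db", "orders_all"),
}

# All source partitions converge on a single destination table.
assert len(set(shard_mapping.values())) == 1
```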
- Configure network resources.
You need to select the source and destination data connections, select a resource group, and test network connectivity.
The procedure is as follows:
- Configure the source data connection.
Select a data connection, for example, a MySQL data connection.
Figure 2 Selecting the source data connection
If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.
Figure 3 Creating a data connection
- Select a resource group.
Select a resource group for which the network connection has been configured.
Figure 4 Selecting a resource group
If no resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.
- Configure the destination data connection.
Select a data connection, for example, a Hudi connection.
Figure 5 Selecting the destination data connection
If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Creating a DataArts Studio Data Connection.
- Check the network connectivity. If the connectivity is normal, go to the next step to configure the source.
After configuring the data connection and resource group, you need to test network connectivity for the entire migration task. You can perform the following operations to test the connectivity between the data source and resource group:
- Click Test next to the source data connection, the destination data connection, and the resource group.
- Click Source Configuration. The system then tests the connectivity of the entire migration task.
If the connectivity test fails, rectify the fault by referring to Table 2.
Table 2 Troubleshooting of network disconnections for different data connection types
- Data source - CDM exception
  - The instance status is abnormal: Check whether the cluster is running properly.
  - The connectivity check is abnormal:
    - Check whether the CDM cluster and the data source are in the same VPC.
    - If the CDM cluster and the data source are not in the same VPC, create a VPC peering connection to connect them. Add the private IP address of the CDM cluster to the inbound security group rule of the data source, and add the data source IP address to the outbound security group rule of the CDM cluster. For details, see Creating a DataArts Studio Data Connection.
- Data source - resource group exception
  - The resource group is abnormal: Access the DataArts Studio instance and click the Resources tab. On the displayed Real-Time Resources tab page, check whether the resource group is running.
  - RDS(MySQL):
    - Access the Management Center page, choose Manage Data Connections, and check whether the IP address or domain name of the MySQL connection is the private IP address of the RDS instance and whether an agent is associated with the connection.
    - Access the DataArts Studio instance, click the Resources tab and then the Real-Time Network Connections tab, and check whether a network connection to the VPC and subnet where the MySQL database resides has been created and whether it has been associated with a resource group.
    - Access the details page of the RDS instance, locate the connection information, and click the security group. On the displayed security group page, click the Inbound Rules tab and check whether the source includes the network segment of the resource group.
  - MRS Hudi:
    - Go to Management Center > Manage Data Connections and check whether the MRS Hudi data connection is configured correctly.
    - On the DataArts Studio instance page, click the Resources tab and then the Real-Time Network Connections tab, and check whether a network connection has been created between the resource group and the MRS VPC subnet.
    - Check whether the network segment of the resource group has been added to the inbound rules of the security group of the MRS cluster.
  - DMS Kafka:
    - Go to Management Center > Manage Data Connections and create a Kafka data connection. Set Kafka Broker to the internal IP address of Kafka.
    - On the DataArts Studio instance page, click the Resources tab and then the Real-Time Network Connections tab, and create a network connection between the resource group and the VPC subnet where Kafka is located.
    - Add the network segment of the resource group to the inbound rules of the DMS Kafka security group.
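The checks in Table 2 largely come down to whether the resource group's network segment can reach each data source's private IP address and service port through the configured VPC, subnet, and security group rules. As a supplementary manual check outside the DataArts Studio console, you could run a quick TCP reachability probe from a host (for example, an ECS) in the same VPC and subnet used by the network connection; the addresses and ports below are placeholders.

```python
import socket

# Placeholder endpoints: replace with the private IP addresses and ports of
# your own data sources (for example, RDS MySQL on 3306 or a Kafka broker on 9092).
ENDPOINTS = [
    ("192.168.0.11", 3306),   # RDS (MySQL) private IP
    ("192.168.0.21", 9092),   # DMS Kafka broker internal IP
]

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in ENDPOINTS:
    if tcp_reachable(host, port):
        print(f"{host}:{port} is reachable")
    else:
        print(f"{host}:{port} is NOT reachable; check the VPC, subnet, and security group rules")
```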
- Configure source and destination parameters.
For details, see Configuring Source and Destination Parameters.
- (Optional) Configure DDL.
During real-time synchronization of relational data, the real-time messages may contain DDL operations. You can set how these DDL messages are synchronized to the destination table.
In addition to data additions, deletions, and modifications, real-time processing migration jobs can synchronize table structure changes (DDL). Currently, new columns can be synchronized when the destination is Hudi. Select Normal process for the DDL operations to be synchronized and retain the default values for the other parameters.
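As an illustration of the kind of DDL message this setting covers, the hypothetical statements below show a MySQL column addition, which is the type of structure change that can currently be synchronized to a Hudi destination. The table and column names are invented, and the contrasting statements are shown only to indicate DDL that falls outside the new-column case.

```python
# Hypothetical MySQL DDL captured from the source during real-time synchronization.
# The table and column names are examples only.
new_column_ddl = "ALTER TABLE demo_db.orders ADD COLUMN coupon_code VARCHAR(32);"

# Shown for contrast only; how DDL other than column additions is handled
# depends on the DDL configuration of your job.
other_ddl = [
    "ALTER TABLE demo_db.orders DROP COLUMN coupon_code;",
    "ALTER TABLE demo_db.orders MODIFY COLUMN amount DECIMAL(12, 2);",
]

print("New-column DDL that a Hudi destination can pick up:", new_column_ddl)
```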
- Configure task parameters.
Table 3 Task parameters
- Execution Memory: Memory allocated for job execution, which automatically changes with the number of CPU cores. Default value: 8 GB
- CPU Cores: The value ranges from 2 to 32. For each CPU core added, 4 GB of execution memory and one concurrency are automatically added (see the worked example after this table). Default value: 2
- Maximum Concurrent Requests: Maximum number of jobs that can be executed concurrently. This parameter does not need to be configured; it automatically changes with the number of CPU cores. Default value: 1
- Auto Retry: Whether to enable automatic retry upon a job failure. Default value: No
- Maximum Retries: This parameter is displayed when Auto Retry is set to Yes. Default value: 1
- Retry Interval (Seconds): This parameter is displayed when Auto Retry is set to Yes. Default value: 120
- Write Dirty Data: Whether to record dirty data. By default, dirty data is not recorded. A large amount of dirty data affects the synchronization speed of the task. Dirty data can be written for data migration from MySQL to GaussDB(DWS), Hudi, and Kafka. Default value: No
  - No: Dirty data is not recorded. This is the default value. Dirty data is not allowed; if dirty data is generated during the synchronization, the task fails and exits.
  - Yes: Dirty data is allowed, that is, dirty data does not affect task execution. When dirty data is allowed and a threshold is set:
    - If the amount of generated dirty data is within the threshold, the synchronization task ignores the dirty data (that is, it is not written to the destination) and runs normally.
    - If the amount of generated dirty data exceeds the threshold, the synchronization task fails and exits.
  - NOTE: Dirty data is data that is meaningless to services, is in an invalid format, or is generated when the synchronization task encounters an error. If an exception occurs when a piece of data is written to the destination, that piece of data is dirty data, so any data that fails to be written is classified as dirty data. For example, if data of the VARCHAR type at the source is written to a destination column of the INT type, the conversion fails and the data cannot be written to the destination. When configuring a synchronization task, you can specify whether to write dirty data during the synchronization and set the maximum number of error records allowed in a single partition so that the task can keep running; once the number of dirty data records exceeds the threshold, the task fails and exits.
- Dirty Data Policy: This parameter is displayed only when Write Dirty Data is set to Yes. The following policies are supported: Do not archive; Archive to OBS. Default value: Do not archive
- Write Dirty Data Link: This parameter is displayed when Dirty Data Policy is set to Archive to OBS. Only OBS links support dirty data writes. Default value: obslink
- Dirty Data Directory: OBS directory to which dirty data is written. Default value: obs://default/
- Dirty Data Threshold: This parameter is displayed only when Write Dirty Data is set to Yes. Set the dirty data threshold as required. The threshold takes effect for each concurrency; for example, if the threshold is 100 and the concurrency is 3, the job allows at most 300 dirty data records. The value -1 indicates that the number of dirty data records is not limited. Default value: 100
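To make the relationships in Table 3 concrete, the following sketch derives the execution memory, concurrency, and job-level dirty data limit from the rules stated above (a 2-core baseline with 8 GB of memory and 1 concurrency, plus 4 GB and one concurrency per additional core, and a per-concurrency dirty data threshold). It is a worked example only; the console derives these values automatically.

```python
def job_resources(cpu_cores: int, dirty_data_threshold: int = 100):
    """Worked example of the Table 3 rules: baseline 2 cores = 8 GB / 1 concurrency,
    each extra core adds 4 GB and 1 concurrency, and the dirty data threshold
    applies per concurrency. The console computes these values automatically."""
    assert 2 <= cpu_cores <= 32, "CPU Cores must be in the range 2 to 32"
    extra_cores = cpu_cores - 2
    memory_gb = 8 + 4 * extra_cores
    concurrency = 1 + extra_cores
    # A threshold of -1 means the number of dirty data records is not limited.
    max_dirty_records = None if dirty_data_threshold == -1 else dirty_data_threshold * concurrency
    return memory_gb, concurrency, max_dirty_records

# 4 CPU cores with the default threshold of 100:
# 16 GB memory, concurrency 3, and at most 100 x 3 = 300 dirty data records,
# matching the example given for Dirty Data Threshold above.
print(job_resources(4))
```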
- Submit the job.
After configuring the job, click Submit in the upper left corner to submit the job.
Figure 6 Submitting the job
- Enable automatic table creation.
The following uses a Hudi destination as an example. After you submit the job, you can start automatic table creation.
Click Start Table Creation. The system analyzes the Hudi table configuration and automatically creates a Hudi table. If the table fails to be created, you can view the failure message and create a table manually, or contact technical support.
After the table is automatically created, click OK to save the job.
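If automatic table creation fails and you create the Hudi table manually, you would typically do so with Spark SQL on the MRS cluster. The following is a minimal, hypothetical sketch: the database, table, and column names, the primary key, and the precombine field are assumptions and must be aligned with your source table and the job's Hudi configuration; it is not the exact DDL the system generates.

```python
from pyspark.sql import SparkSession

# Assumes a Spark environment on the MRS cluster with Hudi support enabled.
spark = SparkSession.builder.appName("manual-hudi-table").enableHiveSupport().getOrCreate()

# Hypothetical Hudi table; align the names, columns, primaryKey, and
# preCombineField with your source table and the job's destination settings.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dw_db.orders_all (
        id BIGINT,
        amount DECIMAL(12, 2),
        update_time TIMESTAMP
    )
    USING hudi
    TBLPROPERTIES (
        type = 'mor',
        primaryKey = 'id',
        preCombineField = 'update_time'
    )
""")
```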
- Run the job.
After configuring the job, click Start in the upper left corner. In the displayed dialog box, set synchronization parameters as required and click OK to start the job.
Figure 7 Starting the job
Table 4 Parameters for starting the job
- Apache Kafka - MRS Kafka, DMS Kafka - OBS
  - Offset parameter: Select the earliest offset, the latest offset, or a start/end time.
    If this parameter is set to Latest, the job may restart when it encounters dirty data. If no checkpoint has been triggered, the job uses the restart time as the latest time and consumes data from that point onward, which results in data inconsistency. Exercise caution when selecting Latest (see the sketch after this table).
- MYSQL - DWS, MYSQL - Hudi, MYSQL - DMS Kafka
  - Synchronization Mode:
    - Incremental: Select the synchronization time, that is, the SQL execution time at the source and the Kafka write time.
    - Full and incremental: All data is synchronized first, and then incremental data is synchronized.
  - Time: If you set a time that is earlier than the earliest log time, the earliest log time is used. This parameter is required for incremental synchronization.
After the job is started, it is in running state and starts migrating data.
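The Offset parameter for the Kafka links in Table 4 maps onto standard Kafka consumer offset semantics. The snippet below uses the kafka-python library purely as an analogy (it is not how DataArts Migration consumes data, and the broker address, topic, and group IDs are placeholders) to show the difference between starting from the earliest and the latest offsets; starting from the latest offset after a restart with no committed checkpoint is what can skip records, which is the inconsistency risk described for the Latest setting above.

```python
from kafka import KafkaConsumer  # pip install kafka-python; used here only as an analogy

# Placeholder broker address and topic name.
BROKERS = ["192.168.0.21:9092"]
TOPIC = "demo_topic"

# Earliest: with no committed offset (no checkpoint), consumption starts at the
# beginning of the retained log, so no records are skipped.
consumer_earliest = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="demo-group-earliest",
    auto_offset_reset="earliest",
)

# Latest: with no committed offset, consumption starts at the end of the log.
# Records written before the consumer (re)started are never read, which is the
# data-inconsistency risk described for the Latest setting.
consumer_latest = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="demo-group-latest",
    auto_offset_reset="latest",
)
```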
- Monitor the job.
You can set different monitoring rules for running jobs. For details about the alarms for real-time processing migration jobs, see Managing and Viewing Monitoring Metrics.