Configuring a Real-Time Migration Job
After configuring data connections, networks, and resource groups, you can create and configure a real-time migration job that combines multiple input and output data sources into a single link for real-time data synchronization.
Prerequisites
- You have registered a Huawei ID and authorized the use of real-time data migration. For details, see Registering a Huawei ID and Enabling Huawei Cloud Services and Authorizing the Use of Real-Time Data Migration.
- You have purchased a resource group. For details, see Buying a DataArts Migration Resource Group Incremental Package.
- You have prepared data sources, and the connection account has the required permissions. For details, see the requirements for database account permissions in Check Before Use.
- A data connection has been created, and DataArts Migration has been selected for the connection. For details, see Creating a DataArts Studio Data Connection.
- The DataArts Migration resource group can communicate with the data source network. For details, see Enabling Network Communications.
Procedure
- Create a real-time processing migration job by referring to Creating a Real-Time Migration Job.
- Set the data connection type.
Select the data type of the source and that of the destination. For details about the supported source and destination data types, see Creating a Real-Time Migration Job.
Figure 1 Selecting the data connection type
- Set the migration job type.
- Migration Type: The default value is Real-time and cannot be changed.
- Migration Scenario: Select Single table, Entire DB, or Database/Table shard.
Table 1 lists the scenarios.
Table 1 Synchronization scenario parameters
- Single table: A table in an instance can be synchronized to another instance.
- Entire DB: Multiple tables in multiple databases in an instance can be synchronized to another instance in real time. A task can synchronize a maximum of 200 tables.
- Database/Table shard: Multiple table shards of multiple databases in multiple instances can be synchronized to a database table in an instance.
Figure 2 Setting the migration job type
- Configure network resources.
- Select a source data connection, a destination data connection, and a resource group for which network connections have been configured.
Figure 3 Selecting data connections and a resource group
If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.
If no resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.
- Check the network connectivity.
After configuring the data connections and resource group, perform the following operations to check the network connectivity between the data sources and the resource group:
- Click Source Configuration. The system tests the network connectivity of the migration job.
- Click Test in the source data connection, destination data connection, and resource group areas.
If the network connectivity is abnormal, see How Do I Troubleshoot the Disconnectivity Between a Data Source and Resource Group?
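If the test fails, a quick way to narrow down the cause is a plain TCP reachability check from a machine in the same VPC and subnet as the resource group. The sketch below is only a diagnostic aid with assumed placeholder endpoints; a successful TCP connection does not replace the console's connectivity test.

```python
# Minimal reachability sketch (not the console's Test mechanism).
# The hosts and ports below are placeholders; substitute your own endpoints.
import socket

ENDPOINTS = {
    "source": ("192.0.2.10", 3306),        # placeholder, e.g. a MySQL source
    "destination": ("192.0.2.20", 9092),   # placeholder, e.g. a Kafka destination
}

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in ENDPOINTS.items():
    state = "reachable" if is_reachable(host, port) else "NOT reachable"
    print(f"{name} {host}:{port} is {state}")
```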
- Configure source and destination parameters.
The parameters vary depending on the source type. For details, see Tutorials.
- (Optional) Configure DDL message processing rules.
Real-time migration jobs can synchronize data manipulation language (DML) operations, such as adding, deleting, and modifying data, as well as some table structure changes made using data definition language (DDL) statements. You can set the processing policy for a DDL operation to Normal processing, Ignore, or Error, as illustrated in the sketch after Figure 4.
- Normal processing: When a DDL operation on the source database or table is detected, the operation is automatically synchronized to the destination.
- Ignore: When a DDL operation on the source database or table is detected, the operation is ignored and not synchronized to the destination.
- Error: When a DDL operation on the source database or table is detected, the migration job throws an exception.
Figure 4 DDL configuration
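To make the three policies concrete, the following minimal sketch shows how a synchronization pipeline might dispatch a detected DDL statement. It is illustrative only, not DataArts Migration code; apply_ddl_to_destination is a hypothetical stand-in.

```python
from enum import Enum

class DdlPolicy(Enum):
    NORMAL = "Normal processing"   # replay the DDL on the destination
    IGNORE = "Ignore"              # skip the DDL; DML synchronization continues
    ERROR = "Error"                # fail the job when a DDL is detected

def apply_ddl_to_destination(statement: str) -> None:
    # Hypothetical stand-in for replaying the DDL on the destination.
    print(f"replaying on destination: {statement}")

def handle_ddl(statement: str, policy: DdlPolicy) -> None:
    if policy is DdlPolicy.NORMAL:
        apply_ddl_to_destination(statement)
    elif policy is DdlPolicy.IGNORE:
        pass  # the change is ignored and not synchronized
    else:     # DdlPolicy.ERROR
        raise RuntimeError(f"DDL detected, job configured to fail: {statement}")

handle_ddl("ALTER TABLE t ADD COLUMN c INT", DdlPolicy.IGNORE)
```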
- Configure task parameters.
Table 2 Task parameters
- Execution Memory: Memory allocated for job execution, which automatically changes with the number of CPU cores. Default value: 8 GB
- CPU Cores: The value ranges from 2 to 32. For each CPU core added, 4 GB of execution memory and one concurrent request are automatically added. Default value: 2
- Maximum Concurrent Requests: Maximum number of jobs that can be executed concurrently. This parameter does not need to be configured and automatically changes with the number of CPU cores. Default value: 1
- Auto Retry: Whether to enable automatic retry upon a job failure. Default value: No
- Maximum Retries: Displayed only when Auto Retry is set to Yes. Default value: 1
- Retry Interval (Seconds): Displayed only when Auto Retry is set to Yes. Default value: 120
- Write Dirty Data: Whether to record dirty data. By default, dirty data is not recorded; a large amount of dirty data slows down the synchronization. Whether dirty data can be written depends on the data connection. Default value: No
  - No: Dirty data is not recorded. This is the default value. Dirty data is not allowed; if dirty data is generated during synchronization, the task fails and exits.
  - Yes: Dirty data is allowed, that is, dirty data does not affect task execution. When dirty data is allowed and a threshold is set:
    - If the amount of generated dirty data is within the threshold, the synchronization task ignores the dirty data (it is not written to the destination) and runs normally.
    - If the amount of generated dirty data exceeds the threshold, the synchronization task fails and exits.
  NOTE: Criteria for determining dirty data: dirty data is data that is meaningless to services, has an invalid format, or causes an error during synchronization. If an exception occurs when a piece of data is written to the destination, that piece of data is dirty data; therefore, any data that fails to be written is classified as dirty data. For example, if VARCHAR data at the source is written to a destination column of the INT type, the conversion fails and the data cannot be written to the destination. When configuring a synchronization task, you can choose whether to write dirty data during the synchronization and set the maximum number of dirty data records allowed in a single partition; when that threshold is exceeded, the task fails and exits. (These semantics are illustrated in the sketch after this table.)
- Dirty Data Policy: Displayed only when Write Dirty Data is set to Yes. The following policies are supported. Default value: Do not archive
  - Do not archive: Dirty data is only recorded in job logs, but not stored.
  - Archive to OBS: Dirty data is stored in OBS and printed in job logs.
- Write Dirty Data Link: Displayed only when Dirty Data Policy is set to Archive to OBS. Dirty data can only be written to OBS links. Default value: N/A
- Dirty Data Directory: OBS directory to which dirty data will be written. Default value: N/A
- Dirty Data Threshold: Displayed only when Write Dirty Data is set to Yes. Set the dirty data threshold as required. Default value: 100
  NOTE:
  - The dirty data threshold takes effect for each concurrency. For example, if the threshold is 100 and the concurrency is 3, the job allows a maximum of 300 dirty data records.
  - The value -1 indicates that the number of dirty data records is not limited.
- Add Custom Attribute: You can add custom attributes to modify some job parameters and enable some advanced functions. For details, see Job Performance Optimization. Default value: N/A
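To make the dirty-data semantics concrete, the hedged sketch below mimics the behavior described in Table 2: records that fail to be written (for example, VARCHAR source values converted to an INT destination column) count as dirty data and are ignored until the per-concurrency threshold is exceeded, at which point the task fails. This is an illustration under assumed record and conversion shapes, not DataArts Migration code.

```python
def sync_partition(records, dirty_data_threshold=100):
    """Write records that convert cleanly; count failed writes as dirty data.

    The threshold applies per concurrency: with a threshold of 100 and
    3 concurrent requests, the job tolerates up to 300 dirty records in
    total. A threshold of -1 means dirty data is unlimited.
    """
    written, dirty = [], 0
    for value in records:
        try:
            written.append(int(value))  # e.g. VARCHAR source -> INT destination
        except (TypeError, ValueError):
            dirty += 1                  # failed write -> classified as dirty data
            if dirty_data_threshold != -1 and dirty > dirty_data_threshold:
                raise RuntimeError(
                    f"dirty data exceeded threshold ({dirty_data_threshold}); task fails"
                )
    return written, dirty

rows, dirty_count = sync_partition(["1", "2", "oops", "4"], dirty_data_threshold=5)
print(rows, dirty_count)  # [1, 2, 4] 1 -> the dirty record is skipped, task continues
```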
- Submit and run the job.
After configuring the job, click Submit in the upper left corner to submit the job.
Figure 5 Submitting the job
After submitting the job, click Start in the upper left corner. In the displayed dialog box, set the required parameters and click OK.
Figure 6 Automatic table creation
Table 3 Parameters for starting the job
- Synchronization Mode: Common data synchronization modes include:
  - Incremental synchronization: Incremental data synchronization starts from a specified time point.
  - Full and incremental synchronization: All data is synchronized first, and then incremental data is synchronized in real time.
  Kafka data synchronization modes include the following (see the sketch after this table):
  - Earliest: Data consumption starts from the earliest offset of the Kafka topic.
  - Latest: Data consumption starts from the latest offset of the Kafka topic.
  - Start/End time: Data consumption starts from the Kafka topic offset corresponding to the specified time.
- Time: Time when incremental synchronization starts. This parameter is mandatory when Synchronization Mode is set to Incremental synchronization or Start/End time.
  NOTE:
  - If you set a time earlier than the earliest time in the incremental data logs, data consumption starts from the latest log time by default.
  - If you set a time earlier than the earliest offset of the Kafka messages, data consumption starts from the earliest offset by default.
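For reference, the following sketch maps the three Kafka start positions onto consumer offsets using the open-source kafka-python client. The broker address, topic, and timestamp are placeholders, and this only approximates the semantics; it is not what the job runs internally.

```python
from typing import Optional
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="broker.example.com:9092")  # placeholder
tp = TopicPartition("my_topic", 0)  # placeholder topic and partition
consumer.assign([tp])

def seek_start(mode: str, start_ms: Optional[int] = None) -> None:
    """Position the consumer according to the chosen synchronization mode."""
    if mode == "Earliest":
        consumer.seek_to_beginning(tp)  # earliest retained offset of the topic
    elif mode == "Latest":
        consumer.seek_to_end(tp)        # only messages produced from now on
    else:  # "Start/End time"
        # offsets_for_times returns the first offset whose timestamp is >= start_ms,
        # so a time earlier than the earliest message resolves to the earliest
        # offset, matching the fallback behavior noted above.
        found = consumer.offsets_for_times({tp: start_ms})[tp]
        if found is not None:
            consumer.seek(tp, found.offset)
        else:  # no message at or after start_ms yet
            consumer.seek_to_end(tp)

seek_start("Earliest")
```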
- Monitor the job.
On the job development page, click Monitor to go to the Job Monitoring page. You can view the status and log of the job, and configure alarm rules for the job. For details, see Real-Time Migration Job O&M.
Figure 7 Monitoring the job