Configuring a Real-Time Migration Job

Updated on 2025-02-18 GMT+08:00

After configuring data connections, networks, and resource groups, you can create and configure a real-time migration job, which combines multiple input and output data sources into a link for real-time data synchronization.

Prerequisites

You have configured data connections, networks, and resource groups.

Procedure

  1. Create a real-time processing migration job by referring to Creating a Real-Time Migration Job.
  2. Set the data connection type.

    Select the data type of the source and that of the destination. For details about the supported source and destination data types, see Creating a Real-Time Migration Job.
    Figure 1 Selecting the data connection type

  3. Set the migration job type.

    1. Migration Type: The default value is Real-time and cannot be changed.
    2. Migration Scenario: Select Single table, Entire DB, or Database/Table shard.
      Table 1 lists the scenarios.
      Table 1 Synchronization scenario parameters

      • Single table: A table in an instance can be synchronized to another instance.
      • Entire DB: Multiple tables in multiple databases in an instance can be synchronized to another instance in real time. A task can synchronize a maximum of 200 tables.
      • Database/Table shard: Multiple table shards of multiple databases in multiple instances can be synchronized to a database table in an instance.

      Figure 2 Setting the migration job type

  4. Configure network resources.

    Select a source data connection, a destination data connection, and a resource group for which network connections have been configured.
    Figure 3 Selecting data connections and a resource group
    NOTE:

    If no data connection is available, click Create to go to the Manage Data Connections page of the Management Center console and click Create Data Connection to create a connection. For details, see Configuring DataArts Studio Data Connection Parameters.

    If no resource group is available, click Create to create one. For details, see Buying a DataArts Migration Resource Group Incremental Package.

  5. Check the network connectivity.

    After configuring the data connections and resource group, check the network connectivity between the data sources and the resource group.
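The console performs this check for you. Conceptually, it is a reachability test from the resource group to each configured endpoint; the standalone sketch below shows an equivalent TCP probe. The endpoint names, addresses, and ports are placeholder assumptions, not values from this guide.

```python
# Standalone TCP reachability sketch (placeholder endpoints, illustrative only).
import socket

ENDPOINTS = {
    "source-mysql": ("192.0.2.10", 3306),     # assumed source address
    "destination-dws": ("192.0.2.20", 8000),  # assumed destination address
}

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in ENDPOINTS.items():
    status = "reachable" if check_tcp(host, port) else "UNREACHABLE"
    print(f"{name} ({host}:{port}): {status}")
```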

  6. Configure source and destination parameters.

    The parameters vary depending on the source type. For details, see Tutorials.

  7. Update the mapping between the source and destination tables, check that the mapping is correct, and modify table attributes or add additional fields as required, as illustrated in the sketch below.
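To make the mapping step concrete, here is a hypothetical model of one mapping entry expressed as a Python dictionary, including a sharding case where several source shard tables feed one destination table and an additional field is appended. This is not DataArts Studio's actual job format; every name below is an illustrative assumption.

```python
# Hypothetical table-mapping model (NOT DataArts Studio's actual format).
shard_mapping = {
    # Database/Table shard scenario: several source shards, one destination.
    "source_tables": ["db_00.orders_00", "db_00.orders_01", "db_01.orders_00"],
    "destination_table": "dws01.orders",
    "column_map": {"id": "id", "amount": "amount"},  # source -> destination
    "additional_fields": {
        # Extra destination-side column, e.g., an ingestion timestamp.
        "sync_time": "CURRENT_TIMESTAMP",
    },
}

# Sanity check: mapped destination columns must not collide with added fields.
overlap = set(shard_mapping["column_map"].values()) & set(
    shard_mapping["additional_fields"])
assert not overlap, f"column collision: {overlap}"
```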
  8. (Optional) Configure DDL message processing rules.

    Real-time migration jobs can synchronize data manipulation language (DML) operations, such as inserting, updating, and deleting data, as well as some table structure changes made with data definition language (DDL) statements. You can set the processing policy for a DDL operation to Normal processing, Ignore, or Error; a conceptual sketch of these policies follows the figure below.

    • Normal processing: When a DDL operation on the source database or table is detected, the operation is automatically synchronized to the destination.
    • Ignore: When a DDL operation on the source database or table is detected, the operation is ignored and not synchronized to the destination.
    • Error: When a DDL operation on the source database or table is detected, the migration job throws an exception.
      Figure 4 DDL configuration
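Conceptually, the three policies behave like the dispatcher below. This is a hedged illustration only; the event shape, function names, and exception are assumptions, not DataArts Studio's internal API.

```python
# Conceptual sketch of the three DDL processing policies (illustrative only).
class DDLEncounteredError(Exception):
    """Raised when the policy is 'Error' and a DDL operation is detected."""

def handle_ddl(ddl_statement: str, policy: str, apply_to_destination) -> None:
    """Dispatch a captured source DDL statement according to the policy."""
    if policy == "Normal processing":
        apply_to_destination(ddl_statement)  # replay the DDL downstream
    elif policy == "Ignore":
        pass  # drop the DDL; data synchronization continues unchanged
    elif policy == "Error":
        raise DDLEncounteredError(f"DDL detected on source: {ddl_statement}")
    else:
        raise ValueError(f"Unknown DDL policy: {policy}")

# Example: silently skip a column addition detected on the source.
handle_ddl("ALTER TABLE orders ADD COLUMN note VARCHAR(64)",
           "Ignore", apply_to_destination=print)
```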

  9. Configure task parameters.

    Table 2 Task parameters

    • Execution Memory: Memory allocated for job execution, which automatically changes with the number of CPU cores. Default: 8 GB
    • CPU Cores: Value range: 2 to 32. For each CPU core added, 4 GB of execution memory and one concurrency are automatically added (see the sketch after this table). Default: 2
    • Maximum Concurrent Requests: Maximum number of jobs that can be executed concurrently. This parameter does not need to be configured; it automatically changes with the number of CPU cores. Default: 1
    • Auto Retry: Whether to enable automatic retry upon a job failure. Default: No
    • Maximum Retries: Displayed when Auto Retry is set to Yes. Default: 1
    • Retry Interval (Seconds): Displayed when Auto Retry is set to Yes. Default: 120
    • Write Dirty Data: Whether to record dirty data. By default, dirty data is not recorded; a large amount of dirty data slows down task synchronization. Whether dirty data can be written depends on the data connection. Default: No
      • No (default): Dirty data is not recorded and is not allowed. If dirty data is generated during synchronization, the task fails and exits.
      • Yes: Dirty data is allowed, that is, dirty data does not affect task execution. When dirty data is allowed and a threshold is set:
        • If the generated dirty data is within the threshold, the synchronization task ignores it (that is, the dirty data is not written to the destination) and runs normally.
        • If the generated dirty data exceeds the threshold, the synchronization task fails and exits.
      NOTE: Dirty data is data that is meaningless to services, is in an invalid format, or is generated when the synchronization task encounters an error. If an exception occurs when a piece of data is written to the destination, that piece of data is dirty data; data that fails to be written is therefore classified as dirty data. For example, if VARCHAR data at the source is written to a destination column of the INT type, the data cannot be written to the destination because the conversion fails. When configuring a synchronization task, you can choose whether to write dirty data during the synchronization and set the number of dirty data records allowed (the maximum number of error records allowed in a single partition) so that the task can keep running; when the number of dirty data records exceeds the threshold, the task fails and exits.
    • Dirty Data Policy: Displayed when Write Dirty Data is set to Yes. The following policies are supported. Default: Do not archive
      • Do not archive: Dirty data is only recorded in job logs, but not stored.
      • Archive to OBS: Dirty data is stored in OBS and printed in job logs.
    • Write Dirty Data Link: Displayed when Dirty Data Policy is set to Archive to OBS. Dirty data can only be written to OBS links. Default: N/A
    • Dirty Data Directory: OBS directory to which dirty data will be written. Default: N/A
    • Dirty Data Threshold: Displayed only when Write Dirty Data is set to Yes. Set the dirty data threshold as required. Default: 100
      NOTE:
      • The dirty data threshold takes effect for each concurrency. For example, if the threshold is 100 and the concurrency is 3, the job allows a maximum of 300 dirty data records.
      • The value -1 indicates that the number of dirty data records is not limited.
    • Add Custom Attribute: You can add custom attributes to modify some job parameters and enable some advanced functions. For details, see Job Performance Optimization. Default: N/A
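The scaling rules in Table 2 amount to a little arithmetic. The sketch below encodes them (a base of 2 cores, 8 GB, and concurrency 1, plus 4 GB and one concurrency per added core) together with the per-concurrency dirty-data threshold. The function names are illustrative, not a service API.

```python
# Worked arithmetic for the scaling rules in Table 2 (illustrative only).
BASE_CORES, BASE_MEMORY_GB, BASE_CONCURRENCY = 2, 8, 1

def job_resources(cpu_cores: int) -> tuple[int, int]:
    """Return (execution memory in GB, concurrency) for a given core count."""
    assert 2 <= cpu_cores <= 32, "CPU Cores must be in the range 2 to 32"
    added = cpu_cores - BASE_CORES
    return BASE_MEMORY_GB + 4 * added, BASE_CONCURRENCY + added

def max_dirty_records(threshold: int, concurrency: int) -> float:
    """Job-wide dirty-data cap; a threshold of -1 means unlimited."""
    return float("inf") if threshold == -1 else float(threshold * concurrency)

memory_gb, concurrency = job_resources(4)
print(memory_gb, concurrency)               # 16 3
print(max_dirty_records(100, concurrency))  # 300.0, matching Table 2's example
```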

  10. Submit and run the job.

    After configuring the job, click Submit in the upper left corner to submit the job.

    Figure 5 Submitting the job

    After submitting the job, click Start on the job development page. In the displayed dialog box, set required parameters and click OK.

    Figure 6 Starting the job
    Table 3 Parameters for starting the job

    • Synchronization Mode:
      Common data synchronization modes:
      • Incremental synchronization: Incremental data is synchronized starting from a specified time point.
      • Full and incremental synchronization: All data is synchronized first, and then incremental data is synchronized in real time.
      Kafka data synchronization modes:
      • Earliest: Data consumption starts from the earliest offset of the Kafka topic.
      • Latest: Data consumption starts from the latest offset of the Kafka topic.
      • Start/End time: Data consumption starts from the Kafka topic offset that corresponds to the specified time (see the sketch after this table).
    • Time: Time when incremental synchronization starts. This parameter is mandatory when Synchronization Mode is set to Incremental synchronization or Start/End time.
      NOTE:
      • If you set a time earlier than the earliest time of the incremental data log, data consumption starts from the latest log time by default.
      • If you set a time earlier than the earliest offset of Kafka messages, data consumption starts from the earliest offset by default.
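For the Kafka modes, the sketch below shows how the three start positions map onto consumer seek operations, using the open-source kafka-python client. The broker address, topic, partition, group, and start time are placeholder assumptions; DataArts Studio performs the equivalent internally.

```python
# Sketch of the three Kafka start positions using kafka-python.
# Broker, topic, group, and start time below are placeholder assumptions.
from datetime import datetime, timezone
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="broker.example.com:9092",
                         group_id="migration-sketch")
tp = TopicPartition("source-topic", 0)  # one partition for illustration
consumer.assign([tp])

consumer.seek_to_beginning(tp)  # "Earliest": oldest retained offset
consumer.seek_to_end(tp)        # "Latest": only messages produced from now on

# "Start/End time": resolve a timestamp (ms) to the first offset at or after
# it. A time earlier than the oldest message resolves to the earliest offset,
# matching the behavior described in the note above.
start_ms = int(datetime(2025, 2, 18, tzinfo=timezone.utc).timestamp() * 1000)
offsets = consumer.offsets_for_times({tp: start_ms})
if offsets[tp] is not None:
    consumer.seek(tp, offsets[tp].offset)
else:
    consumer.seek_to_end(tp)  # no message at or after start_ms
```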

  11. Monitor the job.

    On the job development page, click Monitor to go to the Job Monitoring page. You can view the status and log of the job, and configure alarm rules for the job. For details, see Real-Time Migration Job O&M.

    Figure 7 Monitoring the job
