Processing Data

Updated on 2025-02-17 GMT+08:00

DRS processes synchronized objects and allows you to add processing rules for selected objects. The rules supported differ by data flow type, and currently only some data flow types support data processing. For details, see Table 1.

Table 1 Data flow types that support data processing

| Synchronization Direction | Data Flow | Data Filtering | Additional Column | Column Processing |
|---------------------------|-----------|----------------|-------------------|-------------------|
| To the cloud | MySQL -> MySQL | Supported | Supported | Supported |
| To the cloud | MySQL -> GaussDB Distributed | Supported | Supported | Supported |
| To the cloud | MySQL -> GaussDB Centralized | Supported | Supported | Supported |
| To the cloud | MySQL -> GaussDB(DWS) | Supported | Supported | Not supported |
| To the cloud | MySQL -> TaurusDB | Supported | Supported | Supported |
| To the cloud | MySQL -> MariaDB | Supported | Supported | Supported |
| To the cloud | DDM -> MySQL | Not supported | Not supported | Supported |
| To the cloud | DDM -> GaussDB(DWS) | Not supported | Supported | Not supported |
| To the cloud | Oracle -> GaussDB(DWS) | Supported | Supported | Not supported |
| To the cloud | Oracle -> MySQL | Supported | Not supported | Not supported |
| To the cloud | Oracle -> TaurusDB | Supported | Not supported | Not supported |
| To the cloud | Oracle -> GaussDB Centralized | Supported | Not supported | Supported |
| To the cloud | Oracle -> GaussDB Distributed | Supported | Not supported | Supported |
| To the cloud | DB2 for LUW -> GaussDB Centralized | Supported | Not supported | Not supported |
| To the cloud | DB2 for LUW -> GaussDB Distributed | Supported | Not supported | Not supported |
| To the cloud | MariaDB -> MariaDB | Supported | Not supported | Not supported |
| To the cloud | MariaDB -> MySQL | Supported | Supported | Supported |
| To the cloud | MariaDB -> TaurusDB | Supported | Supported | Supported |
| From the cloud | MySQL -> MySQL | Supported | Supported | Supported |
| From the cloud | MySQL -> Kafka | Not supported | Not supported | Supported |
| From the cloud | MySQL -> CSS/ES | Supported | Not supported | Supported |
| From the cloud | MySQL -> Oracle | Supported | Not supported | Not supported |
| From the cloud | MySQL -> MariaDB | Supported | Supported | Supported |
| From the cloud | DDM -> MySQL | Not supported | Not supported | Supported |
| From the cloud | GaussDB Centralized -> MySQL | Supported | Not supported | Not supported |
| From the cloud | GaussDB Centralized -> Oracle | Supported | Not supported | Supported |
| From the cloud | GaussDB Centralized -> Kafka | Not supported | Not supported | Supported |
| From the cloud | GaussDB Centralized -> GaussDB(DWS) | Supported | Not supported | Not supported |
| From the cloud | GaussDB Centralized -> GaussDB Distributed | Supported | Not supported | Supported |
| From the cloud | GaussDB Centralized -> GaussDB Centralized | Supported | Not supported | Supported |
| From the cloud | GaussDB Distributed -> MySQL | Supported | Not supported | Not supported |
| From the cloud | GaussDB Distributed -> Oracle | Supported | Not supported | Supported |
| From the cloud | GaussDB Distributed -> GaussDB(DWS) | Supported | Not supported | Not supported |
| From the cloud | GaussDB Distributed -> Kafka | Not supported | Not supported | Supported |
| From the cloud | GaussDB Distributed -> GaussDB Distributed | Supported | Not supported | Supported |
| From the cloud | GaussDB Distributed -> GaussDB Centralized | Supported | Not supported | Supported |
| From the cloud | TaurusDB -> MySQL | Supported | Supported | Not supported |
| From the cloud | TaurusDB -> GaussDB(DWS) | Not supported | Supported | Not supported |
| From the cloud | TaurusDB -> CSS/ES | Supported | Not supported | Supported |
| From the cloud | MariaDB -> MariaDB | Supported | Not supported | Not supported |
| Self-built -> Self-built | MySQL -> Kafka | Not supported | Not supported | Supported |
| Self-built -> Self-built | MySQL -> CSS/ES | Supported | Not supported | Supported |
| Self-built -> Self-built | MySQL -> GaussDB Distributed | Supported | Supported | Supported |
| Self-built -> Self-built | MySQL -> GaussDB Centralized | Supported | Supported | Supported |
| Self-built -> Self-built | Oracle -> GaussDB Centralized | Supported | Not supported | Supported |
| Self-built -> Self-built | Oracle -> GaussDB Distributed | Supported | Not supported | Supported |
| Self-built -> Self-built | GaussDB Centralized -> Kafka | Not supported | Not supported | Supported |
| Self-built -> Self-built | GaussDB Distributed -> Kafka | Not supported | Not supported | Supported |
| Self-built -> Self-built | DB2 for LUW -> GaussDB Centralized | Supported | Not supported | Not supported |
| Self-built -> Self-built | DB2 for LUW -> GaussDB Distributed | Supported | Not supported | Not supported |

Adding Additional Columns

  1. On the Process Data page of the real-time synchronization task, click Additional Columns, locate the table to be processed, and click Add in the Operation column.

    Figure 1 Additional columns

  2. In the displayed Add dialog box, specify the column name, operation type, and field type. Click OK.

    Figure 2 Operation types

    NOTE:
    • In many-to-one mapping scenarios, additional columns for data processing are required to avoid data conflicts.
    • The following operation types are supported:
      • Default: Fill the new column with a specified default value.
      • create_time/update_time: Fill the new column with the data creation time or data update time.
      • Expression: Fill the new column with the concat(_current_database, '@', _current_table) expression. You cannot manually enter an expression.
      • serverName@database@table: Enter a server name; the database name and table name are then filled in automatically.
      • Value: Select a value, for example, the synchronization time.
    • You can apply the additional column information of the first editable table to all editable tables in batches.
    • During MySQL to TaurusDB synchronization, if the number of columns in a single table exceeds 500, the number of additional columns added to the table may exceed the upper limit. As a result, the task fails.
    • If serverName@database@table is used to add an additional column, this additional column will be used on the destination database as an implicit filtering condition for row comparison and value comparison by default.
    • For a table with additional columns, the DDL operations of dropping a table and then creating a table are not supported in the incremental synchronization phase.
    • During task editing in the many-to-one mapping scenario, if the new table has been synchronized, many-to-one mapping has been performed, and additional columns have been set, you need to reset the additional columns for the table. Otherwise, the additional column settings in the last synchronization are retained by default.

  3. Click Next.
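As an illustration of the note above, the following sketch (hypothetical Python, not DRS code; the shard and table names are invented) shows how an expression-style additional column such as concat(_current_database, '@', _current_table) gives rows from different source tables a distinguishing value in a many-to-one mapping:

```python
# Hypothetical sketch (not DRS code): how an additional column built from the
# concat(_current_database, '@', _current_table) expression disambiguates rows
# when several source tables are merged into one destination table.

def additional_column_value(database: str, table: str) -> str:
    """Mimic the concat(_current_database, '@', _current_table) expression."""
    return f"{database}@{table}"

# Two invented source shards write the same primary key into one destination
# table during a many-to-one synchronization.
row_from_shard1 = {"id": 1, "name": "alice",
                   "src": additional_column_value("db_shard_1", "orders")}
row_from_shard2 = {"id": 1, "name": "bob",
                   "src": additional_column_value("db_shard_2", "orders")}

# Without the additional column the rows collide on id = 1; with it,
# (id, src) still identifies each row's origin.
print(row_from_shard1["src"])  # db_shard_1@orders
print(row_from_shard2["src"])  # db_shard_2@orders
```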

Filtering Data

After a data filtering rule is added, updates on the source database can affect data consistency. For example:
  • If the filter criteria are met after the update, the synchronization continues and the same update operation is performed on the destination database. If no data is matched there (for example, because the row was filtered out earlier), the operation is ignored, causing data inconsistency.
  • If the filter criteria are not met after the update, the synchronization continues and the same update operation is performed on the destination database.
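The inconsistency described above can be reproduced in miniature. The sketch below uses plain SQLite (not DRS) and assumes a hypothetical filter rule quantity > 10 was applied at the initial load, so a row that only meets the criteria after a later update is missing at the destination and the replayed update matches nothing:

```python
import sqlite3

# Miniature reproduction of the inconsistency above, using plain SQLite
# (not DRS). Assumed filter rule at the initial load: quantity > 10.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, quantity INTEGER)")

# The row fails the filter, so the initial load copies nothing.
src.execute("INSERT INTO t VALUES (1, 5)")

# A later source update makes the row meet the filter criteria.
src.execute("UPDATE t SET quantity = 20 WHERE id = 1")

# Replaying the same update on the destination matches no rows,
# so it is silently ignored and the databases diverge.
cur = dst.execute("UPDATE t SET quantity = 20 WHERE id = 1")
print(cur.rowcount)                                          # 0
print(dst.execute("SELECT COUNT(*) FROM t").fetchone()[0])   # 0
```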
  1. On the Process Data page of the real-time synchronization task, set Processing Type to Data filtering.

    Figure 3 Filtering data

  2. In the Object area, select the table to be processed.
  3. In the Filtering Criteria area, enter the filter criteria (only the part after WHERE in the SQL statement, for example, id=1), and click Verify.

    NOTE:
    • Each table has only one verification rule.
    • Up to 512 tables can be filtered at a time. If there are more than 512 tables, perform rule verifications in batches.
    • The filter expression cannot use the package, function, variable, or constant of a specific DB engine. It must comply with the general SQL standard. Enter the part following WHERE in the SQL statement (excluding WHERE and semicolons), for example, sid > 3 and sname like 'G%'. A maximum of 512 characters is allowed.
    • In SQL statements used to set filter criteria, keywords must be enclosed in backquotes, and datetime values (including date and time) and character strings must be enclosed in single quotation marks, for example, `update` > '2022-07-13 00:00:00' and age > 10, or `update` = 'abc'.
    • If the TIMESTAMP type is used as a filtering condition, the time of the character type must be set to the time value in the UTC time zone. For example, in MySQL, the TIMESTAMP data is stored based on the UTC time zone. You need to use the time value in the UTC time zone for comparison.
    • Implicit conversion rules are not supported. Enter filtering criteria of a valid data type. For example, if column c of an Oracle database uses characters of the varchar2 type, the filtering criteria must be set to c > '10' instead of c > 10.
    • Filter criteria cannot be configured for large objects, such as CLOB, BLOB, and BYTEA.
    • Filtering rules cannot be set for objects whose database names and table names contain newline characters.
    • The syntax of row-level locks, such as for update, cannot be used as filtering criteria.
    • Function operations cannot be performed on column names. If function operations are performed, data may be inconsistent.
    • You are not advised to set filter criteria for fields of approximate numeric types, such as FLOAT, DECIMAL, and DOUBLE.
    • Do not use fields containing special characters as a filter condition.
    • You are advised not to perform DDL operations on columns involved in filter criteria. Otherwise, task exceptions may occur.
    • You are not advised to use non-idempotent expressions or functions as data processing conditions, such as SYSTIMESTAMP and SYSDATE, because the returned result may be different each time the function is called.
    • The filtering rules for a synchronized table cannot be modified.
    • During data filtering for real-time synchronization with Oracle serving as the source database, the fixed-length character types NCHAR and CHAR must be matched using complete fixed-length characters.

  4. After the verification is successful, click Generate Processing Rule. The rule is displayed.
  5. Click Next.
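The quoting rules in the note above can be tried against a local database. The sketch below uses SQLite (which also accepts MySQL-style backquoted identifiers) rather than DRS, with an invented orders table, to show a WHERE fragment that keeps the keyword column in backquotes and the datetime literal in single quotation marks:

```python
import sqlite3

# Local sketch of the quoting rules above, using SQLite (which also accepts
# MySQL-style backquoted identifiers) instead of DRS; the orders table and
# its data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (sid INTEGER, sname TEXT, `update` TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (2, "Gary",  "2022-07-12 09:00:00"),
    (5, "Grace", "2022-07-13 08:30:00"),
    (7, "Henry", "2022-07-14 10:00:00"),
])

# Keyword column in backquotes, datetime literal in single quotation marks,
# and only the part that would follow WHERE (no WHERE, no semicolon).
criteria = "sid > 3 and sname like 'G%' and `update` > '2022-07-13 00:00:00'"
assert len(criteria) <= 512  # DRS caps a filter expression at 512 characters

rows = conn.execute(f"SELECT sname FROM orders WHERE {criteria}").fetchall()
print(rows)  # [('Grace',)]
```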

Advanced Settings for Data Filtering

If you need to query an association table, you can use the advanced settings of data processing.

  1. On the Process Data page of the real-time synchronization task, set Processing Type to Data filtering.
  2. In the Object area, select the table to be processed.
  3. In the Filtering Criteria area, specify the filtering criteria, for example, id1 in (select id from db1.tab1 where id >=3 and id <10), and click Verify.

    NOTE:
    • Each table has only one verification rule.
    • Up to 512 tables can be filtered at a time. If there are more than 512 tables, perform rule verifications in batches.
    • The filter expression cannot use the package, function, variable, or constant of a specific DB engine. It must comply with the general SQL standard. Enter the part following WHERE in the SQL statement (excluding WHERE and semicolons), for example, sid > 3 and sname like 'G%'. A maximum of 512 characters is allowed.
    • Implicit conversion rules are not supported. Enter filtering criteria of a valid data type. For example, if column c of an Oracle database uses characters of the varchar2 type, the filtering criteria must be set to c > '10' instead of c > 10.
    • Filter criteria cannot be configured for large objects, such as CLOB, BLOB, and BYTEA.
    • Filtering rules cannot be set for objects whose database names and table names contain newline characters.
    • The syntax of row-level locks, such as for update, cannot be used as filtering criteria.
    • Data changes in a referenced table are not supported, which may cause data inconsistency during synchronization.
    • You are not advised to set filter criteria for fields of approximate numeric types, such as FLOAT, DECIMAL, and DOUBLE.
    • Do not use fields containing special characters as a filter condition.
    • You are not advised to use non-idempotent expressions or functions as data processing conditions, such as SYSTIMESTAMP and SYSDATE, because the returned result may be different each time the function is called.
    • During data filtering for real-time synchronization with Oracle serving as the source database, the fixed-length character types NCHAR and CHAR must be matched using complete fixed-length characters.

  4. After the verification is successful, click Generate Processing Rule. The rule is displayed.
  5. In the Advanced Settings area, specify the configuration condition and rule for the association table to help you filter data.

    Figure 4 Advanced settings
    1. In the Configuration Condition area, enter the association table information entered in Step 3.

      Database Name, Table Name, Column Name, Primary Key, Index, and Filter Criteria are mandatory. If the table does not have an index, enter its primary key.

      Filter Criteria is the filter condition for the association table information entered in Step 3.

    2. Then, click Verify.
    3. After the verification is successful, click Generate Configuration Rule. The rule is displayed in the Configuration Rule area.

      To filter data in multiple association tables, repeat Step 5.

      NOTE:

      Configuration rules can be deleted.

  6. Click Next.
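The association-table criteria used above behave like an ordinary IN subquery. A minimal local sketch (SQLite, not DRS; all table contents are invented, and tab1 stands in for db1.tab1 since no cross-database prefix is attached here):

```python
import sqlite3

# Local sketch (SQLite, not DRS) of the association-table criteria above;
# tab1 stands in for db1.tab1, and all table contents are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab0 (id1 INTEGER, payload TEXT)")
conn.execute("CREATE TABLE tab1 (id INTEGER)")
conn.executemany("INSERT INTO tab0 VALUES (?, ?)",
                 [(1, "a"), (4, "b"), (9, "c"), (12, "d")])
conn.executemany("INSERT INTO tab1 VALUES (?)", [(3,), (4,), (9,)])

# The filter is an ordinary IN subquery over the association table.
criteria = "id1 in (select id from tab1 where id >= 3 and id < 10)"
rows = conn.execute(
    f"SELECT id1 FROM tab0 WHERE {criteria} ORDER BY id1").fetchall()
print(rows)  # [(4,), (9,)]
```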

Processing Columns

  1. On the Process Data page of the real-time synchronization task, select Processing Columns.
  2. Select a column processing mode.

    NOTE:

    Only MySQL-to-GaussDB and Oracle-to-GaussDB synchronization tasks support column processing by importing files. For other tasks, column processing is performed by selecting objects by default.

    • Select Objects
      1. In the Object area, select the objects to be processed.
        Figure 5 Processing columns
      2. Click Edit to the right of the selected object.
      3. In the Edit Column dialog box, select the columns to be mapped and enter new column names.
        Figure 6 Editing a column

        NOTE:
        • You can query or filter columns or create new column names.
        • After the column name is edited, the column name of the destination database is changed to the new name.
        • The new column name cannot be the same as the original column name or an existing column name.
        • Columns whose database names or table names contain newline characters cannot be mapped.
        • The column name in the synchronized table cannot be modified.
        • Only selected columns are synchronized. Newly-added columns are not included in column processing.
        • The partitioned table does not support column mapping or column filtering.
        • In the incremental phase, DDL operations cannot be performed on filtered, mapped, or additional columns in a table.
        • For a table on which column filtering, column mapping, and additional column adding are performed, the DDL operations of dropping a table and then creating a table are not supported in the incremental synchronization phase.
        • If the source database is MySQL or TaurusDB, column filtering and mapping are not supported for columns that have function-based indexes.
        • When the source database is MySQL or TaurusDB and column mapping and processing are configured, if the table structure of the destination database contains columns configured with column mapping and processing, DRS will delete these columns. If there is service data in these columns, exercise caution when using column processing and mapping.
      4. Click OK.
    • Import object file
      1. On the Process Data page of the real-time synchronization task, choose Processing Columns > Import object file.
      2. Click Download Template.
        Figure 7 Processing columns

      3. In the downloaded Excel file, enter information about the objects to be imported.
      4. Click Select File. In the displayed dialog box, select the edited template.
      5. Click Upload.

  3. Click Next.
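The renaming constraints in the note above (a new column name must differ from the original name and from every existing column name) can be sketched as a small validation helper; this is hypothetical illustration code, not part of DRS:

```python
# Hypothetical helper (not part of DRS) sketching the renaming constraints
# above: a new column name must differ from the original name and must not
# collide with any existing column name.

def validate_column_mapping(columns, mapping):
    """Return a list of rule violations for {old_name: new_name} renames."""
    errors = []
    for old, new in mapping.items():
        if old not in columns:
            errors.append(f"{old}: no such column")
        elif new == old:
            errors.append(f"{old}: new name equals the original name")
        elif new in columns:
            errors.append(f"{old}: {new!r} collides with an existing column")
    return errors

cols = ["id", "name", "created_at"]
print(validate_column_mapping(cols, {"name": "full_name"}))  # [] -> valid
print(validate_column_mapping(cols, {"name": "id"}))         # collision reported
```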

Viewing Data Filtering Results

  1. On the Data Synchronization Management page, click the task to be processed.
  2. Click the Process Data tab to view data filtering records. Click the refresh icon in the upper right corner to refresh the record list.

Viewing Column Processing

  1. On the Data Synchronization Management page, click the target synchronization task name in the Task Name/ID column.
  2. In the navigation pane on the left, choose Synchronization Mapping. In the upper right corner, select Columns to view column mapping records. Click the refresh icon to refresh the record list.

    Figure 8 Viewing column mappings
