Updated on 2024-04-17 GMT+08:00

Configuring Synchronization Policies in Batches

Function

  • This API is used to configure synchronization policies in batches, including conflict policies, DROP DATABASE filtering, and the object synchronization scope.
  • This API is used to configure Kafka synchronization policies.

Debugging

You can debug the API in API Explorer, which supports automatic authentication. API Explorer can automatically generate SDK code examples and debug them.

Constraints

  • This API can be called only after a task is created, the task status is CONFIGURATION, the connection test for the source and destination databases is successful, and the API for modifying the task has been called successfully.
  • Kafka synchronization policies can be configured for the following data flow scenarios: synchronization from PostgreSQL to Kafka, from Oracle to Kafka, from GaussDB to Kafka, from GaussDB(for MySQL) to Kafka, and from MySQL to Kafka.
  • GaussDB(for MySQL)-to-Kafka synchronization and MySQL-to-Kafka synchronization allow you to modify the Kafka policy configuration when the task is in the INCRE_TRANSFER_STARTED state. After the configuration is modified, you can edit the synchronization objects only after the task status returns to INCRE_TRANSFER_STARTED.

URI

POST /v3/{project_id}/jobs/batch-sync-policy

Table 1 Path parameters

  • project_id (String, mandatory): Project ID of a tenant in a region. For details about how to obtain the project ID, see Obtaining a Project ID.

Request Parameters

Table 2 Request header parameters

  • Content-Type (String, mandatory): The content type. The default value is application/json.
  • X-Auth-Token (String, mandatory): User token obtained from IAM.
  • X-Language (String, optional): Request language type. Values: en-us, zh-cn. Default value: en-us.

Table 3 Request body parameters

  • jobs (Array of objects, mandatory): List of requests for setting synchronization policies in batches. For details, see Table 4.

Table 4 Data structure description of field jobs

  • job_id (String, mandatory): Task ID.
  • conflict_policy (String, optional): Conflict policy. Values:
      • ignore: Ignore the conflict.
      • overwrite: Overwrite the existing data with the conflicting data.
      • stop: Report an error.
  • filter_ddl_policy (String, optional): DDL filtering policy. Value: drop_database.
  • ddl_trans (Boolean, optional): Whether to synchronize DDL during incremental synchronization.
  • index_trans (Boolean, optional): Whether to synchronize indexes during incremental synchronization.

  • topic_policy (String, optional): Topic synchronization policy. This parameter is mandatory when the destination database is Kafka.

    Values for synchronization from GaussDB Distributed to Kafka:
      • 0: A specified topic
      • 1: Automatically generated using the database_name-schema_name-table_name format
      • 2: Automatically generated based on the database name
      • 3: Automatically generated using the database_name-schema_name format
      • 4: Automatically generated using the database_name-DN_sequence_number format

    Values for synchronization from GaussDB Primary/Standby to Kafka and from PostgreSQL to Kafka:
      • 0: A specified topic
      • 1: Automatically generated using the database_name-schema_name-table_name format
      • 2: Automatically generated based on the database name
      • 3: Automatically generated using the database_name-schema_name format

    Values for synchronization from Oracle to Kafka:
      • 0: A specified topic
      • 1: Automatically generated using the schema_name-table_name format
      • 3: Automatically generated based on the schema name

    Values for synchronization from MySQL to Kafka and from GaussDB(for MySQL) to Kafka:
      • 0: A specified topic
      • 1: Auto-generated topics

  • topic (String, optional): Topic name. This parameter is mandatory when topic_policy is set to 0. Ensure that the topic exists.

  • partition_policy (String, optional): Policy for synchronizing topics to Kafka partitions. This parameter is mandatory when the destination database is Kafka. Values:
      • 0: Partitions are differentiated by the hash values of database_name.schema_name.table_name
      • 1: Topics are synchronized to partition 0
      • 2: Partitions are differentiated by the hash values of the primary key
      • 3: Partitions are differentiated by the hash values of database_name.schema_name
      • 4: Partitions are differentiated by the hash values of database_name.DN_sequence_number (available only for synchronization from GaussDB Distributed to Kafka)
      • 5: Partitions are differentiated by the hash values of non-primary-key columns

    The allowed values depend on topic_policy:
      • topic_policy 0: 0, 1, 2, 3, 4, or 5
      • topic_policy 1: 1, 2, or 5
      • topic_policy 2: 0, 1, 3, or 4
      • topic_policy 3: 0 or 1
      • topic_policy 4: 0, 1, or 3
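The topic_policy/partition_policy compatibility rules above amount to a small lookup table. A minimal client-side sketch (this helper is illustrative, not part of the API or any SDK):

```python
# Allowed partition_policy values per topic_policy, as documented above.
# Both values are strings in the request body, so the keys/members are strings.
ALLOWED_PARTITION_POLICIES = {
    "0": {"0", "1", "2", "3", "4", "5"},
    "1": {"1", "2", "5"},
    "2": {"0", "1", "3", "4"},
    "3": {"0", "1"},
    "4": {"0", "1", "3"},
}

def is_valid_combination(topic_policy: str, partition_policy: str) -> bool:
    """Return True if partition_policy is allowed for the given topic_policy."""
    return partition_policy in ALLOWED_PARTITION_POLICIES.get(topic_policy, set())

print(is_valid_combination("1", "2"))  # True
print(is_valid_combination("3", "2"))  # False
```

Validating the pair before calling the API avoids a round trip that would fail with a parameter error.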

  • kafka_data_format (String, optional): Data format delivered to Kafka. Values:
      • json
      • avro
      • json_c

    If this parameter is left blank, the default value is json.

    NOTE: The value can be json or json_c for synchronization from MySQL to Kafka and from GaussDB(for MySQL) to Kafka, and json or avro for other synchronization scenarios.

  • topic_name_format (String, optional): Topic name format. This parameter is mandatory if topic_policy is set to 1, 2, or 3.

    Values for synchronization from PostgreSQL to Kafka and from GaussDB Primary/Standby to Kafka:
      • If topic_policy is set to 1, the topic name supports the database, schema, and table names as variables. Other characters are treated as constants. $database$ indicates the database name, $schema$ indicates the schema name, and $tablename$ indicates the table name. If this parameter is left blank, the default value is $database$-$schema$-$tablename$.
      • If topic_policy is set to 2, the topic name supports the database name as a variable. Other characters are treated as constants. If this parameter is left blank, the default value is $database$.
      • If topic_policy is set to 3, the topic name supports the database and schema names as variables. Other characters are treated as constants. If this parameter is left blank, the default value is $database$-$schema$.

    Values for synchronization from Oracle to Kafka:
      • If topic_policy is set to 1, the topic name supports the schema and table names as variables. Other characters are treated as constants. $schema$ indicates the schema name, and $tablename$ indicates the table name. If this parameter is left blank, the default value is $schema$-$tablename$.
      • If topic_policy is set to 3, the topic name supports the schema name as a variable. Other characters are treated as constants. If this parameter is left blank, the default value is $schema$.

    Values for synchronization from MySQL to Kafka and from GaussDB(for MySQL) to Kafka:
      • If topic_policy is set to 1, the topic name supports the database and table names as variables. Other characters are treated as constants. $database$ indicates the database name, and $tablename$ indicates the table name. If this parameter is left blank, the default value is $database$-$tablename$.
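The placeholder substitution described above can be illustrated with a short sketch. The helper below is hypothetical, not part of DRS; it only shows how the documented $database$/$schema$/$tablename$ variables resolve to a concrete topic name:

```python
def render_topic_name(template: str, database: str,
                      schema: str = "", table: str = "") -> str:
    """Substitute the documented placeholder variables.

    Any other characters in the template are kept as constants,
    matching the behavior described for topic_name_format.
    """
    return (template
            .replace("$database$", database)
            .replace("$schema$", schema)
            .replace("$tablename$", table))

# MySQL-to-Kafka default template for topic_policy = 1:
print(render_topic_name("$database$-$tablename$", "mydb", table="orders"))
# mydb-orders
```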

  • partitions_num (String, optional): Number of partitions. The value ranges from 1 to 2147483647. This parameter is mandatory if topic_policy is set to 1, 2, or 3. If this parameter is left blank, the default value is 1.
  • replication_factor (String, optional): Number of replicas. The value ranges from 1 to 32767. This parameter is mandatory if topic_policy is set to 1, 2, or 3. If this parameter is left blank, the default value is 1.

  • is_fill_materialized_view (Boolean, optional): Whether to fill the materialized view in the PostgreSQL full migration/synchronization phase. If this parameter is not specified, the default value is false.
  • export_snapshot (Boolean, optional): Whether to export data in snapshot mode in the PostgreSQL full migration/synchronization phase. If this parameter is not specified, the default value is false.
  • slot_name (String, optional): Replication slot name. This parameter is mandatory for tasks from GaussDB Primary/Standby to Kafka.

  • file_and_position (String, optional):
      • When MySQL is the source database, run show master status to obtain the start point of the source database and set File and Position accordingly. The value is in the format File_name.file_number:Event_position, for example, mysql-bin.000277:805. The file name can contain only 1 to 60 characters and cannot contain the special characters <>&:"'/\\, the file number can contain only 3 to 20 digits, the binlog event position can contain only 1 to 20 digits, and the total length cannot exceed 100 characters.
      • When MongoDB is the source database, logs are obtained starting from the specified start position, which must fall within the oplog time range. To check whether it does, run db.getReplicationInfo() for a non-cluster instance, or db.watch([], {startAtOperationTime: Timestamp(xx, xx)}) for a cluster instance, where xx is the start position you specified. The value is in the format timestamp:incre, where timestamp and incre are integers ranging from 1 to 2147483647.
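For MySQL sources, the File_name.file_number:Event_position format can be pre-checked on the client side. A sketch under the constraints stated above (the validator is illustrative, not part of the API):

```python
import re

# Validate a MySQL file_and_position string such as "mysql-bin.000277:805":
# file name 1-60 chars without <>&:"'/\ , file number 3-20 digits,
# binlog event position 1-20 digits, total length at most 100 characters.
_PATTERN = re.compile(
    r'^(?P<name>[^<>&:"\'/\\]{1,60})\.(?P<number>\d{3,20}):(?P<position>\d{1,20})$'
)

def is_valid_file_and_position(value: str) -> bool:
    return len(value) <= 100 and _PATTERN.match(value) is not None

print(is_valid_file_and_position("mysql-bin.000277:805"))  # True
print(is_valid_file_and_position("mysql-bin.17:805"))      # False (file number too short)
```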

  • gtid_set (String, optional):
      • This parameter is mandatory when the source database is MySQL. Run show master status to obtain the start point of the source database and set Executed_Gtid_Set accordingly. (MySQL 5.5 source databases do not support synchronization tasks.)
      • Enter a maximum of 2048 characters. Chinese characters and the following special characters are not allowed: < > & " ' / \\

  • ddl_topic (String, optional): Topic for storing DDLs. This parameter is mandatory when the destination database is Kafka and ddl_trans is set to true. Value: name of an existing topic in the destination database.

Response Parameters

Status code: 200

Table 5 Response body parameters

  • count (Integer): Total number.
  • results (Array of objects): List of returned synchronization policies that are configured in batches. For details, see Table 6.

Table 6 Data structure description of field results

  • id (String): Task ID.
  • status (String): Status. Values:
      • success: The task is successful.
      • failed: The task fails.
  • error_code (String): Error code.
  • error_msg (String): Error message.

Example Request

  • Configuring synchronization task policies in batches, in which conflict_policy is set to ignore, ddl_trans is set to true, and filter_ddl_policy is set to drop_database
    https://{endpoint}/v3/054ba152d480d55b2f5dc0069e7ddef0/jobs/batch-sync-policy
    
    {
        "jobs": [{
            "conflict_policy": "ignore",
            "ddl_trans": true,
            "filter_ddl_policy": "drop_database",
            "index_trans": true,
            "job_id": "19557d51-1ee6-4507-97a6-8f69164jb201"
        }]
    }
  • Configuring MySQL incremental synchronization task policies in batches:
    https://{endpoint}/v3/054ba152d480d55b2f5dc0069e7ddef0/jobs/batch-sync-policy 
      
     { 
       "jobs": [ 
         { 
           "conflict_policy": "ignore", 
           "ddl_trans": true, 
           "filter_ddl_policy": "drop_database", 
           "index_trans": true, 
           "job_id": "19557d51-1ee6-4507-97a6-8f69164jb201",
           "file_and_position": "mysql-bin.000019:197", 
           "gtid_set":"e4979f26-4bc3-11ee-b279-fa163ef21d64:1-23" 
         } 
       ] 
     }

Example Response

Status code: 200

OK

{
  "results" : [ {
    "id" : "19557d51-1ee6-4507-97a6-8f69164jb201",
    "status" : "success"
  } ],
  "count" : 1
}
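A client might check the per-job results in this response as follows (an illustrative sketch using the field names from Table 6):

```python
import json

# The example response body shown above.
response_text = '''
{
  "results" : [ {
    "id" : "19557d51-1ee6-4507-97a6-8f69164jb201",
    "status" : "success"
  } ],
  "count" : 1
}
'''

payload = json.loads(response_text)
# Collect entries whose status is not "success"; failed entries would also
# carry error_code and error_msg per Table 6.
failed = [r for r in payload["results"] if r["status"] != "success"]
print(len(failed))  # 0 when every policy was applied successfully
```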

Status Code

  • 200: OK
  • 400: Bad Request

Error Code

For details, see Error Code.