Updated on 2024-10-23 GMT+08:00

From Apache Kafka to MRS Kafka

This connection is available only in the database/table sharding scenario.

This connection is available only after you apply for trustlist membership. To use it, contact customer service or technical support.

Database/Table Sharding

  1. Configure source parameters.
    • Configure Kafka.
      • Data Format: format of the source data

        Currently, the JSON, CSV, and TEXT formats are supported.

      • Consumer Group ID: ID of the consumer group of the real-time processing integration job

        Kafka treats the party that consumes messages as a consumer, and multiple consumers form a consumer group, a scalable and fault-tolerant consumption mechanism provided by Kafka. After a migration job consumes messages from a topic in the DMS Kafka cluster, you can view the configured consumer group ID on the consumer group management page of the Kafka cluster and query the group.id consumption attribute on the message query page. You are advised to configure a consumer group.

      • Source Kafka Attributes: You can add Kafka configuration items with the properties. prefix. The job automatically removes the prefix and passes the configuration items to the underlying Kafka client, for example, properties.connections.max.idle.ms=600000.
    • Add a data source.
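The properties.-prefix convention described above can be sketched as follows. This is an illustrative assumption of how such a pass-through could work, not the product's actual implementation; the setting names are examples from the text.

```python
# Sketch: forward "properties."-prefixed job settings to the underlying
# Kafka client after stripping the prefix (illustrative only).

PREFIX = "properties."

def build_client_config(job_settings: dict) -> dict:
    """Keep only "properties."-prefixed entries and strip the prefix,
    so the remainder can be passed verbatim to the Kafka client."""
    return {
        key[len(PREFIX):]: value
        for key, value in job_settings.items()
        if key.startswith(PREFIX)
    }

settings = {
    "properties.connections.max.idle.ms": "600000",  # example from the text
    "properties.max.poll.records": "500",            # hypothetical extra item
    "job.name": "kafka-to-mrs",                      # non-prefixed: managed by the job itself
}
print(build_client_config(settings))
# {'connections.max.idle.ms': '600000', 'max.poll.records': '500'}
```

Non-prefixed settings are left untouched, which matches the rule that only items carrying the properties. prefix reach the Kafka client.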
  2. Configure destination parameters.
    • Set the rule for mapping source tables and topics.
      • Destination Topic Name Rule: rule for mapping source topics to destination topic names. You can specify a single fixed topic name or use a built-in variable.

        The built-in variable #{source_topic_name} can be used. It represents the source topic name.

      • Kafka Partition Synchronization Policy: Select the Kafka partition policy.
        • To the partition corresponding to the source partition: Source messages are delivered to the destination partitions with the same partition numbers. This policy preserves the message order.
        • To different partitions in polling mode: The Kafka sticky partitioning policy is used to evenly deliver messages to all destination partitions. This policy does not preserve the message order.
        • To partition 0: All messages are delivered to partition 0 of the destination topic.
      • Partitions of a New Topic: Set the number of partitions for a new topic. The default value is 3.
      • Destination Kafka Attributes: You can add Kafka configuration items with the properties. prefix. The job automatically removes the prefix and passes the configuration items to the underlying Kafka client, for example, properties.connections.max.idle.ms=600000. After the job is submitted, the built-in parameter dataFormat is added at the destination.
    • Mapping Between Source and Destination Tables: You can change the names of mapped destination topics as needed. You can map one source topic to one destination topic or map multiple source topics to one destination topic.
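The topic name rule and the three partition synchronization policies above can be sketched as follows. This is a simplified assumption for illustration: the policy names are hypothetical, and the polling mode is approximated here as plain round-robin, whereas the product actually uses the Kafka sticky partitioning policy.

```python
# Sketch: resolve the destination topic name rule and pick a destination
# partition under each synchronization policy (illustrative only).

def resolve_destination_topic(rule: str, source_topic: str) -> str:
    """Expand the built-in variable #{source_topic_name} in the rule."""
    return rule.replace("#{source_topic_name}", source_topic)

def choose_partition(policy: str, source_partition: int,
                     message_index: int, num_partitions: int) -> int:
    """Pick the destination partition for one message under a given policy."""
    if policy == "same_as_source":
        return source_partition              # preserves message order
    if policy == "polling":
        return message_index % num_partitions  # even spread; order not preserved
    return 0                                 # "To partition 0"

print(resolve_destination_topic("mrs_#{source_topic_name}", "orders"))  # mrs_orders
print(resolve_destination_topic("all_events", "orders"))                # all_events (single fixed topic)
print(choose_partition("same_as_source", 2, 0, 3))                      # 2
print(choose_partition("partition_0", 2, 7, 3))                         # 0
```

A rule without the built-in variable maps every source topic to the same destination topic, which is how multiple source topics can be merged into one destination topic.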