Updated on 2024-04-07 GMT+08:00

Notes and Constraints

This section describes the notes and constraints on DMS for Kafka.

Instance

Table 1 Instance notes and constraints

Item

Notes and Constraints

Kafka ZooKeeper

Kafka clusters are managed using ZooKeeper. Exposing ZooKeeper could lead to misoperations and service loss. Currently, ZooKeeper is used only within Kafka clusters and does not provide services externally.

Version

  • The service version can be 1.1.0, 2.3.0, or 2.7. Kafka instances cannot be upgraded once they are created.
  • Clients of versions later than 0.10 are supported. Use a client version that is consistent with the service version.

Logging in to the VM where the Kafka brokers reside

Not supported

Storage

  • The storage space can be expanded but cannot be reduced.
  • The storage space can be expanded a maximum of 20 times.

Bandwidth or broker quantity

The bandwidth and broker quantity can be increased but cannot be decreased.

Broker flavor

  • The broker flavor can be increased or decreased.
  • For single-replica topics, messages cannot be created or retrieved while the broker flavor is being changed. Services will be interrupted.
  • If a topic has multiple replicas, scaling the broker flavor up or down does not interrupt services, but may cause partition messages to be out of order. Evaluate this impact and avoid peak hours.
  • Broker rolling restarts cause partition leader changes, which interrupt connections for less than a minute when the network is stable. For multi-replica topics, configure the retry mechanism on the producer client (see the sketch after this list).
  • If the total number of partitions created for an instance is greater than the upper limit allowed by the new flavor, the flavor cannot be decreased.
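As a reference for the retry mechanism mentioned above, the following Java sketch configures producer-side retries so that transient errors such as leader changes during rolling restarts are retried automatically. The connection address, topic name, and retry values are placeholder assumptions; adapt them to your instance.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RetryProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Replace with the connection address of your Kafka instance (assumed value).
        props.put("bootstrap.servers", "192.168.0.10:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Retry on transient errors such as partition leader changes during rolling restarts.
        props.put("retries", "5");              // number of retries (assumed value)
        props.put("retry.backoff.ms", "1000");  // wait between retries (assumed value)
        props.put("acks", "all");               // wait for all in-sync replicas to acknowledge

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "topic-01" is a placeholder topic name.
            producer.send(new ProducerRecord<>("topic-01", "key", "value"));
        }
    }
}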

VPC, subnet, and AZ

After an instance is created, its VPC, subnet, and AZ cannot be modified.

Kerberos authentication

Not supported

Client connections from each IP address

Each Kafka broker allows a maximum of 1000 connections from each IP address by default. Excess connections will be rejected.

Topic

Table 2 Topic notes and constraints

Item

Notes and Constraints

Total number of topic partitions

The total number of topic partitions is related to the instance specifications. For details, see Specifications.

Kafka manages messages by partition. If there are too many partitions, message creation, storage, and retrieval become fragmented, affecting performance and stability. If the total number of partitions of all topics reaches the upper limit, you cannot create more topics.

Number of partitions in a topic

Based on the open-source Kafka constraints, the number of partitions in a topic can be increased but cannot be decreased.
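For reference, partitions can be added programmatically with the Kafka AdminClient, as sketched below; requesting fewer partitions than the topic currently has fails because partition counts cannot be decreased. The connection address and topic name are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Replace with the connection address of your Kafka instance (assumed value).
        props.put("bootstrap.servers", "192.168.0.10:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Increase "topic-01" (placeholder name) to 6 partitions in total.
            // Asking for fewer partitions than the topic already has fails.
            admin.createPartitions(
                Collections.singletonMap("topic-01", NewPartitions.increaseTo(6)))
                .all().get();
        }
    }
}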

Topic quantity

The topic quantity is related to the total number of topic partitions and number of partitions in each topic. For details, see Specifications.

Automatic topic creation

Supported. If automatic topic creation is enabled, the system automatically creates a topic when a message is created in or retrieved from a topic that does not exist. This topic has the following default settings: 3 partitions, 3 replicas, aging time 72 hours, and synchronous replication and flushing disabled.

After you change the value of the log.retention.hours, default.replication.factor, or num.partitions parameter, topics that are automatically created afterward use the new value. For example, if num.partitions is set to 5, an automatically created topic will have the following settings: 5 partitions, 3 replicas, aging time 72 hours, and synchronous replication and flushing disabled.
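For illustration, the following Java sketch produces a message to a topic that does not yet exist; with automatic topic creation enabled, the broker creates the topic on first use with the default settings described above. The connection address and topic name are placeholder assumptions.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AutoCreateTopicSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Replace with the connection address of your Kafka instance (assumed value).
        props.put("bootstrap.servers", "192.168.0.10:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "new-topic" (placeholder) does not exist yet. With automatic topic creation
            // enabled, it is created on first use with the defaults described above.
            producer.send(new ProducerRecord<>("new-topic", "hello"));
            producer.flush();
            // The partition count of the auto-created topic follows num.partitions (3 by default).
            System.out.println("Partitions: " + producer.partitionsFor("new-topic").size());
        }
    }
}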

Synchronous replication

If a topic has only one replica, synchronous replication cannot be enabled.

Replica quantity

Single-replica topics are not recommended. If an instance node is faulty, an internal service error may be reported when you query messages in a topic with only one replica.

Aging time

The value of the log.retention.hours parameter takes effect only if the aging time has not been set for the topic.

For example, if the aging time of Topic01 is set to 60 hours and log.retention.hours is set to 72 hours, the actual aging time of Topic01 is 60 hours.
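For reference, a topic-level aging time corresponds to the Kafka topic configuration retention.ms. The sketch below sets it with the AdminClient, assuming the instance permits altering topic configurations this way and runs Kafka 2.3 or later (incrementalAlterConfigs is not available on 1.1.0 brokers); the connection address and topic name are placeholders.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicAgingTimeSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Replace with the connection address of your Kafka instance (assumed value).
        props.put("bootstrap.servers", "192.168.0.10:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Set a 60-hour aging time (retention.ms) on "Topic01" (placeholder).
            // A topic-level value takes precedence over the instance-level log.retention.hours.
            long sixtyHoursMs = TimeUnit.HOURS.toMillis(60);
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "Topic01");
            AlterConfigOp setRetention = new AlterConfigOp(
                new ConfigEntry("retention.ms", String.valueOf(sixtyHoursMs)),
                AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                Collections.singletonMap(topic, Collections.singleton(setRetention)))
                .all().get();
        }
    }
}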

Batch importing and exporting topics

Batch export is supported, but batch import is not supported.

Topic name

If a topic name starts with a special character, for example, an underscore (_) or a number sign (#), monitoring data cannot be displayed.

Delay queues

Not supported

Broker faults

When some brokers of an instance are faulty, topics cannot be created, modified, or deleted, but can be queried.

Consumer Group

Table 3 Consumer group notes and constraints

Item

Notes and Constraints

Creating consumer groups, consumers, and producers

Consumer groups, consumers, and producers are generated automatically when you use the instance.

Resetting the consumer offset

Messages may be retrieved more than once after the offset is reset.
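For reference, the generic Kafka Java sketch below moves a consumer's offset back with seek(). It is not the console-based reset this table refers to, but it illustrates why records between the new offset and the previously committed offset are delivered again, so downstream processing should tolerate duplicates. The address, group, topic, and offset are placeholder assumptions.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ResetOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Replace with the connection address of your Kafka instance (assumed value).
        props.put("bootstrap.servers", "192.168.0.10:9092");
        props.put("group.id", "group-01");   // placeholder consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("topic-01", 0);   // placeholder topic and partition
            consumer.assign(Collections.singleton(tp));
            // Move the offset back to an earlier position. Messages between this offset and the
            // previously committed offset will be retrieved again, so processing must be idempotent.
            consumer.seek(tp, 100L);
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}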

Consumer group name

If a consumer group name starts with a special character, for example, an underscore (_) or a number sign (#), monitoring data cannot be displayed.

Broker faults

When some instance brokers are faulty, consumer groups cannot be created or deleted, and the consumption progress cannot be reset, but consumer groups can still be queried.

Message

Table 4 Message notes and constraints

Item

Notes and Constraints

Message size

The maximum size of a message is 10 MB. A message larger than 10 MB fails to be created.
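As a reference, the following Java sketch raises the client's max.request.size (1 MB by default) toward the 10 MB service limit and handles the failure returned when a record is still too large. The connection address, topic name, and payload are placeholder assumptions.

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;

public class MessageSizeSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // Replace with the connection address of your Kafka instance (assumed value).
        props.put("bootstrap.servers", "192.168.0.10:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The client default is 1 MB; raise it up to the 10 MB service limit if needed.
        props.put("max.request.size", String.valueOf(10 * 1024 * 1024));

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String payload = "...";   // placeholder message body
            try {
                producer.send(new ProducerRecord<>("topic-01", payload)).get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof RecordTooLargeException) {
                    // Messages larger than the allowed size fail to be created.
                    System.err.println("Message exceeds the maximum allowed size.");
                }
            }
        }
    }
}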

User

Table 5 User notes and constraints

Item

Notes and Constraints

Number of users

A maximum of 20 SASL_SSL users can be created for a Kafka instance.
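For reference, a SASL_SSL user is used in the client's security settings. The following Java sketch shows one possible producer configuration, assuming the PLAIN mechanism; the address, port, username, password, and truststore path are placeholders, and the actual mechanism and certificate details depend on your instance.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SaslSslClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Replace the address, credentials, and truststore path with your own (assumed values).
        props.put("bootstrap.servers", "192.168.0.10:9093");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");   // or SCRAM-SHA-512, depending on the instance
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"sasl-user-01\" password=\"********\";");
        props.put("ssl.truststore.location", "/opt/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "********");
        // Left empty to skip hostname verification when the broker certificate does not
        // match the connection address (assumption; adjust to your setup).
        props.put("ssl.endpoint.identification.algorithm", "");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic-01", "hello over SASL_SSL"));
        }
    }
}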

Broker faults

When some instance brokers are faulty, users cannot be created or deleted, and passwords cannot be reset, but users can still be queried.