
Cluster Kafka Instances

Updated on 2024-11-11 GMT+08:00

Instance Specifications

A cluster Kafka instance has three or more brokers, and is compatible with open-source Kafka 1.1.0, 2.7, and 3.x.

NOTE:

For Kafka instances, transactions per second (TPS) is the maximum number of messages that can be written per second. The TPS values in the following table assume a message size of 1 KB, private access in plaintext, and ultra-high I/O disks. For more information about TPS performance, see Kafka Instance TPS.

Table 1 Cluster Kafka instance specifications

| Flavor | Brokers | Maximum TPS per Broker | Maximum Partitions per Broker | Recommended Consumer Groups per Broker | Maximum Client Connections per Broker | Storage Space (GB) | Traffic per Broker (MB/s) |
|---|---|---|---|---|---|---|---|
| kafka.2u4g.cluster.small | 3–30 | 20,000 | 100 | 15 | 2000 | 300–300,000 | 40 |
| kafka.2u4g.cluster | 3–30 | 30,000 | 250 | 20 | 2000 | 300–300,000 | 100 |
| kafka.4u8g.cluster | 3–30 | 100,000 | 500 | 100 | 4000 | 300–600,000 | 200 |
| kafka.8u16g.cluster | 3–50 | 150,000 | 1000 | 150 | 4000 | 300–1,500,000 | 375 |
| kafka.12u24g.cluster | 3–50 | 200,000 | 1500 | 200 | 4000 | 300–1,500,000 | 625 |
| kafka.16u32g.cluster | 3–50 | 250,000 | 2000 | 200 | 4000 | 300–1,500,000 | 750 |

Instance Specifications and Network Bandwidth

The network bandwidth of a Kafka instance consists of the following:

  1. Network bandwidth used by the instance brokers
  2. Bandwidth of the disk used by the instance brokers. For details, see Disk Types and Performance.

Note:

  • By default, Kafka tests are performed in the tail read scenario (that is, only the latest production data is consumed) instead of the cold read scenario (that is, historical data is consumed from the beginning).
  • The bandwidth of an instance with an old flavor (such as 100 MB/s) is the total network bandwidth of all of the instance's brokers.

Traffic calculation of instances with new flavors (such as kafka.2u4g.cluster) is described as follows:

  • The read/write ratio is 1:1.
  • The default number of topic replicas is 3.
  • Total network traffic = Traffic per broker x Broker quantity
  • Total instance traffic = Service traffic + Data replication traffic between brokers

Assume that the current flavor is kafka.2u4g.cluster, the traffic per broker is 100 MB/s, and the number of brokers is 3. What are the total network traffic, maximum read traffic, and maximum write traffic of the instance?

  1. Total network traffic = Traffic per broker x Broker quantity = 100 MB/s x 3 = 300 MB/s
  2. Maximum read traffic = Total instance network traffic/Default number of replicas/2 = 300 MB/s/3/2 = 50 MB/s
  3. Maximum write traffic = Total instance network traffic/Default number of replicas/2 = 300 MB/s/3/2 = 50 MB/s
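The worked example above can be sketched in Python. This is a minimal illustration of the documented formulas; the function name and signature are ours, not part of any Huawei Cloud SDK:

```python
def kafka_traffic(traffic_per_broker_mb, brokers, replicas=3):
    """Compute total, max read, and max write traffic (MB/s) for a
    new-flavor cluster Kafka instance, per the documented formulas.

    Assumes a 1:1 read/write ratio and 3 topic replicas by default.
    """
    # Total network traffic = Traffic per broker x Broker quantity
    total = traffic_per_broker_mb * brokers
    # Max read/write traffic = Total traffic / replicas / 2
    max_read = total / replicas / 2
    max_write = total / replicas / 2
    return total, max_read, max_write

# kafka.2u4g.cluster: 100 MB/s per broker, 3 brokers
print(kafka_traffic(100, 3))  # (300, 50.0, 50.0)
```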

Mapping Between Old and New Flavors

Table 2 compares the old and new Kafka instance flavors.

Table 2 Mapping between old and new Kafka instance flavors

| Old Flavor | Old Total Instance Network Traffic | New Flavor | New Total Instance Network Traffic |
|---|---|---|---|
| 100 MB/s | 100 MB/s | kafka.2u4g.cluster.small * 3 | 120 MB/s |
| 300 MB/s | 300 MB/s | kafka.2u4g.cluster * 3 | 300 MB/s |
| 600 MB/s | 600 MB/s | kafka.4u8g.cluster * 3 | 600 MB/s |
| 1200 MB/s | 1200 MB/s | kafka.4u8g.cluster * 6 | 1250 MB/s |

Instances with new flavors have the following features:

  • Better performance and cost-effectiveness: They use exclusive resources (except for kafka.2u4g.cluster.small). By contrast, old flavors use non-exclusive resources, so resource conflicts may occur under heavy load.
  • Latest functions, such as reassigning partitions, changing the SSL setting, and viewing rebalancing logs.
  • Flexible flavor changes: For example, you can increase or decrease the broker flavor.
  • Flexible disk capacity: The disk capacity depends only on the broker quantity, not on the flavor.
  • More specification options: A wider range of broker flavor and quantity combinations (with total traffic over 10,000 MB/s) is available.
  • More disk type options: General Purpose SSD and Extreme SSD are now available, in addition to the original disk types.

Flavor Selection

  • kafka.2u4g.cluster.small with 3 brokers

    Recommended for up to 6000 client connections, 45 consumer groups, and 60,000 TPS

  • kafka.2u4g.cluster with 3 brokers

    Recommended for up to 6000 client connections, 60 consumer groups, and 90,000 TPS

  • kafka.4u8g.cluster with 3 brokers

    Recommended for up to 12,000 client connections, 300 consumer groups, and 300,000 TPS

  • kafka.8u16g.cluster with 3 brokers

    Recommended for up to 12,000 client connections, 450 consumer groups, and 450,000 TPS

  • kafka.12u24g.cluster with 3 brokers

    Recommended for up to 12,000 client connections, 600 consumer groups, and 600,000 TPS

  • kafka.16u32g.cluster with 3 brokers

    Recommended for up to 12,000 client connections, 600 consumer groups, and 750,000 TPS
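The recommendations above can be turned into a simple selection helper. This is an illustrative sketch built only from the 3-broker limits listed in this section; the function and its table are ours, not a Huawei Cloud API, and real sizing should also account for traffic and partitions:

```python
def pick_flavor(connections, consumer_groups, tps):
    """Return the smallest 3-broker flavor whose recommended limits
    cover the given client connections, consumer groups, and TPS,
    or None if no flavor fits."""
    # (flavor, max connections, max consumer groups, max TPS) with 3 brokers
    options = [
        ("kafka.2u4g.cluster.small", 6000, 45, 60000),
        ("kafka.2u4g.cluster", 6000, 60, 90000),
        ("kafka.4u8g.cluster", 12000, 300, 300000),
        ("kafka.8u16g.cluster", 12000, 450, 450000),
        ("kafka.12u24g.cluster", 12000, 600, 600000),
        ("kafka.16u32g.cluster", 12000, 600, 750000),
    ]
    for name, max_conn, max_groups, max_tps in options:
        if connections <= max_conn and consumer_groups <= max_groups and tps <= max_tps:
            return name
    return None

print(pick_flavor(10000, 200, 250000))  # kafka.4u8g.cluster
```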

Storage Space Selection

Kafka instances can store messages in multiple replicas. The storage space is consumed by message replicas, logs, and metadata. When creating an instance, specify its storage space based on the expected service message size, the number of replicas, and reserved disk space. Each Kafka broker reserves 33 GB disk space for storing logs and metadata.

For example, if the expected service message size is 100 GB, the number of replicas is 2, and the number of brokers is 3, the disk size should be at least 299 GB (100 GB x 2 + 33 GB x 3).
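The sizing rule above (message size x replicas + 33 GB reserved per broker) can be sketched as a small Python helper; the function name is illustrative, not part of any Huawei Cloud SDK:

```python
def min_disk_size_gb(message_size_gb, replicas, brokers, reserved_per_broker_gb=33):
    """Minimum storage space (GB) for a Kafka instance:
    service messages x replicas, plus 33 GB reserved per broker
    for logs and metadata."""
    return message_size_gb * replicas + reserved_per_broker_gb * brokers

# 100 GB of messages, 2 replicas, 3 brokers
print(min_disk_size_gb(100, 2, 3))  # 299
```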

The storage space can be expanded as your service grows.

Topic Quantity

There are limits on the number of topics and on the aggregate number of partitions across all topics. When the partition limit is reached, you can no longer create topics.

The number of topics is related to the maximum number of partitions allowed (see Figure 1) and the specified number of partitions in each topic (see Table 1).

Figure 1 Setting the number of partitions

The maximum number of partitions allowed for an instance with kafka.2u4g.cluster and 3 brokers is 750.

  • If the number of partitions of each topic in the instance is 3, the maximum number of topics is 750/3 = 250.
  • If the number of partitions of each topic in the instance is 1, the maximum number of topics is 750/1 = 750.
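The topic-count arithmetic above is a simple integer division; as a sketch (the function name is ours, not a platform API):

```python
def max_topics(max_partitions, partitions_per_topic):
    """Maximum number of topics given an instance-wide partition limit
    and a uniform partition count per topic."""
    return max_partitions // partitions_per_topic

# kafka.2u4g.cluster with 3 brokers allows 750 partitions
print(max_topics(750, 3))  # 250
print(max_topics(750, 1))  # 750
```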
