Updated on 2024-06-04 GMT+08:00

Buying a Kafka Instance

Scenario

Your Kafka instance will be exclusive and deployed in physical isolation. You can customize the computing capabilities and storage space of an instance based on service requirements.

Prerequisites

Before creating a Kafka instance, prepare the resources listed in Table 1.

Table 1 Kafka resources

• VPC and subnet

  Requirement: Configure the VPC and subnet for Kafka instances as required. You can use the current account's existing VPC and subnet, use shared ones, or create new ones.

  VPC owners can share the subnets in a VPC with one or more accounts through Resource Access Manager (RAM). VPC sharing makes it easy to configure, operate, and manage resources across multiple accounts at low cost. For more information about VPC and subnet sharing, see VPC Sharing.

  Note: VPCs must be created in the same region as the Kafka instance.

  Operations: For details on how to create a VPC and a subnet, see Creating a VPC. If you need to create and use a new subnet in an existing VPC, see Creating a Subnet for the VPC.

• Security group

  Requirement: Different Kafka instances can use the same or different security groups.

  Operations: For details on how to create a security group, see Creating a Security Group. For details on how to add rules to a security group, see Adding a Security Group Rule.

• EIP

  Requirement: Note the following when creating EIPs:

  • The EIPs must be created in the same region as the Kafka instance.
  • The number of EIPs must equal the number of brokers in the Kafka instance.
  • The Kafka console cannot identify IPv6 EIPs.

  Operations: For details about how to create an EIP, see Assigning an EIP.

Buying a Kafka Instance

  1. Go to the Buy Instance page.
  2. Specify Billing Mode, Region, Project, and AZ.
  3. Enter an instance name and select an enterprise project.
  4. Configure the following instance parameters:

    Specifications: Select Cluster, Custom, or Single-node.

    • Cluster: You specify the version, broker flavor and quantity, disk type, and storage space based on site requirements. Cluster instances support Kafka versions 1.1.0, 2.7, and 3.x.
    • Custom: The system calculates the broker quantity and storage space for different flavors based on your specified parameters (creation traffic peak, retrieval traffic, number of replicas per topic, total number of partitions, and size of messages created during the retention period). You can then select one of the recommended flavors.
    • Single-node: A v2.7 single-broker instance will be created. For more information about single-node instances, see Comparing Single-node and Cluster Kafka Instances.

    If you select Cluster, specify the version, broker flavor and quantity, disk type, and storage space to be supported by the Kafka instance based on site requirements.

    1. Version: Kafka v1.1.0, v2.7, and v3.x are supported. The version cannot be changed once the instance is created.
    2. CPU Architecture: The x86 architecture is supported.
    3. Broker Flavor: Select broker specifications that best fit your needs.

      Maximum number of partitions per broker x Number of brokers = Maximum number of partitions of the instance. If the total number of partitions of all topics exceeds this limit, topic creation fails. For example, an instance with three brokers whose flavor allows up to 100 partitions per broker can hold at most 300 partitions across all topics.

    4. For Brokers, specify the broker quantity.
    5. Storage space per broker: Disk type and total disk space for storing the instance data. The disk type cannot be changed once the instance is created.

      The storage space is consumed by message replicas, logs, and metadata. Specify the storage space based on the expected message volume, the number of replicas, and the reserved disk space. Each Kafka broker reserves 33 GB of disk space for storing logs and metadata.

      Disks are formatted when an instance is created. As a result, the actual available disk space is 93% to 95% of the total disk space.

    6. Capacity Threshold Policy: policy used when the disk usage reaches the threshold. The capacity threshold is 95%.
      • Automatically delete: Messages can be created and retrieved, but 10% of the earliest messages will be deleted to ensure sufficient disk space. This policy is suitable for scenarios where no service interruption can be tolerated. Data may be lost.
      • Stop production: New messages cannot be created, but existing messages can still be retrieved. This policy is suitable for scenarios where no data loss can be tolerated.
    Figure 1 Default specifications

    If you select Custom, the system calculates the number of brokers and broker storage space for different flavors based on your specified peak creation traffic, retrieval traffic, number of replicas per topic, total number of partitions, and size of messages created during the retention period. You can select one of the recommended flavors as required. This option is not available for v3.x.

    Figure 2 Specification calculation

    If you select Single-node, a v2.7 instance with one broker will be created.

    1. Version: Kafka version, which can only be 2.7.
    2. CPU Architecture: The x86 architecture is supported.
    3. Broker Flavor: Select broker specifications that best fit your needs.
    4. Brokers: The instance can have only one broker.
    5. Storage space per broker: Select the desired disk type for storing Kafka data. The disk space is 100 GB and cannot be changed.

      The disk type cannot be changed once the instance is created.

      Disks are formatted when an instance is created. As a result, the actual available disk space is 93% to 95% of the total disk space.

    6. Capacity Threshold Policy: policy used when the disk usage reaches the threshold. The capacity threshold is 95%.
      • Automatically delete: Messages can be created and retrieved, but 10% of the earliest messages will be deleted to ensure sufficient disk space. This policy is suitable for scenarios where no service interruption can be tolerated. Data may be lost.
      • Stop production: New messages cannot be created, but existing messages can still be retrieved. This policy is suitable for scenarios where no data loss can be tolerated.

  5. Configure the instance network parameters.

    • Select a VPC and a subnet.

      A VPC provides an isolated virtual network for your Kafka instances. You can configure and manage the network as required.

      After the Kafka instance is created, its VPC and subnet cannot be changed.

    • For Private IP Addresses, select Auto or Manual.
      • Auto: The system automatically assigns an IP address from the subnet.
      • Manual: Select IP addresses from the drop-down list.

      In regions other than the following, Private IP Addresses has been moved to the Private Network Access area. For details, see Step 6.

      • CN North-Beijing1
      • ME-Riyadh
      • LA-Sao Paulo1
      • LA-Santiago
    • Select a security group.

      A security group is a set of rules for accessing a Kafka instance. You can click Manage Security Group to view or create security groups on the network console.

  6. Configure the instance access mode.

    Table 2 Instance access modes

    • Private Network Access

      • Plaintext Access: Clients connect to the Kafka instance without SASL authentication.

        Once enabled, private network access cannot be disabled. Enable plaintext access, ciphertext access, or both.

      • Ciphertext Access: Clients connect to the Kafka instance with SASL authentication.

        Once enabled, private network access cannot be disabled. Enable plaintext access, ciphertext access, or both. To disable ciphertext access, contact customer service.

        If you enable Ciphertext Access, specify a security protocol, the SASL/PLAIN setting, a username, and a password.

        After an instance is created, disabling and re-enabling Ciphertext Access does not affect users.

      • Private IP Addresses: Select Auto or Manual.

        • Auto: The system automatically assigns IP addresses from the subnet.
        • Manual: Select IP addresses from the drop-down list. If the number of selected IP addresses is less than the number of brokers, the remaining IP addresses are automatically assigned.

    • Public Network Access

      • Plaintext Access: Clients connect to the Kafka instance without SASL authentication.

        Enable or disable plaintext access and configure addresses for public network access.

      • Ciphertext Access: Clients connect to the Kafka instance with SASL authentication.

        Enable or disable ciphertext access and configure addresses for public network access.

        If you enable Ciphertext Access, specify a security protocol, the SASL/PLAIN setting, a username, and a password.

        After an instance is created, disabling and re-enabling Ciphertext Access does not affect users.

      • Public IP Addresses: Select the number of public IP addresses as required.

        If EIPs are insufficient, click Create Elastic IP to create EIPs. Then return to the Kafka console and click the refresh icon next to Public IP Address to refresh the public IP address list.

        Kafka instances support only IPv4 EIPs.

    Ciphertext access is unavailable for single-node instances.

    The security protocol, SASL/PLAIN mechanism, username, and password are described as follows.

    Table 3 Ciphertext access parameters

    • Security Protocol: SASL_SSL or SASL_PLAINTEXT.

      • SASL_SSL: SASL is used for authentication, and data is encrypted with SSL certificates for high-security transmission. This protocol supports the SCRAM-SHA-512 and PLAIN mechanisms.
      • SASL_PLAINTEXT: SASL is used for authentication, and data is transmitted in plaintext for high performance. This protocol supports the SCRAM-SHA-512 and PLAIN mechanisms. SCRAM-SHA-512 authentication is recommended for plaintext transmission.

      What are the SCRAM-SHA-512 and PLAIN mechanisms?

      • SCRAM-SHA-512: uses a hash algorithm to generate credentials from usernames and passwords for identity verification. SCRAM-SHA-512 is more secure than PLAIN.
      • PLAIN: a simple username and password verification mechanism.

    • SASL/PLAIN

      • If SASL/PLAIN is disabled, the SCRAM-SHA-512 mechanism is used for username and password authentication.
      • If SASL/PLAIN is enabled, both the SCRAM-SHA-512 and PLAIN mechanisms are supported, and you can select either as required.

      The SASL/PLAIN setting cannot be changed once ciphertext access is enabled.

    • Username and Password: The username and password used by the client to connect to the Kafka instance.

      The username cannot be changed once ciphertext access is enabled.

    Instance access mode parameters are not available in the following regions.

    • CN North-Beijing1
    • ME-Riyadh
    • LA-Sao Paulo1
    • LA-Santiago
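
    For reference, the ciphertext access settings above map onto standard Apache Kafka client properties. The following is a minimal Java producer sketch assuming SASL_SSL with the SCRAM-SHA-512 mechanism; the broker addresses, port, username, password, truststore path, and topic name are placeholders that you must replace with your instance's values.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SaslSslProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder addresses: use your instance's ciphertext access addresses.
            props.put("bootstrap.servers", "broker1:9093,broker2:9093,broker3:9093");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // Security protocol selected when enabling ciphertext access.
            props.put("security.protocol", "SASL_SSL");
            // SCRAM-SHA-512 applies when SASL/PLAIN is disabled on the instance.
            props.put("sasl.mechanism", "SCRAM-SHA-512");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.scram.ScramLoginModule required "
                    + "username=\"your-username\" password=\"your-password\";");
            // Truststore containing the instance's SSL certificate; path is a placeholder.
            props.put("ssl.truststore.location", "/opt/kafka/client.truststore.jks");
            props.put("ssl.truststore.password", "truststore-password");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "hello"));
            }
        }
    }

    If the PLAIN mechanism is selected instead, set sasl.mechanism to PLAIN and reference org.apache.kafka.common.security.plain.PlainLoginModule in sasl.jaas.config.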

  7. Configure Kafka SASL_SSL.

    This parameter indicates whether to enable SASL authentication when a client connects to the instance. If you enable Kafka SASL_SSL, data will be encrypted for transmission to enhance security.

    This setting is enabled by default. It cannot be changed after the instance is created. If you want to use a different setting, you must create a new instance.

    After Kafka SASL_SSL is enabled, you can determine whether to enable SASL/PLAIN. If SASL/PLAIN is disabled, the SCRAM-SHA-512 mechanism is used to transmit data. If SASL/PLAIN is enabled, both the SCRAM-SHA-512 and PLAIN mechanisms are supported. You can select either of them as required. The SASL/PLAIN setting cannot be changed once the instance is created.

    What are SCRAM-SHA-512 and PLAIN mechanisms?

    • SCRAM-SHA-512: uses the hash algorithm to generate credentials for usernames and passwords to verify identities. SCRAM-SHA-512 is more secure than PLAIN.
    • PLAIN: a simple username and password verification mechanism.

    If you enable Kafka SASL_SSL, you must also set the username and password for accessing the instance.

    • In regions other than the following, Kafka SASL_SSL has been moved to the Private Network Access and Public Network Access areas. For details, see Step 6.
      • CN North-Beijing1
      • ME-Riyadh
      • LA-Sao Paulo1
      • LA-Santiago
    • Single-node instances do not have this parameter.

  8. Specify the required duration.

    This parameter is displayed only if the billing mode is yearly/monthly.

  9. Click Advanced Settings to configure more parameters.

    1. Configure public access.

      Public access is disabled by default. You can enable or disable it as required.

      After public access is enabled, configure an IPv4 EIP for each broker.

      After enabling Public Access, you can enable or disable Intra-VPC Plaintext Access. If it is enabled, data is transmitted in plaintext when you connect to the instance through a private network, regardless of whether SASL_SSL is enabled. This setting cannot be changed after the instance is created, so exercise caution. If you want a different setting, you must create a new instance.

      Public Network Access is no longer under Advanced Settings in regions other than the following. For details, see Step 6.

      • CN North-Beijing1
      • ME-Riyadh
      • LA-Sao Paulo1
      • LA-Santiago
    2. Configure Smart Connect.

      Smart Connect is used for data synchronization between heterogeneous systems. You can configure Smart Connect tasks to synchronize data between Kafka and another cloud service or between two Kafka instances.

      Single-node instances do not have this parameter.

    3. Configure Automatic Topic Creation.

      This setting is disabled by default. You can enable or disable it as required.

      If this option is enabled, a topic is automatically created when a message is produced to or consumed from a topic that does not exist. By default, the topic has the following parameters:

      • Partitions: 3
      • Replicas: 3
      • Aging Time: 72 hours
      • Synchronous Replication and Synchronous Flushing disabled
      • Message Timestamp: CreateTime
      • Max.Message Size (bytes): 10,485,760

      If you change the value of the log.retention.hours, default.replication.factor, or num.partitions parameter, the new value is applied to topics that are automatically created later.

      For example, if num.partitions is changed to 5, an automatically created topic will have the following parameters:

      • Partitions: 5
      • Replicas: 3
      • Aging Time: 72 hours
      • Synchronous Replication and Synchronous Flushing disabled
      • Message Timestamp: CreateTime
      • Max.Message Size (bytes): 10,485,760
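
      To illustrate the behavior described above, the following is a minimal Java sketch: producing to a topic that does not yet exist triggers automatic creation, and the admin client then reports the default partition count. It assumes plaintext private access; the broker addresses and topic name are placeholders.

      import java.util.Collections;
      import java.util.Properties;

      import org.apache.kafka.clients.admin.AdminClient;
      import org.apache.kafka.clients.admin.TopicDescription;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.common.serialization.StringSerializer;

      public class AutoTopicCreationSketch {
          public static void main(String[] args) throws Exception {
              String topic = "not-yet-created-topic";

              Properties props = new Properties();
              // Placeholder plaintext access addresses; add SASL and SSL settings for ciphertext access.
              props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
              props.put("key.serializer", StringSerializer.class.getName());
              props.put("value.serializer", StringSerializer.class.getName());

              // Producing to a nonexistent topic triggers automatic creation
              // (3 partitions and 3 replicas by default, as listed above).
              try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                  producer.send(new ProducerRecord<>(topic, "hello")).get();
              }

              // Check the partition count of the automatically created topic.
              try (AdminClient admin = AdminClient.create(props)) {
                  TopicDescription description =
                          admin.describeTopics(Collections.singleton(topic)).all().get().get(topic);
                  System.out.println("Partitions: " + description.partitions().size());
              }
          }
      }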
    4. Specify Tags.

      Tags are used to identify cloud resources. When you have multiple cloud resources of the same type, you can use tags to classify them based on usage, owner, or environment.

      If your organization has configured tag policies for DMS for Kafka, add tags to Kafka instances based on the policies. If a tag does not comply with the policies, Kafka instance creation may fail. Contact your organization administrator to learn more about tag policies.

      • If you have predefined tags, select a predefined pair of tag key and value. You can click View predefined tags to go to the Tag Management Service (TMS) console and view or create tags.
      • You can also create new tags by specifying Tag key and Tag value.

      Up to 20 tags can be added to each Kafka instance. For details about the requirements on tags, see Configuring Kafka Instance Tags.

    5. Enter a description of the instance.

  10. Click Buy.
  11. Confirm the instance information, and read and agree to the HUAWEI CLOUD Customer Agreement. If you have selected the yearly/monthly billing mode, click Pay Now and make the payment as prompted. If you have selected the pay-per-use mode, click Submit.
  12. Return to the instance list and check whether the Kafka instance has been created.

    It takes 3 to 15 minutes to create an instance. During this period, the instance status is Creating.

    • If the instance is created successfully, its status changes to Running.
    • If the instance is in the Creation failed state, delete it by referring to Deleting Kafka Instances. Then create a new one. If the instance creation fails again, contact customer service.

      Instances that fail to be created do not occupy other resources.