
Creating and Using a Kafka Cluster for Stream Processing

Scenario

This topic helps you create a real-time analysis cluster from scratch and produce and consume messages in a Kafka topic.

A Kafka cluster provides a message system with high throughput and scalability. It is widely used for log collection and monitoring data aggregation. Kafka is efficient in streaming data ingestion and real-time data processing and storage.

Procedure

Before you start, complete the operations described in Preparations. Then, follow these steps:

  1. Creating an MRS Cluster: Create a real-time analysis cluster of MRS 3.2.0-LTS.1.
  2. Installing the Cluster Client: Download and install the MRS cluster client.
  3. Using the Kafka Client to Create a Topic: Create a topic on the Kafka client.
  4. Managing Messages in a Kafka Topic: Produce and consume messages in the created topic on the Kafka client.

Preparations

Video Tutorial

This video uses an MRS 3.1.0 cluster (with Kerberos authentication disabled) as an example to describe how to use a Kafka client to create, query, and delete a topic. For details about how to create a topic, see Creating a Topic Using the Kafka Client.

The UI may vary depending on the version. The video tutorial is for reference only.

Step 1: Creating an MRS Cluster

  1. Go to the Buy Cluster page.
  2. Search for MapReduce Service in the service list and enter the MRS console.
  3. Click Buy Cluster. The Quick Config tab is displayed.
  4. Configure the cluster as you need. In this example, a pay-per-use MRS 3.2.0-LTS.1 cluster will be created. For more details about how to configure the parameters, see Quickly Creating a Cluster.

    Table 1 MRS cluster parameters

    | Parameter | Description | Example Value |
    |---|---|---|
    | Billing Mode | Billing mode of the cluster you want to create. MRS provides two billing modes: yearly/monthly and pay-per-use. Pay-per-use is a postpaid billing mode in which you pay for what you use; usage is calculated by the second but billed every hour. | Pay-per-use |
    | Region | Region to which the requested MRS resources belong. MRS clusters in different regions cannot communicate with each other over an intranet. For lower network latency and faster resource access, select the nearest region. | CN-Hong Kong |
    | Cluster Name | Name of the MRS cluster you want to create. | mrs_demo |
    | Cluster Type | Range of cluster types that accommodate diverse big data demands. Select a Custom cluster to run the full range of analytics components supported by MRS. | Custom |
    | Version Type | Version type of the MRS cluster. Supported open-source components and their functions vary depending on the cluster version. You are advised to select the latest version. | LTS |
    | Cluster Version | Version of the MRS cluster. | MRS 3.2.0-LTS.1 |
    | Component | Cluster template containing the preset open-source components required for your business. | Real-time analysis cluster |
    | AZ | Availability zone (AZ) associated with the cluster region. | AZ1 |
    | VPC | VPC where you want to create the cluster. You can click View VPC to view the name and ID. If no VPC is available, create one. | vpc-default |
    | Subnet | Subnet to which the cluster belongs. You can access the VPC management console to view the names and IDs of existing subnets in the VPC. If no subnet has been created in the VPC, click Create Subnet to create one. | subnet-default |
    | Cluster Node | Cluster node details. | Default value |
    | Kerberos Authentication | Whether to enable Kerberos authentication. | Disabled |
    | Username | Username for logging in to the cluster management page and the ECS node. | admin/root |
    | Password | User password for logging in to the cluster management page and the ECS node. | - |
    | Confirm Password | Enter the user password again. | - |
    | Enterprise Project | Enterprise project to which the cluster belongs. | default |
    | Secure Communications | Select the check box to agree to the use of access control rules. | Selected |

    Figure 1 Purchasing a real-time analysis cluster

  5. Click Buy Now. A page is displayed showing that the task has been submitted.
  6. Click Back to Cluster List. You can view the status of the newly created cluster on the Active Clusters page.

    Wait for the cluster creation to complete. The initial status of the cluster is Starting. After the cluster is created, the cluster status becomes Running.

Step 2: Installing the Cluster Client

You need to install a cluster client to connect to component services in the cluster and submit jobs.

You can install the client on a node inside or outside the cluster. In this example, the client is installed on the Master1 node.

  1. Click the MRS cluster name in the cluster list to go to the dashboard page.
  2. Click Access Manager next to MRS Manager. In the displayed dialog box, select EIP and configure the EIP information.

    If this is your first access, click Manage EIPs to purchase an EIP on the EIP console. Then go back to the Access MRS Manager dialog box, refresh the EIP list, and select the EIP.

  3. Select the confirmation check box and click OK to log in to the FusionInsight Manager of the cluster.

    The username for logging in to FusionInsight Manager is admin, and the password is the one configured during cluster purchase.

  4. On the displayed Homepage, click the icon next to the cluster name and click Download Client to download the cluster client.

    Figure 2 Downloading the client

    In the Download Cluster Client dialog box, set the following parameters:

    • Set Select Client Type to Complete Client.
    • Retain the default value for Platform Type, for example, x86_64.
    • Retain the default path for Save to Path. The generated file will be saved in the /tmp/FusionInsight-Client directory on the active OMS node of the cluster.
    Figure 3 Downloading the cluster client

    Click OK and wait until the client software is generated.

  5. Go back to the MRS console and click the cluster name in the cluster list. On the Nodes tab, click the node whose name contains master1. In the upper right corner of the ECS details page, click Remote Login to log in to the Master1 node.

    Figure 4 Checking the Master1 node

  6. Log in to the Master1 node as user root. The password is the one you set for the root user during cluster purchase.
  7. Switch to the directory where the client software package is stored and decompress the package.

    cd /tmp/FusionInsight-Client/

    tar -xvf FusionInsight_Cluster_1_Services_Client.tar

    tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

  8. Go to the directory where the installation package is stored and install the client.

    cd FusionInsight_Cluster_1_Services_ClientConfig

    Install the client to a specified directory (an absolute path), for example, /opt/client.

    ./install.sh /opt/client

    ...
    ... component client is installed successfully
    ...

    The client installation directory is automatically created if it does not exist. If the directory already exists, it must be empty. The directory path can contain only uppercase letters, lowercase letters, digits, and underscores (_), and cannot contain spaces.
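
After the client is installed, you can optionally verify the installation by loading the client environment variables and checking that the Kafka command-line tools are available. The following is a minimal sketch, assuming the client was installed to /opt/client as in this example:

    # Load the client environment variables.
    source /opt/client/bigdata_env

    # Confirm that the Kafka command-line tools provided by the client are on the PATH.
    which kafka-topics.sh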

Step 3: Using the Kafka Client to Create a Topic

  1. In the cluster list, click the name of the target cluster. The dashboard tab is displayed.
  2. On the displayed page, click Synchronize next to IAM User Sync. In the displayed dialog box, select All, and click Synchronize. Wait until the synchronization task is complete.
  3. Go to the Components tab, click ZooKeeper, and then click the Instances tab. Check and record the IP address of a ZooKeeper quorumpeer role instance.

    Figure 5 Checking IP addresses of ZooKeeper role instances

  4. Click Service Configuration and check the value of clientPort, which indicates the ZooKeeper client connection port.
  5. Click Service ZooKeeper to return to the component list.

    Figure 6 Back to component list

  6. Click Kafka and then click the Instances tab. Check and record the IP address of a Kafka Broker instance.

    Figure 7 Checking the IP address of a broker instance

  7. Click Service Configuration and check the value of port, which indicates the port for connecting to Kafka Broker.
  8. Log in to the node (Master1) where the MRS client is located as user root.
  9. Switch to the client installation directory and configure environment variables.

    cd /opt/client

    source bigdata_env

  10. Create a Kafka topic.

    kafka-topics.sh --create --zookeeper <IP address of the ZooKeeper role instance>:<ZooKeeper client connection port>/kafka --partitions 2 --replication-factor 2 --topic <topic name>

    The following is an example:

    kafka-topics.sh --create --zookeeper 192.168.21.234:2181/kafka --partitions 2 --replication-factor 2 --topic Topic1

    If the following information is displayed, the topic is created:

    Created topic Topic1.
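
Before producing messages, you can optionally confirm the topic settings. The following is a minimal sketch that lists the topics registered in ZooKeeper and shows the partition and replica layout of Topic1, reusing the ZooKeeper address from the example above and assuming the same client version that accepts the --zookeeper option used in the creation command:

    # List all topics registered under the /kafka znode.
    kafka-topics.sh --list --zookeeper 192.168.21.234:2181/kafka

    # Show the partition and replica assignment of Topic1.
    kafka-topics.sh --describe --zookeeper 192.168.21.234:2181/kafka --topic Topic1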

Step 4: Managing Messages in the Kafka Topic

  1. Log in to the node (Master1) where the MRS client is deployed as user root.
  2. Switch to the client installation directory and configure environment variables.

    cd /opt/client

    source bigdata_env

  3. Generate a message in Topic1.

    kafka-console-producer.sh --broker-list <IP address of the node where the Kafka Broker role is deployed>:<Broker connection port> --topic <topic name> --producer.config <client installation directory>/Kafka/kafka/config/producer.properties

    For the IP address and port number of the node where the Kafka Broker instance is deployed, see steps 6 and 7 in Step 3: Using the Kafka Client to Create a Topic.

    The following is an example:

    kafka-console-producer.sh --broker-list 192.168.21.21:9092 --topic Topic1 --producer.config /opt/client/Kafka/kafka/config/producer.properties

  4. Open a new client connection window.

    cd /opt/client

    source bigdata_env

  5. Consume messages in Topic1.

    kafka-console-consumer.sh --topic <topic name> --bootstrap-server <IP address of the node where the Kafka Broker role is deployed>:<Broker connection port> --consumer.config <client installation directory>/Kafka/kafka/config/consumer.properties

    The following is an example:

    kafka-console-consumer.sh --topic Topic1 --bootstrap-server 192.168.21.21:9092 --consumer.config /opt/client/Kafka/kafka/config/consumer.properties

  6. Enter some content in the producer window opened in 3. The content is sent as messages from the producer. Press Enter after each line to send it as a message.

    The following is an example:

    >aaa
    >bbb
    >ccc

    To stop generating messages, press Ctrl+C to exit.

  7. In the consumer window opened in 5, check that the messages have been consumed.

    aaa
    bbb
    ccc
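
Note that the console consumer reads only new messages by default, so messages produced before the consumer was started are not displayed. The following is a minimal sketch for re-reading Topic1 from the earliest offset and checking consumer group offsets, reusing the broker address from the example above; it assumes the kafka-consumer-groups.sh tool is included in the client, as in standard Kafka distributions:

    # Consume Topic1 from the earliest offset instead of only new messages.
    kafka-console-consumer.sh --topic Topic1 --bootstrap-server 192.168.21.21:9092 --from-beginning --consumer.config /opt/client/Kafka/kafka/config/consumer.properties

    # Optionally, list consumer groups to check committed offsets and lag.
    kafka-consumer-groups.sh --bootstrap-server 192.168.21.21:9092 --list

Press Ctrl+C to stop the consumer when you are done.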

Related Information

For information about Kafka permission management, topic management and message consumption, HA configuration, and data balancing, see Using Kafka.