
Using Kafka to Produce and Consume Data

Scenario

You can use the MRS cluster client to create, query, and delete Kafka topics. You can also log in to the Kafka UI to view the consumption information of the current cluster.

Prerequisites

  • The client has been installed in a directory, for example, /opt/client. The client directory in the following operations is only an example. Change it based on site requirements.
  • If you plan to use KafkaUI, ensure that a user with the permission to access the KafkaUI page has been created. To perform operations on the KafkaUI page, for example, creating topics, the user must also be granted the related permissions. For details, see Kafka User Permissions.

    When you access Manager and KafkaUI for the first time, you must add a trust exception for the site in the browser before you can continue to KafkaUI.

Using the Kafka Client to Produce and Consume Data

  1. Install the client. For details, see Installing a Client.
  2. Access the ZooKeeper instance page.

    Log in to FusionInsight Manager. For details, see Accessing FusionInsight Manager. Choose Cluster > Services > ZooKeeper > Instance.

  3. View the IP addresses of the ZooKeeper role instances.

    Record the IP address of any ZooKeeper instance.

  4. Log in to the node where the client is installed.
  5. Run the following command to switch to the Kafka script directory in the client installation directory, for example, /opt/client/Kafka/kafka/bin:

    cd /opt/client/Kafka/kafka/bin

  6. Run the following command to configure environment variables:

    source /opt/client/bigdata_env

  7. If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the current user. If Kerberos authentication is disabled for the current cluster, skip this step.

    kinit Kafka user
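
    For example, assuming a hypothetical Kafka user named kafkauser has been created and granted the required Kafka permissions:

    kinit kafkauser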

  8. Log in to FusionInsight Manager, choose Cluster > Services > ZooKeeper, and click the Configurations tab and then All Configurations. On the displayed page, search for the clientPort parameter and record its value.
  9. Create a topic.

    sh kafka-topics.sh --create --topic Topic name --partitions Number of partitions occupied by the topic --replication-factor Number of replicas of the topic --zookeeper IP address of the node where the ZooKeeper instance resides:clientPort/kafka

    Example: sh kafka-topics.sh --create --topic TopicTest --partitions 3 --replication-factor 3 --zookeeper 10.10.10.100:2181/kafka

    You can search for clientPort in all ZooKeeper configuration parameters to obtain the value of clientPort. The default ports are as follows:
    • The default open-source port number is 2181.
    • The default customized port number is 24002.

    Open-source or customized ports: When you create an LTS cluster, you can set Component Port to Open source or Custom, which determines whether the components use the default open-source ports or the customized ports.
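
    For example, if the cluster uses the customized port, the command in 9 would reference port 24002 instead (the IP address is still only an example):

    sh kafka-topics.sh --create --topic TopicTest --partitions 3 --replication-factor 3 --zookeeper 10.10.10.100:24002/kafka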

  10. Run the following command to view the topic information in the cluster:

    sh kafka-topics.sh --list --zookeeper IP address of the node where the ZooKeeper instance resides:clientPort/kafka

    Example: sh kafka-topics.sh --list --zookeeper 10.10.10.100:2181/kafka
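
    If you also want to check the partition and replica assignment of a specific topic, the standard --describe option of kafka-topics.sh can be used. The following is a minimal sketch that reuses the example topic and ZooKeeper address from the preceding steps:

    sh kafka-topics.sh --describe --topic TopicTest --zookeeper 10.10.10.100:2181/kafka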

  11. Delete the topic created in 9.

    sh kafka-topics.sh --delete --topic Topic name --zookeeper IP address of the node where the ZooKeeper instance resides:clientPort/kafka

    Example: sh kafka-topics.sh --delete --topic TopicTest --zookeeper 10.10.10.100:2181/kafka
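
    The same directory also contains the standard Kafka console producer and consumer tools, which can be used to produce data to and consume data from an existing topic (for example, the topic created in 9, before it is deleted in 11). The following is a minimal sketch only: it assumes a hypothetical Broker instance at 10.10.10.101 listening on the default open-source PLAINTEXT port 9092. Replace the address and port with the actual Broker values of your cluster; a security-enabled cluster additionally requires the SASL port and a client configuration file with the matching security settings.

    Produce test messages to the topic (type messages and press Enter; press Ctrl+C to exit):

    sh kafka-console-producer.sh --broker-list 10.10.10.101:9092 --topic TopicTest

    Consume the messages from the beginning of the topic:

    sh kafka-console-consumer.sh --bootstrap-server 10.10.10.101:9092 --topic TopicTest --from-beginning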

Using KafkaUI to View Consumption Information

  1. Access the Kafka UI.

    1. Log in to FusionInsight Manager as a user who has the permission to access the Kafka UI and choose Cluster > Services > Kafka.

      To perform operations on the page, for example, creating a topic, the user must be granted the related permissions. For details, see Kafka User Permissions.

    2. On the right of KafkaManager WebUI, click the URL to access Kafka UI.

  2. In the Cluster Summary area, view the number of existing topics, brokers, and consumer groups in the current cluster.

  3. You can click the number under Brokers, Topics, or Consumer Group to go to the corresponding page, where you can view details and perform operations.
  4. In the Cluster Action area, you can create topics and migrate partitions. For details, see sections Creating a Kafka Topic and Migrating Data Between Kafka Nodes.
  5. In the Topic Rank column, view the top 10 topics in the current cluster, ranked by number of topic logs, data volume, incoming data volume, and outgoing data volume.

  6. Click a topic name in the TopicName column to go to the topic details page. For details about operations on the page, see Viewing Kafka Data Production and Consumption Details.