

Creating and Using a Kafka Cluster for Stream Processing

Updated on 2025-01-23 GMT+08:00

Scenario

This topic helps you create a real-time analysis MRS cluster from scratch and produce and consume messages in a Kafka topic.

A Kafka cluster provides a high-throughput, scalable message system and is widely used for log collection and monitoring data aggregation. Kafka is efficient at streaming data ingestion and at real-time data processing and storage.

Procedure

Before you start, complete operations described in Preparations. Then, follow these steps:

  1. Creating an MRS Cluster: Create a real-time analysis cluster of MRS 3.2.0-LTS.1.
  2. Installing the Cluster Client: Download and install the MRS cluster client.
  3. Using the Kafka Client to Create a Topic: Create a topic on the Kafka client.
  4. Managing Messages in a Kafka Topic: Consume messages in a created topic on the Kafka client.
  5. Releasing Resources: To avoid additional expenditures, release resources promptly if you no longer need them.

Preparations

  • You have prepared an IAM user who has the permission to create MRS clusters. For details, see Creating an MRS User.

Video Tutorial

This video uses an MRS 3.1.0 cluster (with Kerberos authentication disabled) as an example to describe how to use a Kafka client to create, query, and delete a topic. For details about how to create a topic, see Creating a Topic Using the Kafka Client.

NOTE:

The UI may vary depending on the version. The video tutorial is for reference only.

Step 1: Creating an MRS Cluster

  1. Go to the Buy Cluster page.
  2. Search for MapReduce Service in the service list and enter the MRS console.
  3. Click Buy Cluster. The Quick Config tab is displayed.
  4. Configure the cluster as you need. In this example, a pay-per-use MRS 3.2.0-LTS.1 cluster will be created. For more details about how to configure the parameters, see Quickly Creating a Cluster.

    Table 1 MRS cluster parameters (Parameter: Example Value. Description)

    • Billing Mode: Pay-per-use. Billing mode of the cluster you want to create. MRS provides two billing modes: yearly/monthly and pay-per-use. Pay-per-use is a postpaid mode: you pay for what you use, with usage calculated by the second and billed every hour.

    • Region: CN-Hong Kong. Region to which the requested MRS resources belong. MRS clusters in different regions cannot communicate with each other over an intranet. For lower network latency and quicker resource access, select the nearest region.

    • Cluster Name: mrs_demo. Name of the MRS cluster you want to create.

    • Cluster Type: Custom. A range of clusters that accommodate diverse big data demands. You can select a Custom cluster to run a wide range of analytics components supported by MRS.

    • Version Type: LTS. Version type of the MRS cluster. Supported open-source components and their functions vary depending on the cluster version. You are advised to select the latest version.

    • Cluster Version: MRS 3.2.0-LTS.1. Version of the MRS cluster.

    • Component: Real-time Analysis Cluster. Cluster template containing the preset open-source components you will need for your business.

    • AZ: AZ 1. Availability zone associated with the cluster region.

    • VPC: vpc-default. VPC where you want to create the cluster. You can click View VPC to view the name and ID. If no VPC is available, create one.

    • Subnet: subnet-default. Subnet to which your cluster belongs. You can access the VPC management console to view the names and IDs of existing subnets in the VPC. If the VPC has no subnet, click Create Subnet to create one.

    • Cluster Node: Default value. Cluster node details.

    • Kerberos Authentication: Disabled. Whether Kerberos authentication is enabled.

    • Username: admin/root. Username for logging in to the cluster management page and the ECS node.

    • Password: -. User password for logging in to the cluster management page and the ECS node.

    • Confirm Password: -. Enter the user password again.

    • Enterprise Project: default. Enterprise project to which the cluster belongs.

    • Secure Communications: Selected. Select the check box to agree to use the access control rules.

    Figure 1 Purchasing a real-time analysis cluster

  5. Click Buy Now. A page is displayed showing that the task has been submitted.
  6. Click Back to Cluster List. You can view the status of the newly created cluster on the Active Clusters page.

    Wait for the cluster creation to complete. The initial status of the cluster is Starting. After the cluster is created, the cluster status becomes Running.

Step 2: Installing the Cluster Client

You need to install a cluster client to connect to component services in the cluster and submit jobs.

You can install the client on a node in or outside the cluster. In this example, the client is installed on the Master1 node.

  1. Click the MRS cluster name in the cluster list to go to the dashboard page.
  2. Click Access Manager next to MRS Manager. In the displayed dialog box, select EIP and configure the EIP information.

    For the first access, click Manage EIPs to purchase an EIP on the EIP console. Go back to the Access MRS Manager dialog box, refresh the EIP list, and select the EIP.

  3. Select the confirmation check box and click OK to log in to the FusionInsight Manager of the cluster.

    The username for logging in to FusionInsight Manager is admin, and the password is the one configured during cluster purchase.

  4. On the displayed Homepage, click the icon next to the cluster name and then click Download Client to download the cluster client.

    Figure 2 Downloading the client

    In the Download Cluster Client dialog box, set the following parameters:

    • Set Select Client Type to Complete Client.
    • Retain the default value for Platform Type, for example, x86_64.
    • Retain the default path for Save to Path. The generated file will be saved in the /tmp/FusionInsight-Client directory on the active OMS node of the cluster.
    Figure 3 Downloading the cluster client

    Click OK and wait until the client software is generated.

  5. Go back to the MRS console and click the cluster name in the cluster list. Go to the Nodes tab, click the name of the node that contains master1. In the upper right corner of the ECS details page, click Remote Login to log in to the Master1 node.

    Figure 4 Checking the Master1 node

  6. Log in to the Master1 node as user root. The password is the one you set for the root user during cluster purchase.
  7. Switch to the directory where the client software package is stored and decompress the package.

    cd /tmp/FusionInsight-Client/

    tar -xvf FusionInsight_Cluster_1_Services_Client.tar

    tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

  8. Go to the directory where the installation package is stored and install the client.

    cd FusionInsight_Cluster_1_Services_ClientConfig

    Install the client to a specified directory (an absolute path), for example, /opt/client.

    ./install.sh /opt/client

    ...
    ... component client is installed successfully
    ...
    NOTE:

    A client installation directory is created automatically if it does not exist. If the directory already exists, it must be empty. The directory path cannot contain spaces, and the directory name can contain only uppercase letters, lowercase letters, digits, and underscores (_).
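The directory rules above can be checked before running install.sh. The following is a minimal sketch with a hypothetical helper function and demo path, not part of the MRS client tooling:

```shell
# Sketch: pre-flight checks for a client installation directory, assuming
# the rules stated in the note above (absolute path; empty if it already
# exists; only letters, digits, and underscores in each path component).
check_client_dir() {
  dir="$1"
  case "$dir" in
    /*) ;;                                   # must be an absolute path
    *) echo "not absolute"; return 1 ;;
  esac
  # Each path component may contain only letters, digits, and underscores
  if ! printf '%s\n' "$dir" | grep -Eq '^(/[A-Za-z0-9_]+)+$'; then
    echo "invalid characters"; return 1
  fi
  # An existing directory must be empty
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir")" ]; then
    echo "not empty"; return 1
  fi
  echo "ok"
}

check_client_dir /opt/client_demo
```

If any check fails, choose another path before rerunning ./install.sh.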

Step 3: Using the Kafka Client to Create a Topic

  1. In the cluster list, click the name of the target cluster. The dashboard tab is displayed.
  2. On the displayed page, click Synchronize next to IAM User Sync. In the displayed dialog box, select All, and click Synchronize. Wait until the synchronization task is complete.
  3. Go to the Components tab, click ZooKeeper, and then click the Instances tab. Check and record the IP address of a ZooKeeper quorumpeer role instance.

    Figure 5 Checking IP addresses of ZooKeeper role instances

  4. Click Service Configuration and check the value of clientPort, which indicates the ZooKeeper client connection port.
  5. Click Service ZooKeeper to return to the component list.

    Figure 6 Going back to the component list

  6. Click Kafka, and then the Instances tab. Check and record the IP address of a Kafka Broker instance.

    Figure 7 Checking the IP address of a broker instance

  7. Click Service Configuration and check the value of port, which indicates the port for connecting to Kafka Broker.
  8. Log in to the node (Master1) where the MRS client is located as user root.
  9. Switch to the client installation directory and configure environment variables.

    cd /opt/client

    source bigdata_env

  10. Create a Kafka topic.

    kafka-topics.sh --create --zookeeper <IP address of the ZooKeeper role instance>:<ZooKeeper client connection port>/kafka --partitions 2 --replication-factor 2 --topic <Topic name>

    The following is an example:

    kafka-topics.sh --create --zookeeper 192.168.21.234:2181/kafka --partitions 2 --replication-factor 2 --topic Topic1

    If the following information is displayed, the topic is created:

    Created topic Topic1.
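After the topic is created, its partition and replica layout can be confirmed with the standard kafka-topics.sh --describe option against the same ZooKeeper address. The sketch below only assembles the command from the example values used in this walkthrough; run the assembled command on the client node after sourcing bigdata_env:

```shell
# Assemble the describe command from the values recorded in this step.
# The ZooKeeper address and topic name are the example values used above.
ZK_ADDR="192.168.21.234:2181"
TOPIC="Topic1"
DESCRIBE_CMD="kafka-topics.sh --describe --zookeeper ${ZK_ADDR}/kafka --topic ${TOPIC}"

# On the Master1 node (after 'cd /opt/client; source bigdata_env'), run:
#   $DESCRIBE_CMD
echo "$DESCRIBE_CMD"
```

The describe output lists each partition's leader, replicas, and in-sync replicas, so you can verify the --partitions 2 --replication-factor 2 settings took effect.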

Step 4: Managing Messages in the Kafka Topic

  1. Log in to the node (Master1) where the MRS client is deployed as user root.
  2. Switch to the client installation directory and configure environment variables.

    cd /opt/client

    source bigdata_env

  3. Generate a message in Topic1.

    kafka-console-producer.sh --broker-list <IP address of the node where the Kafka Broker role is deployed>:<Broker connection port> --topic <Topic name> --producer.config /opt/client/Kafka/kafka/config/producer.properties

    For the IP address and port number of the node where the Kafka Broker instance is deployed, see 6 and 7 in Step 3: Using the Kafka Client to Create a Topic.

    The following is an example:

    kafka-console-producer.sh --broker-list 192.168.21.21:9092 --topic Topic1 --producer.config /opt/client/Kafka/kafka/config/producer.properties

  4. Open a new client connection window.

    cd /opt/client

    source bigdata_env

  5. Consume messages in Topic1.

    kafka-console-consumer.sh --topic <Topic name> --bootstrap-server <IP address of the node where the Kafka Broker role is deployed>:<Broker connection port> --consumer.config /opt/client/Kafka/kafka/config/consumer.properties

    The following is an example:

    kafka-console-consumer.sh --topic Topic1 --bootstrap-server 192.168.21.21:9092 --consumer.config /opt/client/Kafka/kafka/config/consumer.properties

  6. In the producer window opened in 3, type the message content and press Enter to send it. Each line is sent as one message.

    The following is an example:

    >aaa
    >bbb
    >ccc

    To stop generating messages, press Ctrl+C to exit.

  7. In the message consuming window of 5, check whether the messages are consumed.

    aaa
    bbb
    ccc
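The consumer started in 5 only shows messages produced after it starts. To re-read everything already stored in the topic, the console consumer's standard --from-beginning flag can be added. The sketch below only assembles such a command from the example values used in this walkthrough:

```shell
# Assemble a consumer command that replays the topic from its first offset.
# Broker address, topic, and config path are the example values used above.
BROKER="192.168.21.21:9092"
TOPIC="Topic1"
CONSUME_CMD="kafka-console-consumer.sh --topic ${TOPIC} --bootstrap-server ${BROKER} --from-beginning --consumer.config /opt/client/Kafka/kafka/config/consumer.properties"

# Run this on the client node after sourcing bigdata_env; press Ctrl+C to stop.
echo "$CONSUME_CMD"
```

Running the assembled command in a new client window should print aaa, bbb, and ccc again, confirming the messages persisted in the topic.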

Follow-up Operations: Releasing Resources

To avoid additional expenditures, release resources promptly if you no longer need them. For details, see Deleting an MRS Cluster.

Related Information

For information about Kafka permission management, topic management and message consumption, HA configuration, and data balancing, see Using Kafka.

