
Synchronizing Kafka Data to ClickHouse

Updated on 2024-07-19 GMT+08:00

This section describes how to create a Kafka table to automatically synchronize Kafka data to the ClickHouse cluster.

Prerequisites

  • You have created a Kafka cluster and installed the Kafka client.
  • You have created a ClickHouse cluster and installed the ClickHouse client. The ClickHouse and Kafka clusters are in the same VPC and can communicate with each other.

Constraints

Currently, ClickHouse cannot interconnect with Kafka clusters with security mode enabled.

Syntax of the Kafka Table

  • Syntax
    CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
    (
        name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
        name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
        ...
    ) ENGINE = Kafka()
    SETTINGS
        kafka_broker_list = 'host1:port1,host2:port2',
        kafka_topic_list = 'topic1,topic2,...',
        kafka_group_name = 'group_name',
        kafka_format = 'data_format'
        [,kafka_row_delimiter = 'delimiter_symbol']
        [,kafka_schema = '']
        [,kafka_num_consumers = N];
  • Parameter description
    Table 1 Kafka table parameters

    kafka_broker_list (mandatory)
        A list of Kafka broker instances, separated by commas (,). For example: IP address 1 of the Kafka broker instance:9092,IP address 2 of the Kafka broker instance:9092,IP address 3 of the Kafka broker instance:9092.
        NOTE: If Kerberos authentication is enabled and port 21005 is used, the allow.everyone.if.no.acl.found parameter must be set to true. Otherwise, an error is reported.
        You can obtain the IP addresses of the Kafka broker instances from the Kafka instance list on FusionInsight Manager.

    kafka_topic_list (mandatory)
        A list of Kafka topics.

    kafka_group_name (mandatory)
        A Kafka consumer group, which can be customized.

    kafka_format (mandatory)
        Kafka message format, for example, JSONEachRow, CSV, or XML.

    kafka_row_delimiter (optional)
        Delimiter character that marks the end of a message.

    kafka_schema (optional)
        Schema definition, which must be provided if the selected format requires one.

    kafka_num_consumers (optional)
        Number of consumers per table. The default value is 1. Specify more consumers if the throughput of a single consumer is insufficient. The total number of consumers cannot exceed the number of partitions in the topic, because at most one consumer can be assigned to each partition.
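    For reference, the following is a minimal sketch that combines the optional settings with the mandatory ones. The broker addresses, topic, consumer group, and table name are illustrative placeholders, not values taken from your cluster:

    CREATE TABLE IF NOT EXISTS default.kafka_csv_src ON CLUSTER default_cluster
    (
        id UInt32,
        name String
    ) ENGINE = Kafka()
    SETTINGS
        kafka_broker_list = '192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092',
        kafka_topic_list = 'csv_topic1',
        kafka_group_name = 'cg_csv_demo',
        kafka_format = 'CSV',
        kafka_row_delimiter = '\n',  -- optional: each message ends with a newline
        kafka_num_consumers = 2;     -- optional: csv_topic1 needs at least 2 partitions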

How to Synchronize Kafka Data to ClickHouse

  1. Switch to the Kafka client installation directory. For details, see Using the Kafka Client.

    1. Log in to the node where the Kafka client is installed as the Kafka client installation user.
    2. Run the following command to go to the client installation directory:

      cd /opt/client

    3. Run the following command to configure environment variables:

      source bigdata_env

    4. If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the current user. If Kerberos authentication is disabled for the current cluster, skip this step.

      For an MRS 3.1.0 cluster, run the export CLICKHOUSE_SECURITY_ENABLED=true command first.

      kinit Component service user

  2. Run the following command to create a Kafka topic. For details, see Managing Kafka Topics.

    kafka-topics.sh --topic kafkacktest2 --create --zookeeper IP address of the ZooKeeper role instance:Port used by ZooKeeper to listen to client/kafka --partitions 2 --replication-factor 1

    NOTE:
    • --topic is the name of the topic to be created, for example, kafkacktest2.
    • --zookeeper specifies the IP address of a node where a ZooKeeper role instance is deployed; the IP address of any of the three role instances can be used. You can obtain the IP addresses from the ZooKeeper instance list on FusionInsight Manager.
    • --partitions and --replication-factor specify the number of topic partitions and replicas, respectively. Neither value can exceed the number of Kafka role instances.
    • To obtain the Port used by ZooKeeper to listen to client, log in to FusionInsight Manager, click Cluster, choose Services > ZooKeeper, and view the value of clientPort on the Configuration tab page. The default value is 24002.

  3. Log in to the ClickHouse client by referring to Using ClickHouse from Scratch.

    1. Run the following command to go to the client installation directory:

      cd /opt/client

    2. Run the following command to configure environment variables:

      source bigdata_env

    3. If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the current user. The user must have the permission to create ClickHouse tables. Therefore, you need to bind the corresponding role to the user. For details, see ClickHouse User and Permission Management. If Kerberos authentication is disabled for the current cluster, skip this step.

      kinit Component service user

      Example: kinit clickhouseuser

    4. Run the following command to connect to the ClickHouse instance node to which data is to be imported:

      clickhouse client --host IP address of the ClickHouse instance --user Login username --password --port ClickHouse port number --database Database name --multiline

      Enter the user password.
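      Once connected, you can optionally confirm the logical cluster name that the ON CLUSTER clauses in the following steps rely on (default_cluster in the examples). system.clusters is a standard ClickHouse system table; the cluster name in your environment may differ:

      SELECT cluster, shard_num, replica_num, host_name FROM system.clusters;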

  4. Create a Kafka table in ClickHouse by referring to Syntax of the Kafka Table. For example, the following statement creates, in the default database, a Kafka table named kafka_src_tbl3 that consumes the topic kafkacktest2 in the JSONEachRow message format.

    create table kafka_src_tbl3 on cluster default_cluster 
    (id UInt32, age UInt32, msg String)  
    ENGINE=Kafka() 
    SETTINGS 
     kafka_broker_list='IP address 1 of the Kafka broker instance:9092,IP address 2 of the Kafka broker instance:9092,IP address 3 of the Kafka broker instance:9092',
     kafka_topic_list='kafkacktest2',
     kafka_group_name='cg12',
     kafka_format='JSONEachRow';
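    Optionally, verify the definition with standard ClickHouse statements. Note that a Kafka table is only a streaming source; its data becomes queryable in a normal table through the materialized view created in 6.

    SHOW CREATE TABLE kafka_src_tbl3;
    DESCRIBE TABLE kafka_src_tbl3;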

  5. Create a ClickHouse replicated table, for example, the ReplicatedMergeTree table named kafka_dest_tbl3.

    create table kafka_dest_tbl3 on cluster default_cluster 
    ( id UInt32, age UInt32, msg String )
    engine = ReplicatedMergeTree('/clickhouse/tables/{shard}/default/kafka_dest_tbl3', '{replica}')
    partition by age 
    order by id;
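    The ZooKeeper path in the engine definition uses the {shard} and {replica} macros, which ClickHouse expands on each node. If the statement fails with a macro-related error, you can inspect the values defined on the current node; system.macros is a standard ClickHouse system table:

    SELECT * FROM system.macros;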

  6. Create a materialized view, which converts the Kafka data in the background and writes it to the ClickHouse replicated table created in 5.

    create materialized view consumer3 on cluster default_cluster to kafka_dest_tbl3 as select * from kafka_src_tbl3;
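    The following sketch is an alternative to the view above, not an addition to it: every attached view receives each consumed message, so two views writing to the same target table would duplicate rows. Listing the columns explicitly guards against later schema changes, and detaching the view pauses consumption while re-attaching it resumes (standard ClickHouse behavior, applied on the node where the statement runs). The name consumer3_explicit is illustrative only.

    create materialized view consumer3_explicit on cluster default_cluster to kafka_dest_tbl3 as select id, age, msg from kafka_src_tbl3;
    -- pause consumption
    detach table consumer3_explicit;
    -- resume consumption
    attach table consumer3_explicit;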

  7. Repeat 1 to go to the Kafka client installation directory.
  8. Run the following command to send a message to the topic created in 2:

    kafka-console-producer.sh --broker-list IP address 1 of the Kafka broker instance:9092,IP address 2 of the Kafka broker instance:9092,IP address 3 of the Kafka broker instance:9092 --topic kafkacktest2
    >{"id":31, "age":30, "msg":"31 years old"}
    >{"id":32, "age":30, "msg":"31 years old"}
    >{"id":33, "age":30, "msg":"31 years old"}
    >{"id":35, "age":30, "msg":"31 years old"}

  9. Use the ClickHouse client to log in to the ClickHouse instance node in 3 and query the replicated table kafka_dest_tbl3. The query result shows that the Kafka messages have been synchronized to this table.

    select * from kafka_dest_tbl3;
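    Optional follow-up checks, assuming the four sample messages from 8 were sent. Delivery through the materialized view is asynchronous, so the rows may take a moment to appear:

    select count() from kafka_dest_tbl3;
    select age, count() from kafka_dest_tbl3 group by age;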
