Interconnecting ClickHouse with Kafka Through Kerberos Authentication
Scenarios
This section describes how to connect ClickHouse to Kafka using Kerberos authentication and how to consume Kafka data.
Notes and Constraints
This section applies only to MRS 3.3.0-LTS or later.
Prerequisites
- A Kafka cluster has been created and is in security mode (Kerberos authentication is enabled).
- The cluster client has been installed.
- If ClickHouse and Kafka are not in the same cluster, establish cross-cluster mutual trust between them. For details, see Configuring Cross-Manager Mutual Trust Between Clusters.
Procedure
- Log in to the FusionInsight Manager of the cluster where Kafka is deployed, choose System > Permission > User > Create User, and create a human-machine user with the Kafka permission. For example, create a human-machine user ck_user1. For details about Kafka user permission, see Kafka User Permissions.
- Choose System > Permission > User. On the displayed page, locate the ck_user1 user, and click More > Download Authentication Credential in the Operation column of the user. Save the file and decompress it to obtain the user.keytab and krb5.conf files. Rename the user.keytab file clickhouse_to_kafka.keytab.
- Log in to the FusionInsight Manager of the cluster where ClickHouse is deployed, and choose Cluster > Services > ClickHouse. Click Configurations and then All Configurations, and choose ClickHouseServer(Role) > Engine. Configure the parameters described below.
- kafka.sasl_kerberos_principal: Principal used to connect to Kafka, in the username@domain name format. The username is the user created in 1, and the domain name is the system domain name.
- kafka.sasl_kerberos_realm: Domain name of the Kafka cluster.
- kafka.sasl_mechanisms: SASL authentication mechanism for connecting to Kafka. Set this parameter to GSSAPI.
- kafka_auth_mode: Authentication mode for ClickHouse to connect to Kafka. Set this parameter to Kerberos.
- sasl_kerberos_keytab: Authentication file for connecting to Kafka, which is the clickhouse_to_kafka.keytab file uploaded in 2.
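Taken together, the engine parameters above might be set as in the following fragment. The values are the examples used in this section; HADOOP.COM is a placeholder for the system domain name, so replace it with the value from your environment:

kafka.sasl_kerberos_principal = ck_user1@HADOOP.COM
kafka.sasl_kerberos_realm = HADOOP.COM
kafka.sasl_mechanisms = GSSAPI
kafka_auth_mode = Kerberos
sasl_kerberos_keytab = clickhouse_to_kafka.keytab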
- Click Save. In the displayed dialog box, click OK to save the configuration. Choose Instances, select ClickHouseServer, and click More > Instance Rolling Restart.
- Go to the Kafka client installation directory. For details, see Using the Kafka Client.
- Log in to the node where the Kafka client is installed as the Kafka client installation user.
- Run the following command to go to the client installation directory:
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
- Run the following command to authenticate the user:
kinit Component service user (for example, kinit ck_user1)
- Run the following command to create a Kafka topic. For details, see Creating a Kafka Topic.
kafka-topics.sh --topic topic1 --create --bootstrap-server <Kafka cluster IP address:9092> --command-config Kafka/kafka/config/client.properties --partitions 2 --replication-factor 1
Table 1 Parameter description
- --topic: Name of the topic to be created, for example, topic1.
- --bootstrap-server: IP address and port of a node in the Kafka cluster. To obtain the IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, and click the Instance tab to query the IP address of the Broker role instance.
- --partitions: Number of topic partitions. The value cannot be greater than the number of Kafka role instances.
- --replication-factor: Number of topic replicas. The value cannot be greater than the number of Kafka role instances.
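To verify that the topic was created with the expected partition and replica counts, you can describe it. This is a sketch that reuses the placeholder from the creation command; replace the bootstrap server address with an actual broker address:

kafka-topics.sh --topic topic1 --describe --bootstrap-server <Kafka cluster IP address:9092> --command-config Kafka/kafka/config/client.properties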
- Log in to the ClickHouse client node and connect it to the ClickHouse server. For details, see ClickHouse Client Practices.
- Create a table that uses the Kafka table engine. The following is an example:
CREATE TABLE queue1
(
    key String,
    value String,
    event_date DateTime
)
ENGINE = Kafka()
SETTINGS kafka_broker_list = 'kafka_ip1:21007,kafka_ip2:21007,kafka_ip3:21007',
         kafka_topic_list = 'topic1',
         kafka_group_name = 'group2',
         kafka_format = 'CSV',
         kafka_row_delimiter = '\n',
         kafka_handle_error_mode = 'stream';
The related parameters are described as follows:
- kafka_broker_list: A list of Kafka broker instance IP addresses and port numbers, for example, IP address 1 of Kafka broker instance:9092,IP address 2 of Kafka broker instance:9092,IP address 3 of Kafka broker instance:9092. To obtain the IP addresses, log in to FusionInsight Manager, choose Cluster > Services > Kafka, and click the Instance tab to query the IP addresses of the Kafka instances.
- kafka_topic_list: Kafka topic from which data is consumed.
- kafka_group_name: Kafka consumer group.
- kafka_format: Format of the consumed data. JSONEachRow indicates JSON format, with one record per line. CSV indicates comma-separated values, with one record per line.
- kafka_row_delimiter: Delimiter character that ends a message.
- kafka_handle_error_mode: If this parameter is set to stream, each message processing exception is recorded. You need to create a view and query the specific exceptions of abnormal data through the view.
The following example shows you how to create a view:
CREATE MATERIALIZED VIEW default.kafka_errors2
(
    `topic` String,
    `key` String,
    `partition` Int64,
    `offset` Int64,
    `timestamp` Date,
    `timestamp_ms` Int64,
    `raw` String,
    `error` String
)
ENGINE = MergeTree
ORDER BY (topic, partition, offset)
SETTINGS index_granularity = 8192 AS
SELECT
    _topic AS topic,
    _key AS key,
    _partition AS partition,
    _offset AS offset,
    _timestamp AS timestamp,
    _timestamp_ms AS timestamp_ms,
    _raw_message AS raw,
    _error AS error
FROM default.queue1;
Query the view. The following is an example:
host1 :) select * from kafka_errors2;

SELECT * FROM kafka_errors2

Query id: bf4d788f-bcb9-44f5-95d0-a6c83c591ddb

┌─topic──┬─key─┬─partition─┬─offset─┬──timestamp─┬─timestamp_ms─┬─raw─┬─error────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ topic1 │     │         1 │      8 │ 2023-06-20 │   1687252213 │ 456 │ Cannot parse date: value is too short: (at row 1) Buffer has gone, cannot extract information about what has been parsed. │
└────────┴─────┴───────────┴────────┴────────────┴──────────────┴─────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

1 rows in set. Elapsed: 0.003 sec.

host1 :)
- kafka_skip_broken_messages: (Optional) Number of Kafka messages with parsing exceptions that can be ignored. If more than N exceptions occur and the background thread ends, the materialized view is rescheduled to monitor the data.
- kafka_num_consumers: (Optional) Number of consumers of a single Kafka engine. You can set this parameter to a larger value to improve the data consumption throughput, but the value cannot exceed the total number of partitions of the corresponding topic.
For details about other configurations, see https://clickhouse.com/docs/en/engines/table-engines/integrations/kafka.
- Use the client to connect to ClickHouse and create a local table. The following is an example:
CREATE TABLE daily1
(
    key String,
    value String,
    event_date DateTime
)
ENGINE = MergeTree()
ORDER BY key;
- Use the client to connect to ClickHouse and create a materialized view. The following is an example:
CREATE MATERIALIZED VIEW default.consumer1 TO default.daily1
(
    `event_date` DateTime,
    `key` String,
    `value` String
)
AS SELECT
    event_date,
    key,
    value
FROM default.queue1;
- Perform 5 again to go to the Kafka client installation directory.
- Run the following command to send a message to the topic created in 6:
kafka-console-producer.sh --broker-list IP address 1 of the Kafka Broker instance:9092,IP address 2 of the Kafka Broker instance:9092,IP address 3 of the Kafka Broker instance:9092 --topic topic1 --producer.config Kafka/kafka/config/producer.properties
The result is as follows:
>a1,b1,'2020-08-01 10:00:00'
>a2,b2,'2020-08-02 10:00:00'
>a3,b3,'2020-08-02 10:00:00'
>a4,b4,'2023-09-02 10:00:00'
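Instead of typing each row interactively, the same test data can be piped to the producer. This is a sketch; the broker list placeholder is the same as in the command above and must be replaced with actual broker addresses:

printf '%s\n' "a1,b1,'2020-08-01 10:00:00'" "a2,b2,'2020-08-02 10:00:00'" "a3,b3,'2020-08-02 10:00:00'" "a4,b4,'2023-09-02 10:00:00'" | kafka-console-producer.sh --broker-list IP address 1 of the Kafka Broker instance:9092 --topic topic1 --producer.config Kafka/kafka/config/producer.properties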
- Query the consumed Kafka data and the preceding materialized view. The following is an example:
select * from daily1;