Creating a FlinkServer Job to Interconnect with a Kafka Message Queue
This section applies to MRS 3.1.2 or later clusters.
Scenarios
This section describes the data definition language (DDL) for using Kafka as a source or sink table, the supported WITH parameters, and example table-creation code, and explains how to perform these operations on the FlinkServer job management page.
The following example SQL statements apply to clusters in security mode. For clusters in normal mode, remove the parameters marked in the SQL comments.
Prerequisites
- The HDFS, Yarn, Kafka, and Flink services have been installed in a cluster.
- The cluster client that contains the Kafka service has been installed, for example, in the /opt/client directory.
- You have created a user assigned with the FlinkServer Admin Privilege (for example, flink_admin) for accessing the Flink web UI by referring to Creating a FlinkServer Role.
Creating a Job
1. Log in to Manager as user flink_admin and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
2. Create a Flink SQL stream job by referring to Creating a Job. On the job development page, configure the job parameters as follows and start the job.
In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode. Then enter the following SQL statements:
CREATE TABLE KafkaSource (
  `user_id` VARCHAR,
  `user_name` VARCHAR,
  `age` INT
) WITH (
  'connector' = 'kafka',
  'topic' = 'test_source',
  'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'csv',
  -- The following three parameters are not required for clusters in normal mode. If you delete them, also delete the comma (,) at the end of the previous line.
  'properties.sasl.kerberos.service.name' = 'kafka',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.kerberos.domain.name' = 'hadoop.System domain name'
);

CREATE TABLE KafkaSink (
  `user_id` VARCHAR,
  `user_name` VARCHAR,
  `age` INT
) WITH (
  'connector' = 'kafka',
  'topic' = 'test_sink',
  'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
  'value.format' = 'csv',
  -- The following three parameters are not required for clusters in normal mode. If you delete them, also delete the comma (,) at the end of the previous line.
  'properties.sasl.kerberos.service.name' = 'kafka',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.kerberos.domain.name' = 'hadoop.System domain name'
);

INSERT INTO KafkaSink SELECT * FROM KafkaSource;
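This job continuously copies every CSV record from the test_source topic to the test_sink topic; the verification steps below write to test_source and read from test_sink.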
- The Kafka Broker instance IP address and Kafka port number can be obtained as follows:
- To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instances, and view the service IP address of the Broker instance on the instance list page.
- If Authentication Mode of the cluster is Security Mode, the port number is the value of sasl.port (21007 by default).
- If Authentication Mode of the cluster is Normal Mode, the port number is the value of port (9092 by default). If port 9092 is used, set allow.everyone.if.no.acl.found to true as follows:
Log in to FusionInsight Manager and choose Cluster > Services > Kafka. On the page that is displayed, click the Configurations tab and then the All Configurations sub-tab. Search for allow.everyone.if.no.acl.found, set it to true, and click Save.
- System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
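For example, if Local Domain is HADOOP.COM, hadoop.System domain name typically resolves to hadoop.hadoop.com, that is, hadoop. followed by the domain name in lowercase (check your cluster's actual configuration).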
- If you are using Flink 1.15.0 or an earlier version, restart the Flink job after expanding the partitions of a Kafka topic. Otherwise, the new partitions may not be detected and the data written to them may be missed. Alternatively, you can enable Kafka topic partition discovery in Flink: add the scan.topic-partition-discovery.interval parameter to the WITH clause of the SQL Kafka source table and set it to a dynamic refresh interval, for example, 5min, as shown in the sketch below.
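A minimal sketch of the example source table with partition discovery enabled (the placeholders are the same ones used in the example job above):
CREATE TABLE KafkaSource (
  `user_id` VARCHAR,
  `user_name` VARCHAR,
  `age` INT
) WITH (
  'connector' = 'kafka',
  'topic' = 'test_source',
  'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'csv',
  -- Newly created partitions of test_source are picked up within 5 minutes.
  'scan.topic-partition-discovery.interval' = '5min'
);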
3. On the job management page, check whether the job status is Running.
4. Run the following command to start a consumer and check whether data is received in the sink table, that is, whether data is properly written to the Kafka topic after step 5 is performed. For details, see Managing Messages in Kafka Topics.
sh kafka-console-consumer.sh --topic test_sink --bootstrap-server Service IP address of the Kafka Broker instance:Kafka port --consumer.config /opt/client/Kafka/kafka/config/consumer.properties
5. View the topic and write data to the Kafka topic by referring to Managing Messages in Kafka Topics. After the data is written, view the execution result in the window opened in step 4.
Check the Kafka topic.
./kafka-topics.sh --list --bootstrap-server Service IP address of the Kafka Broker instance:Kafka port --command-config Client directory/Kafka/kafka/config/client.properties
Write data to Kafka.
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties
In this example, the topic name is test_source.
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic test_source --producer.config /opt/client/Kafka/kafka/config/producer.properties
Enter the message content:
1,clw,33
Press Enter to send the message.
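Because the job copies every record from test_source to test_sink unchanged, the consumer window opened in step 4 should then print the same CSV line:
1,clw,33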
WITH Parameters
Parameter | Mandatory | Type | Description
---|---|---|---
connector | Yes | String | Connector to be used. Set this parameter to kafka for Kafka.
topic | Yes (Kafka functions as a sink table.) | String | Topic name. topic and topic-pattern cannot be set at the same time.
topic-pattern | No (Kafka functions as a source table.) | String | Topic pattern, available only when Kafka is used as a source table. The value must be a regular expression. topic-pattern and topic cannot be set at the same time.
properties.bootstrap.servers | Yes | String | List of Kafka brokers, separated by commas (,).
properties.group.id | Yes (Kafka functions as a source table.) | String | ID of the Kafka consumer group.
format | Yes | String | Format used to deserialize and serialize the value part of Kafka messages.
properties.* | No | String | Authentication-related parameters that need to be added in security mode.
scan.topic-partition-discovery.interval | No | Duration | Interval at which the consumer dynamically discovers newly created topic partitions. The default value is 5 minutes.
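For example, a source table that subscribes to every topic matching a regular expression might look as follows (a minimal sketch; the pattern test_source_.* is a hypothetical example, and in security mode the same properties.* parameters as in the example job above must be added):
CREATE TABLE KafkaPatternSource (
  `user_id` VARCHAR,
  `user_name` VARCHAR,
  `age` INT
) WITH (
  'connector' = 'kafka',
  -- Hypothetical pattern: matches test_source_1, test_source_2, and so on. Do not set 'topic' together with 'topic-pattern'.
  'topic-pattern' = 'test_source_.*',
  'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'csv'
);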