
Interconnecting FlinkServer with Kafka

Scenario

This section describes the data definition language (DDL) for using Kafka as a source or sink table, the WITH parameters and example code for creating such tables, and how to perform these operations on the FlinkServer job management page.

Kafka in security mode is used as an example.

Prerequisites

  • The HDFS, Yarn, Kafka, and Flink services have been installed in a cluster.
  • The client that contains the Kafka service has been installed in a directory, for example, /opt/client.
  • You have created a user assigned with the FlinkServer Admin Privilege (for example, flink_admin) for accessing the Flink web UI by referring to Creating a FlinkServer Role.

Procedure

  1. Log in to Manager as user flink_admin and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
  2. Create a Flink SQL job by referring to Creating a Job. On the job development page, configure the job parameters as follows and start the job.

    In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.
    CREATE TABLE KafkaSource (
      `user_id` VARCHAR,
      `user_name` VARCHAR,
      `age` INT
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'test_source',
      'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
      'properties.group.id' = 'testGroup',
      'scan.startup.mode' = 'latest-offset',
      'format' = 'csv',
      'properties.sasl.kerberos.service.name' = 'kafka',
      'properties.security.protocol' = 'SASL_PLAINTEXT',
      'properties.kerberos.domain.name' = 'hadoop.System domain name'
    );
    CREATE TABLE KafkaSink(
      `user_id` VARCHAR,
      `user_name` VARCHAR,
      `age` INT
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'test_sink',
      'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
      'value.format' = 'csv',
      'properties.sasl.kerberos.service.name' = 'kafka',
      'properties.security.protocol' = 'SASL_PLAINTEXT',
      'properties.kerberos.domain.name' = 'hadoop.System domain name'
    );
    INSERT INTO KafkaSink
    SELECT * FROM KafkaSource;
    • Kafka port
      • If Kerberos authentication is enabled for the cluster (the cluster is in security mode), the broker port number is the value of sasl.port. The default value is 21007.
      • If Kerberos authentication is disabled for the cluster (the cluster is in normal mode), the broker port number is the value of port. The default value is 9092. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:

        Log in to FusionInsight Manager and choose Cluster > Services > Kafka. Click Configurations and then All Configurations. On the displayed page, search for allow.everyone.if.no.acl.found, set it to true, and click Save.

    • System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
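
    For example, combining the notes above for a security-mode cluster: if a broker instance runs at the hypothetical address 192.168.20.10 (so the default SASL port 21007 applies) and Local Domain is HADOOP.COM, the connection options in the WITH clause would be filled in as shown below (the system domain name in properties.kerberos.domain.name is written in lowercase). All concrete values here are illustrative placeholders; replace them with the values of your own cluster.

      'properties.bootstrap.servers' = '192.168.20.10:21007',
      'properties.security.protocol' = 'SASL_PLAINTEXT',
      'properties.sasl.kerberos.service.name' = 'kafka',
      'properties.kerberos.domain.name' = 'hadoop.hadoop.com'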

  3. On the job management page, check whether the job status is Running.
  4. Run the following command to check whether data is received in the sink table, that is, whether data is properly written to the Kafka topic after step 5 is performed. For details, see Managing Messages in Kafka Topics.

    sh kafka-console-consumer.sh --topic test_sink --bootstrap-server Service IP address of the Kafka broker instance:Kafka port number --consumer.config /opt/client/Kafka/kafka/config/consumer.properties
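
    For example, if a broker instance runs at the hypothetical address 192.168.20.10 in a security-mode cluster (default SASL port 21007), the command is sh kafka-console-consumer.sh --topic test_sink --bootstrap-server 192.168.20.10:21007 --consumer.config /opt/client/Kafka/kafka/config/consumer.properties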

  5. View the topic and write data to the Kafka topic by referring to Managing Messages in Kafka Topics. After the data is written, view the execution result in the window opened in step 4.

    ./kafka-topics.sh --list --zookeeper IP address of the ZooKeeper quorumpeer instance:ZooKeeper port number/kafka

    sh kafka-console-producer.sh --broker-list IP address of the node where Kafka instances reside:Kafka port number --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties

    For example, if the topic name is test_source, the script is sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is located:Kafka port number --topic test_source --producer.config /opt/client/Kafka/kafka/config/producer.properties

    Enter the message content.
    1,clw,33

    Press Enter to send the message.
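
    If the job is running properly, the record written to test_source is processed by the Flink job and written to test_sink, so the consumer window opened in step 4 should display the same content:
    1,clw,33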

    • IP address of the ZooKeeper quorumpeer instance

      To obtain the IP addresses of all ZooKeeper quorumpeer instances, log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Instance and view the IP addresses of all the hosts where the quorumpeer instances are located.

    • Port number of the ZooKeeper client

      Log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Configurations and check the value of clientPort. The default value is 2181.

WITH Parameters

Table 1 WITH Parameters

connector
  Mandatory: Yes
  Type: String
  Description: Connector to be used. Set this parameter to kafka for Kafka.

topic
  Mandatory: Yes when Kafka functions as a sink table; No when Kafka functions as a source table
  Type: String
  Description: Topic name.
    • When Kafka is used as a source table, this parameter indicates the name of the topic from which data is read. A topic list is supported; separate the topics with semicolons (;), for example, Topic-1;Topic-2.
    • When Kafka is used as a sink table, this parameter indicates the name of the topic to which data is written. Topic lists are not supported for sinks.

topic-pattern
  Mandatory: No (available only when Kafka functions as a source table)
  Type: String
  Description: Topic pattern. The value must be a regular expression that matches the names of the topics to read from (see the sketch after this table).
  NOTE: topic-pattern and topic cannot be set at the same time.

properties.bootstrap.servers
  Mandatory: Yes
  Type: String
  Description: List of Kafka brokers, separated by commas (,).

properties.group.id
  Mandatory: Yes when Kafka functions as a source table
  Type: String
  Description: Kafka consumer group ID.

format
  Mandatory: Yes
  Type: String
  Description: Format used to deserialize and serialize the value part of Kafka messages, for example, csv.

properties.*
  Mandatory: No
  Type: String
  Description: Authentication-related parameters that need to be added in security mode, for example, properties.security.protocol and properties.kerberos.domain.name.
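
The following is a minimal sketch of a source table DDL that uses topic-pattern instead of topic, subscribing to every topic whose name matches a regular expression. The IP address, port, and domain name are the same hypothetical placeholders used in the procedure above; replace them with the values of your own cluster.

    CREATE TABLE KafkaPatternSource (
      `user_id` VARCHAR,
      `user_name` VARCHAR,
      `age` INT
    ) WITH (
      'connector' = 'kafka',
      -- Subscribe to all topics whose names start with test_source, such as test_source and test_source2.
      'topic-pattern' = 'test_source.*',
      'properties.bootstrap.servers' = '192.168.20.10:21007',
      'properties.group.id' = 'testGroup',
      'scan.startup.mode' = 'latest-offset',
      'format' = 'csv',
      'properties.sasl.kerberos.service.name' = 'kafka',
      'properties.security.protocol' = 'SASL_PLAINTEXT',
      'properties.kerberos.domain.name' = 'hadoop.hadoop.com'
    );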