
Interconnecting FlinkServer with ClickHouse

Scenario

Flink interconnects with the ClickHouseBalancer instance of ClickHouse to read and write data, which prevents ClickHouse traffic from being unevenly distributed.

When "FlinkSQL" is displayed in the command output on the FlinkServer web UI, the password field in the SQL statement is left blank. Before you submit a job, manually enter the password.

Prerequisites

  • Services such as ClickHouse, HDFS, Yarn, Flink, and Kafka have been installed in the cluster. A logical cluster in ClickHouse exists and is running properly.
  • The cluster client has been installed in a directory, for example, /opt/client.
  • Kerberos authentication (security mode) has been enabled for the cluster. A user who has permission to create ClickHouse data tables, create FlinkServer jobs, and perform Kafka operations has been created on FusionInsight Manager.

Mapping Between Flink SQL and ClickHouse Data Types

Flink SQL Data Type    ClickHouse Data Type
-------------------    --------------------
BOOLEAN                UInt8
TINYINT                Int8
SMALLINT               Int16
INTEGER                Int32
BIGINT                 Int64
FLOAT                  Float32
DOUBLE                 Float64
CHAR                   String
VARCHAR                String
VARBINARY              FixedString
DATE                   Date
TIMESTAMP              DateTime
DECIMAL                Decimal
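
For example, a Flink SQL DECIMAL(9,2) column pairs with a ClickHouse Decimal32(2) column, and TIMESTAMP pairs with DateTime. A minimal sketch of the pairing (the table and column names here are illustrative only; the Flink 'print' connector merely keeps the sketch self-contained):

    -- Flink SQL side (hypothetical table; 'print' stands in for a real sink)
    CREATE TABLE type_demo (`price` DECIMAL(9,2), `ts` TIMESTAMP) WITH ('connector' = 'print');

    -- Matching ClickHouse side
    CREATE TABLE type_demo_ck (`price` Decimal32(2), `ts` DateTime) ENGINE = MergeTree ORDER BY ts;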

Procedure

  1. Log in to the node where the client is installed as user root.
  2. Run the following command to go to the client installation directory:

    cd /opt/client

  3. Run the following command to configure environment variables:

    source bigdata_env

  4. If Kerberos authentication is enabled for the current cluster (the cluster is in security mode), run the following command to authenticate the user. The user must have the permission to create ClickHouse tables. If Kerberos authentication is disabled for the current cluster (the cluster is in normal mode), skip this step.

    kinit Component service user

    Example: kinit clickhouseuser

  5. Run the following commands to connect to the ClickHouse server:

    • Clusters with Kerberos authentication disabled (normal mode)

      clickhouse client --host IP address of the ClickHouseServer instance to be connected --user Username --password Password --port ClickHouse port number --multiline

      Commands carrying authentication passwords pose security risks. Disable historical command recording before running such commands to prevent information leakage.

    • Clusters with Kerberos authentication enabled (security mode)

      clickhouse client --host IP address of the ClickHouseServer instance to be connected --port ClickHouse port number --secure --multiline

    For more operations on the ClickHouse client, see Using ClickHouse from Scratch.
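
    For example, in security mode the connection command might look like the following (the IP address and port number here are placeholders; use the values from your own cluster):

    clickhouse client --host 192.168.67.30 --port 9440 --secure --multiline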

  6. Run commands to create a replication table and a distributed table.

    1. For example, if the name of the replication table is default.test1 and the name of the connected ClickHouse logical cluster is default_cluster, run the following commands:

      CREATE TABLE default.test1 ON CLUSTER default_cluster
      (
      `pid` Int8,
      `uid` UInt8,
      `Int_16` Int16,
      `Int_32` Int32,
      `Int_64` Int64,
      `String_x` String,
      `String_y` String,
      `float_32` Float32,
      `float_64` Float64,
      `Decimal_x` Decimal32(2),
      `Date_x` Date,
      `DateTime_x` DateTime
      )
      ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{shard}/{Database name}/test1', '{replica}')
      PARTITION BY pid
      ORDER BY (pid, DateTime_x);

    2. Create a distributed table test1_all.

      CREATE TABLE test1_all ON CLUSTER default_cluster
      (
      `pid` Int8,
      `uid` UInt8,
      `Int_16` Int16,
      `Int_32` Int32,
      `Int_64` Int64,
      `String_x` String,
      `String_y` String,
      `float_32` Float32,
      `float_64` Float64,
      `Decimal_x` Decimal32(2),
      `Date_x` Date,
      `DateTime_x` DateTime
      )
      ENGINE = Distributed(default_cluster, default, test1, rand());
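
    After both statements succeed, you can verify that the tables were created across the logical cluster before moving on. A quick check (the cluster and table names match those used above):

    SHOW CREATE TABLE test1_all;
    SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'default_cluster';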

  7. Log in to FusionInsight Manager as a user who has the FlinkServer operation permission. Choose Cluster > Services > Flink. On the page that is displayed, click the link next to Flink WebUI to access the FlinkServer web UI.

    For details about FlinkServer permissions, see Flink Web UI Permission Management.

  8. On the Jobs tab page, click Create to create a Flink SQL job and submit it.

    Set Type to Stream, configure parameters on the job development page by referring to the following example statements, and submit the job. In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.

    For details about FlinkServer job parameters, see Creating a Job.

    • If Kerberos authentication has been enabled for the current MRS cluster (the cluster is in security mode), example statements are as follows:
      create table kafkasource(
      `pid` TINYINT,
      `uid` BOOLEAN,
      `Int_16` SMALLINT,
      `Int_32` INTEGER,
      `Int_64` BIGINT,
      `String_x` CHAR,
      `String_y` VARCHAR(10),
      `float_32` FLOAT,
      `float_64` DOUBLE,
      `Decimal_x` DECIMAL(9,2),
      `Date_x` DATE,
      `DateTime_x` TIMESTAMP
      ) with(
        'connector' = 'kafka',
        'topic' = 'input',
        'properties.bootstrap.servers' = 'IP address 1 of the Kafka Broker instance service:Kafka port number,IP address 2 of the Kafka Broker instance service:Kafka port number,IP address 3 of the Kafka Broker instance service:Kafka port number',
        'properties.group.id' = 'group1',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json',
        'properties.sasl.kerberos.service.name' = 'kafka',
        'properties.security.protocol' = 'SASL_PLAINTEXT',
        'properties.kerberos.domain.name' = 'hadoop.System domain name'
      );
      CREATE TABLE cksink (
      `pid` TINYINT,
      `uid` BOOLEAN,
      `Int_16` SMALLINT,
      `Int_32` INTEGER,
      `Int_64` BIGINT,
      `String_x` CHAR,
      `String_y` VARCHAR(10),
      `float_32` FLOAT,
      `float_64` DOUBLE,
      `Decimal_x` DECIMAL(9,2),
      `Date_x` DATE,
      `DateTime_x` TIMESTAMP
      ) WITH (
      'connector' = 'jdbc',
      'url' = 'jdbc:clickhouse://IP address 1 of the ClickHouseBalancer instance service:ClickHouseBalancer port number,IP address 2 of the ClickHouseBalancer instance service:ClickHouseBalancer port number/default?ssl=true&sslmode=none',
      'username' = 'ClickHouse user',
      'password' = 'ClickHouse user password',
      'table-name' = 'test1_all',
      'driver' = 'com.clickhouse.jdbc.ClickHouseDriver',
      'sink.buffer-flush.max-rows' = '0',
      'sink.buffer-flush.interval' = '60s'
      );
      INSERT INTO cksink SELECT * FROM kafkasource;
    • If Kerberos authentication is disabled for the current MRS cluster (the cluster is in normal mode), example statements are as follows:
      create table kafkasource(
      `pid` TINYINT,
      `uid` BOOLEAN,
      `Int_16` SMALLINT,
      `Int_32` INTEGER,
      `Int_64` BIGINT,
      `String_x` CHAR,
      `String_y` VARCHAR(10),
      `float_32` FLOAT,
      `float_64` DOUBLE,
      `Decimal_x` DECIMAL(9,2),
      `Date_x` DATE,
      `DateTime_x` TIMESTAMP
      ) with(
        'connector' = 'kafka',
        'topic' = 'kinput',
        'properties.bootstrap.servers' = 'IP address 1 of the Kafka Broker instance service:Kafka port number,IP address 2 of the Kafka Broker instance service:Kafka port number,IP address 3 of the Kafka Broker instance service:Kafka port number',
        'properties.group.id' = 'kafka_test',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
      );
      CREATE TABLE cksink (
      `pid` TINYINT,
      `uid` BOOLEAN,
      `Int_16` SMALLINT,
      `Int_32` INTEGER,
      `Int_64` BIGINT,
      `String_x` CHAR,
      `String_y` VARCHAR(10),
      `float_32` FLOAT,
      `float_64` DOUBLE,
      `Decimal_x` DECIMAL(9,2),
      `Date_x` DATE,
      `DateTime_x` TIMESTAMP
      ) WITH (
      'connector' = 'jdbc',
      'url' = 'jdbc:clickhouse://IP address 1 of the ClickHouseBalancer instance service:ClickHouseBalancer port number,IP address 2 of the ClickHouseBalancer instance service:ClickHouseBalancer port number/default',
      'table-name' = 'test1_all',
      'driver' = 'com.clickhouse.jdbc.ClickHouseDriver',
      'sink.buffer-flush.max-rows' = '0',
      'sink.buffer-flush.interval' = '60s'
      );
      INSERT INTO cksink SELECT * FROM kafkasource;
    • Obtain the IP address and port number of the Kafka Broker instance as follows:
      • To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instance, and query the instance IP address on the instance list page.
      • If Kerberos authentication is enabled for the cluster (the cluster is in security mode), the Broker port number is the value of sasl.port. The default value is 21007.
      • If Kerberos authentication is disabled for the cluster (the cluster is in normal mode), the Broker port number is the value of port. The default value is 9092. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:

        Log in to FusionInsight Manager and choose Cluster > Services > Kafka. Click Configurations then All Configurations. On the page that is displayed, search for allow.everyone.if.no.acl.found, set it to true, and click Save.

    • System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
    • Obtain the IP address and port number of the ClickHouseBalancer instance as follows:

      To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > ClickHouse, click Instance, and query the instance IP address on the instance list page.

      Select the ClickHouseBalancer port number based on the interconnected ClickHouse logical cluster. Log in to FusionInsight Manager, choose Cluster > Services > ClickHouse, click Logic Cluster, and view HTTP Balancer Port.

      Configure the IP addresses of multiple ClickHouseBalancer instances in the JDBC URL to avoid a single point of failure (SPOF).

    • If Kerberos authentication is enabled for the cluster (the cluster is in security mode), set the username and password parameters in the created cksink table to a user who has the operation permission on the ClickHouse table and that user's password. For details, see ClickHouse User and Permission Management.
    • DELETE messages generated during Flink computing are filtered out when data is written to ClickHouse.
    • Parameters for batch write: Flink stores data in the memory and then flushes the data to the database table when the trigger condition is met. The configurations are as follows:

      sink.buffer-flush.max-rows: maximum number of rows buffered before they are flushed to ClickHouse. The default value is 100.

      sink.buffer-flush.interval: maximum interval between flushes. The default value is 1s.

      If either of the two conditions is met, a sink operation is triggered. That is, data will be flushed to the database table.

      • Scenario 1: sink every 60s

        'sink.buffer-flush.max-rows' = '0',

        'sink.buffer-flush.interval' = '60s'

      • Scenario 2: sink every 100 rows

        'sink.buffer-flush.max-rows' = '100',

        'sink.buffer-flush.interval' = '0s'

      • Scenario 3: no flush is triggered (both triggers disabled)

        'sink.buffer-flush.max-rows' = '0',

        'sink.buffer-flush.interval' = '0s'

  9. View the Flink job management page and wait until the job status changes to Running.
  10. Use the Kafka client to write data to the Kafka topic.

    cd /opt/client

    source bigdata_env

    kinit Kafka user (Skip this command if Kerberos authentication is disabled for the cluster, that is, the cluster is in normal mode.)

    cd Kafka/kafka/bin

    sh kafka-console-producer.sh --broker-list Service IP address of the Kafka broker instance:Broker port number --topic Topic name --producer.config Client installation directory/Kafka/kafka/config/producer.properties

    In this example, the Kafka topic name is kinput. Run the following command:

    sh kafka-console-producer.sh --broker-list 192.168.67.136:21007 --topic kinput --producer.config /opt/client/Kafka/kafka/config/producer.properties

    Add the following content to the topic:
    {"pid": "3","uid":false,"Int_16": "6533","Int_32": "429496294","Int_64": "1844674407370955614","String_x": "abc1","String_y": "abc1defghi","float_32": "0.1234","float_64": "95.1","Decimal_x": "0.451236414","Date_x": "2021-05-29","DateTime_x": "2021-05-21 10:05:10"}
    {"pid": "4","uid":false,"Int_16": "6533","Int_32": "429496294","Int_64": "1844674407370955614","String_x": "abc1","String_y": "abc1defghi","float_32": "0.1234","float_64": "95.1","Decimal_x": "0.4512314","Date_x": "2021-05-29","DateTime_x": "2021-05-21 10:05:10"}

    Press Enter to send the message.

    For more operations on the Kafka client, see Managing Messages in Kafka Topics.
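
    To confirm that the messages reached the topic, you can consume them from the beginning in another session (an example reusing the Broker address and client directory from the producer command above; consumer.properties ships with the Kafka client):

    sh kafka-console-consumer.sh --topic kinput --from-beginning --bootstrap-server 192.168.67.136:21007 --consumer.config /opt/client/Kafka/kafka/config/consumer.properties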

  11. Connect to ClickHouse through the client by referring to Step 5, and run a query to check whether the data was written to the ClickHouse table.

    In this example, the ClickHouse table is test1_all.

    select * from test1_all;
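
    If the table already holds many rows, a row count is a quicker check that the new records arrived, for example:

    select count() from test1_all;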