Creating a FlinkServer Job to Interconnect with a ClickHouse Table (JDBC Connector)
This section applies to MRS 3.1.2 or later clusters.
Scenarios
Flink interconnects with the ClickHouseBalancer instance of ClickHouse to read and write data, which prevents uneven distribution of ClickHouse traffic. The following table lists the mapping between Flink SQL and ClickHouse data types.

To meet security requirements, the password field in Flink SQL statements is left blank when displayed in the command output on the FlinkServer web UI in MRS 3.2.0 or later clusters. Before submitting a job, you must manually re-enter the password.
| Flink SQL Data Type | ClickHouse Data Type |
| --- | --- |
| BOOLEAN | UInt8 |
| TINYINT | Int8 |
| SMALLINT | Int16 |
| INTEGER | Int32 |
| BIGINT | Int64 |
| FLOAT | Float32 |
| DOUBLE | Float64 |
| CHAR | String |
| VARCHAR | String |
| VARBINARY | FixedString |
| DATE | Date |
| TIMESTAMP | DateTime |
| DECIMAL | Decimal |
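For example, under this mapping a Flink DECIMAL(9,2) column pairs with a ClickHouse Decimal32(2) column (Decimal32 provides 9 digits of precision), and a Flink TIMESTAMP column pairs with ClickHouse DateTime. The following is a minimal sketch of such a pairing; the table names flink_demo and ch_demo and the connection options are hypothetical, and the full DDL later in this section follows the same pairings.

-- Flink SQL side: DECIMAL(9,2) and TIMESTAMP map to the ClickHouse types below
CREATE TABLE flink_demo (
    `Decimal_x` DECIMAL(9,2),
    `DateTime_x` TIMESTAMP
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:clickhouse://<ClickHouseBalancer IP>:<port>/default',
    'table-name' = 'ch_demo',
    'driver' = 'com.clickhouse.jdbc.ClickHouseDriver'
);

-- ClickHouse side: the matching column types
CREATE TABLE ch_demo (
    `Decimal_x` Decimal32(2),
    `DateTime_x` DateTime
) ENGINE = MergeTree ORDER BY DateTime_x;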
Prerequisites
- Services such as ClickHouse, HDFS, YARN, Flink, and Kafka have been installed in the cluster.
- The client has been installed, for example, in /opt/client.
Creating a Job
- Log in to the node where the client is installed as user root.
- Run the following command to go to the client installation directory:
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
- If Kerberos authentication is enabled for the current cluster, run the following command to authenticate the current user. The current user must have the permission to create ClickHouse tables. If Kerberos authentication is disabled for the current cluster, skip this step:
kinit Component service user
For example: kinit clickhouseuser
- Connect to the ClickHouse client. For details, see ClickHouse Client Practices. Commands containing authentication passwords pose security risks. Disable the command recording function (history) before running such commands to prevent information leakage (in bash, for example, run set +o history first and restore recording with set -o history afterwards).
- Normal mode:
clickhouse client --host IP address of the ClickHouse instance --user Login username --password 'Password' --port ClickHouse port number --multiline
- Security mode:
clickhouse client --host IP address of the ClickHouse instance --user Login username --password 'Password' --port ClickHouse port number --secure --multiline
- Run the following statements to create a replication table and a distributed table.
- Create a replication table default.test1.
CREATE TABLE default.test1 ON CLUSTER default_cluster
(
    `pid` Int8,
    `uid` UInt8,
    `Int_16` Int16,
    `Int_32` Int32,
    `Int_64` Int64,
    `String_x` String,
    `String_y` String,
    `float_32` Float32,
    `float_64` Float64,
    `Decimal_x` Decimal32(2),
    `Date_x` Date,
    `DateTime_x` DateTime
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{shard}/test1', '{replica}')
PARTITION BY pid
ORDER BY (pid, DateTime_x);
- Create a distributed table test1_all.
CREATE TABLE test1_all ON CLUSTER default_cluster
(
    `pid` Int8,
    `uid` UInt8,
    `Int_16` Int16,
    `Int_32` Int32,
    `Int_64` Int64,
    `String_x` String,
    `String_y` String,
    `float_32` Float32,
    `float_64` Float64,
    `Decimal_x` Decimal32(2),
    `Date_x` Date,
    `DateTime_x` DateTime
)
ENGINE = Distributed(default_cluster, default, test1, rand());
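If the ON CLUSTER statements fail, you can check that the default_cluster logical cluster is defined and visible from your session. The following is a quick sanity check using ClickHouse's built-in system.clusters table:

-- Lists the shards and replicas that default_cluster resolves to
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster = 'default_cluster';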
- Log in to FusionInsight Manager and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
- Create a Flink SQL job and set Task Type to Stream job. For details, see Creating a Job. On the job development page, configure the job parameters as follows and start the job. In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.
- If the current MRS cluster is in security mode, perform the following operations:
CREATE TABLE kafkasource (
    `pid` TINYINT,
    `uid` BOOLEAN,
    `Int_16` SMALLINT,
    `Int_32` INTEGER,
    `Int_64` BIGINT,
    `String_x` CHAR,
    `String_y` VARCHAR(10),
    `float_32` FLOAT,
    `float_64` DOUBLE,
    `Decimal_x` DECIMAL(9,2),
    `Date_x` DATE,
    `DateTime_x` TIMESTAMP
) WITH (
    'connector' = 'kafka',
    'topic' = 'input',
    'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
    'properties.group.id' = 'group1',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json',
    'properties.sasl.kerberos.service.name' = 'kafka',
    'properties.security.protocol' = 'SASL_PLAINTEXT',
    'properties.kerberos.domain.name' = 'hadoop.System domain name'
);

CREATE TABLE cksink (
    `pid` TINYINT,
    `uid` BOOLEAN,
    `Int_16` SMALLINT,
    `Int_32` INTEGER,
    `Int_64` BIGINT,
    `String_x` CHAR,
    `String_y` VARCHAR(10),
    `float_32` FLOAT,
    `float_64` DOUBLE,
    `Decimal_x` DECIMAL(9,2),
    `Date_x` DATE,
    `DateTime_x` TIMESTAMP
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:clickhouse://IP address 1 of the ClickHouseBalancer instance:ClickHouseBalancer port number,IP address 2 of the ClickHouseBalancer instance:ClickHouseBalancer port number/default?ssl=true&sslmode=none',
    'username' = 'ClickHouse user. For details, see the note below.',
    'password' = 'ClickHouse user password. For details, see the note below.',
    'table-name' = 'test1_all',
    'driver' = 'com.clickhouse.jdbc.ClickHouseDriver',
    'sink.buffer-flush.max-rows' = '0',
    'sink.buffer-flush.interval' = '60s'
);

INSERT INTO cksink SELECT * FROM kafkasource;
- If the current MRS cluster is in normal mode, perform the following operations:
CREATE TABLE kafkasource (
    `pid` TINYINT,
    `uid` BOOLEAN,
    `Int_16` SMALLINT,
    `Int_32` INTEGER,
    `Int_64` BIGINT,
    `String_x` CHAR,
    `String_y` VARCHAR(10),
    `float_32` FLOAT,
    `float_64` DOUBLE,
    `Decimal_x` DECIMAL(9,2),
    `Date_x` DATE,
    `DateTime_x` TIMESTAMP
) WITH (
    'connector' = 'kafka',
    'topic' = 'kinput',
    'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
    'properties.group.id' = 'kafka_test',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json'
);

CREATE TABLE cksink (
    `pid` TINYINT,
    `uid` BOOLEAN,
    `Int_16` SMALLINT,
    `Int_32` INTEGER,
    `Int_64` BIGINT,
    `String_x` CHAR,
    `String_y` VARCHAR(10),
    `float_32` FLOAT,
    `float_64` DOUBLE,
    `Decimal_x` DECIMAL(9,2),
    `Date_x` DATE,
    `DateTime_x` TIMESTAMP
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:clickhouse://IP address 1 of the ClickHouseBalancer instance:ClickHouseBalancer port number,IP address 2 of the ClickHouseBalancer instance:ClickHouseBalancer port number/default',
    'table-name' = 'test1_all',
    'username' = 'ClickHouse user. For details, see the note below.',
    'password' = 'ClickHouse user password. For details, see the note below.',
    'driver' = 'com.clickhouse.jdbc.ClickHouseDriver',
    'sink.buffer-flush.max-rows' = '0',
    'sink.buffer-flush.interval' = '60s'
);

INSERT INTO cksink SELECT * FROM kafkasource;
- The user in the cksink table must have related permissions on the ClickHouse tables. For details, see Creating a User with ClickHouse Permissions.
- The Kafka Broker instance IP address and Kafka port number are as follows:
- To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instances, and query the instance IP address on the instance list page.
- If Authentication Mode of the cluster is Security Mode, the Kafka port number is the value of sasl.port. The default value is 21007.
- If Authentication Mode of the cluster is Normal Mode, the Kafka port number is the value of port. The default value is 9092.
If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:
Log in to FusionInsight Manager and choose Cluster > Services > Kafka. On the page that is displayed, click the Configurations tab then the All Configurations sub-tab. On the displayed page, search for allow.everyone.if.no.acl.found, set it to true, and click Save.
- System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
- The IP address and port number of the ClickHouseBalancer instance are as follows:
To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > ClickHouse, click Instances, and query the instance IP address on the instance list page.
Select the ClickHouseBalancer port number based on the interconnected ClickHouse logical cluster: log in to FusionInsight Manager, choose Cluster > Services > ClickHouse, click Logic Cluster, and view HTTP Balancer Port. You can configure the IP addresses of multiple ClickHouseBalancer instances to avoid a single point of failure (SPOF).
- DELETE messages generated during Flink computing are filtered out when data is written to ClickHouse.
- Parameters for batch write: Flink buffers data in memory and flushes it to the database table when a trigger condition is met. The configurations are as follows:
sink.buffer-flush.max-rows: maximum number of rows buffered before they are flushed to ClickHouse. The default value is 100.
sink.buffer-flush.interval: interval for batch writes. The default value is 1s.
A flush is triggered as soon as either condition is met, and the buffered data is written to the database table.
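In the cksink definitions above, sink.buffer-flush.max-rows is set to '0', which (per the Flink JDBC connector option semantics) disables the row-count trigger, so flushes are driven by the 60s interval alone. As a sketch of the alternative, a sink that flushes on whichever comes first of 500 buffered rows or 10 seconds would set the two options as follows; the connection values are placeholders, and the column list (abbreviated here) must match cksink:

CREATE TABLE cksink_tuned (
    `pid` TINYINT,
    `DateTime_x` TIMESTAMP
    -- remaining columns as in cksink above
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:clickhouse://<ClickHouseBalancer IP 1>:<port>,<ClickHouseBalancer IP 2>:<port>/default',
    'table-name' = 'test1_all',
    'driver' = 'com.clickhouse.jdbc.ClickHouseDriver',
    -- flush when 500 rows are buffered OR 10 seconds elapse, whichever comes first
    'sink.buffer-flush.max-rows' = '500',
    'sink.buffer-flush.interval' = '10s'
);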
- On the job management page, check whether the job status is Running.
- Execute the following script to write data to Kafka. For details, see Managing Messages in Kafka Topics.
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties
In this example, the topic name is kinput.
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic kinput --producer.config /opt/client/Kafka/kafka/config/producer.properties
Enter the message content:
{"pid": "3","uid":false,"Int_16": "6533","Int_32": "429496294","Int_64": "1844674407370955614","String_x": "abc1","String_y": "abc1defghi","float_32": "0.1234","float_64": "95.1","Decimal_x": "0.451236414","Date_x": "2021-05-29","DateTime_x": "2021-05-21 10:05:10"}
{"pid": "4","uid":false,"Int_16": "6533","Int_32": "429496294","Int_64": "1844674407370955614","String_x": "abc1","String_y": "abc1defghi","float_32": "0.1234","float_64": "95.1","Decimal_x": "0.4512314","Date_x": "2021-05-29","DateTime_x": "2021-05-21 10:05:10"}
Press Enter to send the message.
- Connect to ClickHouse and query the table data.
clickhouse client --host IP address of the ClickHouse instance --user Login username --password 'Password' --port ClickHouse port number --secure --multiline
Run the following command to check whether data is written to a specified ClickHouse table, for example, test1_all.
select * from test1_all;
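As a simple additional check, you can verify that the row count matches the number of Kafka messages sent (two in this example):

select count() from test1_all;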