Interconnecting FlinkServer with Hudi
Scenario
This section describes how to interconnect FlinkServer with Hudi through Flink SQL jobs.
Prerequisites
- The HDFS, Yarn, Hive, Spark, Flink, and Kafka services have been installed in a cluster.
- The client that contains the Flink and Kafka services has been installed in a directory, for example, /opt/client.
- Flink 1.12.2 or later and Hudi 0.9.0 or later are required.
- You have created a user assigned with the FlinkServer Admin Privilege (for example, flink_admin) for accessing the Flink web UI by referring to Creating a FlinkServer Role. The user has been added to the hadoop, hive, and kafkaadmin user groups and granted the Manager_administrator role.
Flink Support for Read and Write Operations on Hudi Tables
Table 1 lists the read and write operations supported by Flink on Hudi COW and MOR tables.
Procedure
- Log in to Manager as user flink_admin and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
- Create a Flink SQL job by referring to Creating a Job. On the job development page, configure the job as follows and enter the SQL statement. After the SQL statement passes the verification, start the job. The following SQL examples are created as three separate jobs and run in sequence.
In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode. This operation is required for all jobs.
Enable the fault recovery policy to improve job reliability. For example, set Failure Recovery Policy to fixed-delay, Retry Times to 3, and Retry Interval to 30. You can set the latter two parameters based on service requirements.
Wait until the job is started and its status is Running, choose More > Job Monitoring to go to the native UI of Flink, and view the job status.
- CheckPoint must be enabled on the Flink web UI because data is written to a Hudi table only when a Flink SQL job triggers a checkpoint. Adjust the checkpoint interval based on service requirements; a relatively large interval is recommended.
- If the checkpoint interval is too short, job exceptions may occur because data is not updated in time. Set the checkpoint interval at the minute level.
- Asynchronous compaction is required when a Flink SQL job writes to an MOR table. For details about the parameters that control the compaction interval, see the Hudi official documentation at https://hudi.apache.org/docs/configurations.html.
- By default, Flink stores the indexes of a Hudi table in its state backend (Flink state index) when writing data. To use bucket indexes instead, add the following parameters to the Hudi table (see the sketch after this note):
'index.type' = 'BUCKET',
'hoodie.bucket.index.num.buckets' = 'Number of buckets in each partition of a Hudi table',
'hoodie.bucket.index.hash.field' = 'recordkey.field'
- hoodie.bucket.index.num.buckets: Number of buckets in each partition of a Hudi table. Data in each partition is distributed across the buckets by hash. This parameter cannot be modified after it is set during table creation or the first data write; otherwise, an exception occurs during data updates.
- hoodie.bucket.index.hash.field: Field used to calculate the hash value for bucketing. It must be a subset of the primary key and defaults to the primary key of the Hudi table. If this parameter is left blank, the value of recordkey.field is used.
- For a Hudi table, bucket indexes of Flink and Spark can be saved to the backend together.
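The following is a minimal sketch, for illustration only, of a Hudi sink table that uses bucket indexes instead of the default Flink state index. The table name, HDFS path, and bucket count (8) are assumed placeholder values; adjust them to your environment.
CREATE TABLE bucket_index_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/bucket_index_mor',  -- placeholder path
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'index.type' = 'BUCKET',                                -- use bucket indexes instead of the Flink state index
  'hoodie.bucket.index.num.buckets' = '8',                -- example bucket count per partition; cannot be changed after the first write
  'hoodie.bucket.index.hash.field' = 'uuid'               -- must be a subset of the primary key; defaults to the record key field
);
Because the bucket count cannot be changed later, size it according to the expected data volume per partition before the first write.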
- Job 1: This Flink SQL job writes data to an MOR table in streams.
CREATE TABLE stream_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'write.tasks' = '4'
);
CREATE TABLE kafka(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) WITH (
  'connector' = 'kafka',
  'topic' = 'writehudi',
  'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
  'properties.group.id' = 'testGroup1',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json',
  'properties.sasl.kerberos.service.name' = 'kafka', --This parameter is not required for clusters in normal mode. Delete the comma (,) in the previous line.
  'properties.security.protocol' = 'SASL_PLAINTEXT', --This parameter is not required for clusters in normal mode.
  'properties.kerberos.domain.name' = 'hadoop.System domain name' --This parameter is not required for clusters in normal mode.
);
insert into stream_mor select * from kafka;
- Job 2: This Flink SQL job writes data to a COW table in streams.
CREATE TABLE stream_write_cow(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_cow',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'write.tasks' = '4'
);
CREATE TABLE kafka(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) WITH (
  'connector' = 'kafka',
  'topic' = 'writehudi',
  'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
  'properties.group.id' = 'testGroup1',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json',
  'properties.sasl.kerberos.service.name' = 'kafka', --This parameter is not required for clusters in normal mode. Delete the comma (,) in the previous line.
  'properties.security.protocol' = 'SASL_PLAINTEXT', --This parameter is not required for clusters in normal mode.
  'properties.kerberos.domain.name' = 'hadoop.System domain name' --This parameter is not required for clusters in normal mode.
);
insert into stream_write_cow select * from kafka;
- Job 3: This Flink SQL job reads MOR and COW tables in streams, merges data, and outputs the merged data to Kafka. Verify the SQL statement of job 3 and start it after job 1 and job 2 are started and their status is running. Otherwise, an error message may be displayed during SQL verification, indicating that the Hudi table directory cannot be found.
CREATE TABLE stream_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'read.tasks' = '4',
  'read.streaming.enabled' = 'true',
  'read.streaming.check-interval' = '5',
  'read.streaming.start-commit' = 'earliest'
);
CREATE TABLE stream_write_cow(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_cow',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'read.tasks' = '4',
  'read.streaming.enabled' = 'true',
  'read.streaming.check-interval' = '5',
  'read.streaming.start-commit' = 'earliest'
);
CREATE TABLE kafka(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) WITH (
  'connector' = 'kafka',
  'topic' = 'readhudi',
  'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
  'properties.group.id' = 'testGroup1',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json',
  'properties.sasl.kerberos.service.name' = 'kafka', --This parameter is not required for clusters in normal mode. Delete the comma (,) in the previous line.
  'properties.security.protocol' = 'SASL_PLAINTEXT', --This parameter is not required for clusters in normal mode.
  'properties.kerberos.domain.name' = 'hadoop.System domain name' --This parameter is not required for clusters in normal mode.
);
insert into kafka select * from stream_mor union all select * from stream_write_cow;
- The IP address and port number of the Kafka broker instance can be obtained as follows:
- To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instance, and query the instance IP address on the instance list page.
- If Kerberos authentication is enabled for the cluster (the cluster is in security mode), the Broker port number is the value of sasl.port. The default value is 21007.
- If Kerberos authentication is disabled for the cluster (the cluster is in normal mode), the broker port number is the value of port. The default value is 9092. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:
Log in to FusionInsight Manager and choose Cluster > Services > Kafka. Click Configurations then All Configurations. On the page that is displayed, search for allow.everyone.if.no.acl.found, set it to true, and click Save.
- System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
- Execute the following script to write data to Kafka. For details, see Managing Messages in Kafka Topics.
sh kafka-console-producer.sh --broker-list IP address of the node where Kafka instances reside:Kafka port number --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties
In this example, the topic name is writehudi.
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is located:Kafka port number --topic writehudi --producer.config /opt/client/Kafka/kafka/config/producer.properties
Enter the message content.
{"uuid": "1","name":"a01","age":10,"ts":10,"p":"1"}
{"uuid": "2","name":"a02","age":20,"ts":20,"p":"2"}
Press Enter to send the message.
- Consume data from the Kafka topic to view the results of the Flink streaming reads from the Hudi tables.
sh kafka-console-consumer.sh --bootstrap-server IP address of the node where Kafka instances reside:Kafka port number --topic Topic name --consumer.config Client directory/Kafka/kafka/config/consumer.properties --from-beginning
In this example, the topic name is readhudi.
sh kafka-console-consumer.sh --bootstrap-server IP address of the Kafka role instance:Kafka port --topic readhudi --consumer.config /opt/client/Kafka/kafka/config/consumer.properties --from-beginning
The read result is as follows (the sequence is not fixed):
{"uuid": "1","name":"a01","age":10,"ts":10,"p":"1"} {"uuid": "2","name":"a02","age":20,"ts":20,"p":"2"} {"uuid": "1","name":"a01","age":10,"ts":10,"p":"1"} {"uuid": "2","name":"a02","age":20,"ts":20,"p":"2"}
WITH Parameters
| Mode | Parameter | Mandatory | Default Value | Description |
|---|---|---|---|---|
| Read | read.tasks | No | 4 | Parallelism of the tasks for reading the Hudi table. |
| | read.streaming.enabled | No | false | Whether to enable stream read. |
| | read.streaming.start-commit | No | By default, data is read from the latest commit. | Start position (closed interval) of incremental stream and batch consumption, in yyyyMMddHHmmss format. |
| | read.end-commit | No | By default, data is read up to the latest commit. | End position (closed interval) of incremental stream and batch consumption, in yyyyMMddHHmmss format. |
| Write | write.tasks | No | 4 | Parallelism of the tasks for writing data to the Hudi table. |
| | index.bootstrap.enabled | No | false | Whether to enable index loading. If enabled, the latest data in the stored table is loaded into the state at one time. If incremental data needs to be synchronized to full data and offline Hudi tables are available, you can enable index loading to write data in real time and ensure that data is unique. |
| | write.index_bootstrap.tasks | No | 4 | If indexes are loaded slowly when a job is started, increase the value of this parameter to improve efficiency. Note that checkpoints are blocked during the bootstrap phase. |
| | compaction.async.enabled | No | true | Whether to enable online compaction. |
| | compaction.schedule.enabled | No | true | Whether to generate a compaction plan periodically. You are advised to enable this function even if online compaction is disabled. |
| | compaction.tasks | No | 10 | Parallelism of the tasks for compacting data in the Hudi table. |
| | index.state.ttl | No | 7D | Duration for which indexes are stored. The default value is 7 days. If the value is less than 0, indexes are stored permanently. Indexes are the core data structure for determining whether data is duplicate. For long-time updates, for example, updating data generated one month ago, increase the value of this parameter. |
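As an illustration of how the read parameters above are combined, the following sketch defines a Hudi source table that continuously streams commits starting from a specific commit time. The path reuses the stream_mor table from the jobs above, and the start commit 20240101000000 is an assumed placeholder value.
CREATE TABLE hudi_read_demo(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'read.tasks' = '4',                                -- read parallelism
  'read.streaming.enabled' = 'true',                 -- stream read instead of a one-off batch read
  'read.streaming.check-interval' = '5',             -- interval, in seconds, for checking new commits
  'read.streaming.start-commit' = '20240101000000'   -- placeholder start commit in yyyyMMddHHmmss format
);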
Synchronizing Metadata from Flink On Hudi to Hive
After this feature is enabled, Flink automatically creates a Hudi table on Hive and adds partitions to it when writing data. Services such as SparkSQL and Hive can then read data from the Hudi table.
The metadata can be synchronized with either of the following methods. The JDBC mode is used as an example in the following steps.
- Synchronizing metadata to Hive in JDBC mode
CREATE TABLE stream_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hive_sync.enable' = 'true',
  'hive_sync.table' = 'Name of the table to be synchronized to Hive',
  'hive_sync.db' = 'Name of the database to be synchronized to Hive',
  'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
  'hive_sync.jdbc_url' = 'Value of CLIENT_HIVE_URI in the component_env file on the Hive client'
);
- hive_sync.jdbc_url: If the value of CLIENT_HIVE_URI contains backslashes (\), remove them.
- To use Hive-style partitioning, add the following parameters (a combined sketch follows these notes):
- 'hoodie.datasource.write.hive_style_partitioning' = 'true'
- 'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
- When Flink on Hudi synchronizes data to Hive, note that Hudi is case sensitive whereas Hive is case insensitive. You are advised not to use uppercase letters in the fields of Hudi tables; otherwise, data may fail to be read or written.
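For reference, the following is a minimal sketch that combines JDBC-mode synchronization with Hive-style partitioning. The hive_sync values remain placeholders to be replaced with the values from your Hive client, as in the example above.
CREATE TABLE stream_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.hive_style_partitioning' = 'true',                                  -- write partition paths as p=<value>
  'hive_sync.enable' = 'true',
  'hive_sync.table' = 'Name of the table to be synchronized to Hive',
  'hive_sync.db' = 'Name of the database to be synchronized to Hive',
  'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor',  -- extractor for Hive-style partitions
  'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
  'hive_sync.jdbc_url' = 'Value of CLIENT_HIVE_URI in the component_env file on the Hive client'
);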
- Synchronizing metadata to Hive in HMS mode
CREATE TABLE stream_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hive_sync.enable' = 'true',
  'hive_sync.table' = 'Name of the table to be synchronized to Hive',
  'hive_sync.db' = 'Name of the database to be synchronized to Hive',
  'hive_sync.mode' = 'hms',
  'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
  'properties.hive.metastore.kerberos.principal' = 'Value of hive.metastore.kerberos.principal in the hive-site.xml file on the Hive client'
);
- Log in to Manager as user flink_admin and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
- Create a Flink SQL job by referring to Creating a Job. On the job development page, configure the job. Enter the SQL statement. After the SQL statement passes the verification, start the job.
In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.
CREATE TABLE stream_mor2(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor2',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'write.tasks' = '4',
  'hive_sync.enable' = 'true',
  'hive_sync.table' = 'Name of the table to be synchronized to Hive, for example, stream_mor2',
  'hive_sync.db' = 'Name of the database to be synchronized to Hive, for example, default',
  'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
  'hive_sync.jdbc_url' = 'Value of CLIENT_HIVE_URI in the component_env file on the Hive client'
);
CREATE TABLE datagen (
  uuid varchar(20),
  name varchar(10),
  age int,
  ts INT,
  p varchar(20)
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '1',
  'fields.p.length' = '1'
);
insert into stream_mor2 select * from datagen;
- Wait for the Flink job to run for a period of time; the random test data generated by the datagen connector is continuously written to the Hudi table. You can click More > Job Monitoring to go to the native Flink UI and view the job status.
- Log in to the node where the client is deployed, load environment variables, run the beeline command to log in to the Hive client, and run SQL statements to check whether the Hudi Sink table is successfully created on Hive and whether data can be read from the table.
cd /opt/hadoopclient
source bigdata_env
beeline
desc formatted default.stream_mor2;
select * from default.stream_mor2 limit 5;
show partitions default.stream_mor2;