Interconnecting FlinkServer with Hive
Scenario
FlinkServer interconnects with Hive through the Hive MetaStore, so the MetaStore function must be enabled for Hive. Hive can be used as a source, sink, or dimension table.
The example SQL statements in this section assume that the Kafka cluster runs in security mode.
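The procedure below demonstrates the sink case. As context for the dimension-table case, a Hive table registered through a catalog is typically referenced in a lookup (temporal) join. The following is a minimal sketch only, not part of the procedure: the stream table orders (which must carry a processing-time attribute), the catalog myhive, and the Hive table dim_user are hypothetical names.

```sql
-- Hedged sketch of Hive as a dimension table (all names are hypothetical).
-- orders is a stream table declared elsewhere with proc_time AS PROCTIME();
-- myhive is a Hive catalog and dim_user an existing Hive table in it.
SELECT o.order_id, o.user_id, d.user_name
FROM orders AS o
JOIN myhive.`default`.dim_user FOR SYSTEM_TIME AS OF o.proc_time AS d
  ON o.user_id = d.user_id;
```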
Prerequisites
- Services such as HDFS, Yarn, Kafka, Flink, and Hive have been installed in the cluster.
- The client that contains the Hive service has been installed, for example, in the /opt/Bigdata/client directory.
- Flink 1.12.2 or later and Hive 3.1.0 or later are supported.
- You have created a user with FlinkServer Admin Privilege, for example, flink_admin, to access the Flink web UI. For details, see Authentication Based on Users and Roles.
- You have obtained the client configuration file and credential of the user for accessing the Flink web UI. For details, see "Note" in Creating a Cluster Connection.
Procedure
The following procedure uses the interconnection of a Kafka mapping table with Hive as an example.
- Log in to the Flink web UI as user flink_admin. For details, see Accessing the Flink Web UI.
- Create a cluster connection, for example, flink_hive.
- Choose System Management > Cluster Connection Management. The Cluster Connection Management page is displayed.
- Click Create Cluster Connection. On the displayed page, enter information by referring to Table 1 and click Test. After the test is successful, click OK.
Table 1 Parameters for creating a cluster connection

| Parameter | Description | Example Value |
| --- | --- | --- |
| Cluster Connection Name | Name of the cluster connection, which can contain a maximum of 100 characters. Only letters, digits, and underscores (_) are allowed. | flink_hive |
| Description | Description of the cluster connection. | - |
| Version | Cluster version to select. | MRS 3 |
| Secure Version | If the cluster is in security mode, select Yes, enter the username, and upload the user credential. Otherwise, select No. | Yes |
| Username | User for accessing services in the cluster. The user must have the minimum permissions required, and the name can contain a maximum of 100 characters; only letters, digits, and underscores (_) are allowed. Available only when Secure Version is set to Yes. | flink_admin |
| Client Profile | Client profile of the cluster, in TAR format. | - |
| User Credential | User authentication credential from FusionInsight Manager, in TAR format. Available only when Secure Version is set to Yes. Files can be uploaded only after the username is entered. | User credential of flink_admin |
- Create a Flink SQL job, for example, flinktest1.
- Click Job Management. The job management page is displayed.
- Click Create Job. On the displayed job creation page, set parameters by referring to Table 2 and click OK. The job development page is displayed.
Table 2 Parameters for creating a job

| Parameter | Description | Example Value |
| --- | --- | --- |
| Type | Job type, which can be Flink SQL or Flink Jar. | Flink SQL |
| Name | Job name, which can contain a maximum of 64 characters. Only letters, digits, and underscores (_) are allowed. | flinktest1 |
| Task Type | Type of the job, which can be a stream job or a batch job. | Stream job |
| Description | Job description, which can contain a maximum of 100 characters. | - |
- On the job development page, enter the following statements and click Check Semantic to verify the input.
```sql
CREATE TABLE test_kafka (
  user_id varchar,
  item_id varchar,
  cat_id varchar,
  zw_test timestamp
) WITH (
  'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
  'format' = 'json',
  'topic' = 'zw_tset_kafka',
  'connector' = 'kafka',
  'scan.startup.mode' = 'latest-offset',
  'properties.sasl.kerberos.service.name' = 'kafka',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.kerberos.domain.name' = 'hadoop.System domain name'
);

CREATE CATALOG myhive WITH (
  'type' = 'hive',
  'hive-version' = '3.1.0',
  'default-database' = 'default',
  'cluster.name' = 'flink_hive'
);

USE CATALOG myhive;
SET table.sql-dialect = hive;

CREATE TABLE user_behavior_hive_tbl_no_partition (
  user_id STRING,
  item_id STRING,
  cat_id STRING,
  ts timestamp
) PARTITIONED BY (dy STRING, ho STRING, mi STRING) STORED AS textfile
TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern' = '$dy $ho:$mi:00',
  'sink.partition-commit.trigger' = 'process-time',
  'sink.partition-commit.delay' = '0S',
  'sink.partition-commit.policy.kind' = 'metastore,success-file'
);

INSERT INTO user_behavior_hive_tbl_no_partition
SELECT
  user_id,
  item_id,
  cat_id,
  zw_test,
  DATE_FORMAT(zw_test, 'yyyy-MM-dd'),
  DATE_FORMAT(zw_test, 'HH'),
  DATE_FORMAT(zw_test, 'mm')
FROM default_catalog.default_database.test_kafka;
```
- The value of 'cluster.name' is the name of the cluster connection created in step 2 (flink_hive in this example).
- System domain name: Log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain. In 'properties.kerberos.domain.name', hadoop.<System domain name> indicates the username, and all letters in the system domain name must be lowercase. An optional catalog check is sketched below.
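Whether the FlinkServer SQL editor executes SHOW statements may vary by version, but where it does (or from a Flink SQL client), the catalog registration can be sanity-checked with statements like the following sketch:

```sql
-- Optional check, assuming the statements above succeeded:
SHOW CATALOGS;    -- should list myhive
USE CATALOG myhive;
SHOW TABLES;      -- should list user_behavior_hive_tbl_no_partition
```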
- After the job is developed, select Enable CheckPoint in Running Parameter and set Time Interval (ms) to 60000.
- Click Submit in the upper left corner to submit the job.
- After the job is successfully executed, choose More > Job Monitoring to view the job running details.
- Execute the following commands to view the topic and write data to Kafka. For details, see Managing Messages in Kafka Topics.
./kafka-topics.sh --list --zookeeper IP address of the ZooKeeper quorumpeer instance:ZooKeeper port number/kafka
sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is located:Kafka port number --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties
For example, if the topic name is zw_tset_kafka, the script is sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is located:Kafka port number --topic zw_tset_kafka --producer.config /opt/Bigdata/client/Kafka/kafka/config/producer.properties.
Enter the message content:
{"user_id": "3","item_id":"333333","cat_id":"cat333","zw_test":"2021-09-08 09:08:01"}
{"user_id": "4","item_id":"444444","cat_id":"cat444","zw_test":"2021-09-08 09:08:01"}
Press Enter to send each message.
- IP address of the ZooKeeper quorumpeer instance
  To obtain the IP addresses of all ZooKeeper quorumpeer instances, log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Instance and view the IP addresses of all hosts where the quorumpeer instances are located.
- Port number of the ZooKeeper client
  Log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Configurations and check the value of clientPort.
- Run the following commands to check whether data has been written to the Hive sink table:
beeline
select * from user_behavior_hive_tbl_no_partition;
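If the partition commit policy worked, the committed partitions are also visible from beeline. The following is an optional sketch; the partition values depend on the timestamps in the messages sent to Kafka (here, those from the example messages above):

```sql
-- Optional checks; partition values depend on the data written to Kafka.
SHOW PARTITIONS user_behavior_hive_tbl_no_partition;
SELECT user_id, item_id, cat_id, ts
FROM user_behavior_hive_tbl_no_partition
WHERE dy = '2021-09-08';
```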