
Creating a FlinkServer Job to Interconnect with a Hive Table

This section applies to MRS 3.1.2 or later clusters.

Scenarios

FlinkServer interconnects with Hive through the Hive MetaStore, so the MetaStore function must be enabled for Hive. Hive can be used as a sink table or a dimension table.

The example SQL statements in this section apply to clusters in security mode. The comments in the statements indicate which parameters to delete for clusters in normal mode.

Prerequisites

  • Services such as HDFS, Yarn, Kafka, Flink, and Hive (the service name must be Hive) have been installed in the cluster.
  • The client that contains the Hive service has been installed, for example, in the /opt/client directory. (A sketch of loading the client environment is provided after this list.)
  • Flink 1.12.2 or later and Hive 3.1.0 or later are supported.
  • You have created a user assigned with the FlinkServer Admin Privilege (for example, flink_admin) for accessing the Flink web UI by referring to Creating a FlinkServer Role.
  • You have obtained the client configuration file and credential of the user for accessing the Flink web UI. For details, see "Note" in Procedure.
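The client commands used later in this section (kafka-topics.sh, kafka-console-producer.sh, beeline) are run from the installed client. The following is a minimal sketch of loading the client environment, assuming the client is installed in /opt/client; for clusters in security mode, authenticate first with a user that has the required Kafka and Hive permissions (flink_admin is used here only as an illustration):

  # Load the client environment variables (client path is an assumption from the prerequisites).
  source /opt/client/bigdata_env
  # In security mode, obtain a Kerberos ticket before running client commands.
  kinit flink_admin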

Creating a Job

Hive as a Sink Table

The following example shows how to write data from a Kafka mapping table to a Hive sink table.

  1. Log in to the Flink web UI as user flink_admin. For details, see Accessing the FlinkServer Web UI.
  2. Create a cluster connection, for example, flink_hive.

    1. Choose System Management > Cluster Connection Management. The Cluster Connection Management page is displayed.
    2. Click Create Cluster Connection. On the displayed page, enter information by referring to Table 1 and click Test. After the test is successful, click OK.
      Table 1 Parameters for creating a cluster connection

      • Cluster Connection Name: Name of the cluster connection, which can contain a maximum of 100 characters. Only letters, digits, and underscores (_) are allowed. Example value: flink_hive
      • Description: Description of the cluster connection. Example value: -
      • Version: Select a cluster version. Example value: MRS 3
      • Secure Version: Select Yes for clusters in security mode; you must then enter the username and upload the user credential. Select No for clusters in normal mode. Example value: Yes
      • Username: The user must have the minimum permissions required for accessing services in the cluster. The name can contain a maximum of 100 characters. Only letters, digits, and underscores (_) are allowed. This parameter is available only when Secure Version is set to Yes. Example value: flink_admin
      • Client Profile: Client profile of the cluster, in TAR format. Example value: -
      • User Credential: User authentication credential obtained from FusionInsight Manager, in TAR format. This parameter is available only when Secure Version is set to Yes. Files can be uploaded only after the username is entered. Example value: User credential of flink_admin

  3. Create a Flink SQL job, for example, flinktest1.

    1. Click Job Management. The job management page is displayed.
    2. Click Create Job. On the displayed job creation page, set parameters by referring to Table 2 and click OK. The job development page is displayed.
      Table 2 Parameters for creating a job

      • Type: Job type, which can be Flink SQL or Flink Jar. Example value: Flink SQL
      • Name: Job name, which can contain a maximum of 64 characters. Only letters, digits, and underscores (_) are allowed. Example value: flinktest1
      • Task Type: Type of the job, which can be a stream job or a batch job. Example value: Stream job
      • Description: Job description, which can contain a maximum of 100 characters. Example value: -

  4. On the job development page, enter the following statements and click Check Semantic to check the entered content.

    CREATE TABLE test_kafka (
      user_id varchar,
      item_id varchar,
      cat_id varchar,
      zw_test timestamp
    ) WITH (
      'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
      'format' = 'json',
      'topic' = 'zw_test_kafka',
      'connector' = 'kafka',
      'scan.startup.mode' = 'latest-offset',
      'properties.sasl.kerberos.service.name' = 'kafka', --This parameter is not required for clusters in normal mode. If the three security parameters are deleted, also delete the comma at the end of the preceding line.
      'properties.security.protocol' = 'SASL_PLAINTEXT', --This parameter is not required for clusters in normal mode.
      'properties.kerberos.domain.name' = 'hadoop.System domain name' --This parameter is not required for clusters in normal mode.
    );
    CREATE CATALOG myhive WITH (
      'type' = 'hive',
      'hive-version' = '3.1.0',
      'default-database' = 'default',
      'cluster.name' = 'flink_hive'
    );
    use catalog myhive;
    set table.sql-dialect = hive;
    create table user_behavior_hive_tbl (
        user_id STRING,
        item_id STRING,
        cat_id STRING,
        ts timestamp
      ) PARTITIONED BY (dy STRING, ho STRING, mi STRING) stored as textfile TBLPROPERTIES (
        'partition.time-extractor.timestamp-pattern' = '$dy $ho:$mi:00',
        'sink.partition-commit.trigger' = 'process-time',
        'sink.partition-commit.delay' = '0S',
        'sink.partition-commit.policy.kind' = 'metastore,success-file'
      );
    INSERT into
      user_behavior_hive_tbl
    SELECT
      user_id,
      item_id,
      cat_id,
      zw_test,
      DATE_FORMAT(zw_test, 'yyyy-MM-dd'),
      DATE_FORMAT(zw_test, 'HH'),
      DATE_FORMAT(zw_test, 'mm')
    FROM
      default_catalog.default_database.test_kafka;
    • The Kafka Broker instance IP address and Kafka port number are as follows:
      • To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instances, and query the instance IP address on the instance list page.
      • If Kerberos authentication is enabled for the cluster (the cluster is in security mode), the Kafka port number is the value of sasl.port. The default value is 21007.
      • If Kerberos authentication is disabled for the cluster (the cluster is in normal mode), the Kafka port number is the value of port. The default value is 9092. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:

        Log in to FusionInsight Manager and choose Cluster > Services > Kafka. On the page that is displayed, click Configurations and then All Configurations. On the displayed page, search for allow.everyone.if.no.acl.found, set it to true, and click Save.

    • The value of 'cluster.name' is the name of the cluster connection created in step 2.
    • System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
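    To make the partitioning explicit, the following comments (an illustrative sketch, not part of the job SQL) trace one sample record through the INSERT statement above:

    -- Sample Kafka record: {"user_id":"3","item_id":"333333","cat_id":"cat333","zw_test":"2021-09-08 09:08:01"}
    -- DATE_FORMAT(zw_test, 'yyyy-MM-dd') -> '2021-09-08'  (partition column dy)
    -- DATE_FORMAT(zw_test, 'HH')         -> '09'          (partition column ho)
    -- DATE_FORMAT(zw_test, 'mm')         -> '08'          (partition column mi)
    -- partition.time-extractor.timestamp-pattern '$dy $ho:$mi:00' rebuilds '2021-09-08 09:08:00'
    -- from these partition values when Flink needs a partition time.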

  5. After the job is developed, in Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.
  6. Click Submit in the upper left corner to submit the job.
  7. After the job is successfully executed, choose More > Job Monitoring to view the job running details.
  8. Execute the following commands to view the topic and write data to Kafka. For details, see Managing Messages in Kafka Topics.

    Check the Kafka topic.

    ./kafka-topics.sh --list --bootstrap-server Service IP address of the Kafka Broker instance:Kafka port --command-config Client directory/Kafka/kafka/config/client.properties

    Write data to Kafka.

    sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties

    In this example, the topic name is zw_test_kafka.

    sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic zw_test_kafka --producer.config /opt/client/Kafka/kafka/config/producer.properties
    Enter the message content.
    {"user_id": "3","item_id":"333333","cat_id":"cat333","zw_test":"2021-09-08 09:08:01"}
    {"user_id": "4","item_id":"444444","cat_id":"cat444","zw_test":"2021-09-08 09:08:01"} 

    Press Enter to send the message.
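    If the zw_test_kafka topic does not exist yet, it can also be created explicitly instead of relying on automatic topic creation. A minimal sketch, assuming security mode (port 21007) and the client in /opt/client; the partition and replica counts are illustrative:

    ./kafka-topics.sh --create --topic zw_test_kafka --partitions 1 --replication-factor 1 --bootstrap-server Service IP address of the Kafka Broker instance:21007 --command-config /opt/client/Kafka/kafka/config/client.properties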

  9. Run the following commands to check whether data has been written to the Hive sink table:

    beeline
    select * from user_behavior_hive_tbl;
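    If no rows are returned, you can also check whether any partition has been committed to the Hive MetaStore. A minimal sketch in Beeline, based on the table created above:

    -- Each committed partition should appear as dy=.../ho=.../mi=...
    show partitions user_behavior_hive_tbl;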

Hive as a Dimension Table

  1. Log in to the Hive client by referring to Using the Hive Client, create a Hive table, and insert data.

    CREATE TABLE hive3 (id int, name string) PARTITIONED BY (dy STRING, ho STRING, mi STRING)
    STORED AS textfile TBLPROPERTIES (
    'partition.time-extractor.timestamp-pattern'='$dy $ho:$mi:00',
    'streaming-source.enable'='true',
    'streaming-source.partition.include'='all',
    'streaming-source.monitor-interval'='2m');

    insert into table hive3 values('1','aname','sss','name1','company1');
    insert into table hive3 values('2','bname','sss','name1','company1');
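    The INSERT statements above write to a partitioned table without an explicit PARTITION clause and therefore rely on dynamic partitioning. Whether additional settings are needed depends on the cluster defaults; if the inserts fail with a dynamic-partition error, the following standard Hive parameters can be set in the same Beeline session before retrying:

    set hive.exec.dynamic.partition=true;
    set hive.exec.dynamic.partition.mode=nonstrict;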

  2. Log in to the Flink web UI as user flink_admin. For details, see Accessing the FlinkServer Web UI.
  3. Create a cluster connection, for example, flink_hive1.

    1. Choose System Management > Cluster Connection Management. The Cluster Connection Management page is displayed.
    2. Click Create Cluster Connection. On the displayed page, enter information by referring to Table 3 and click Test. After the test is successful, click OK.
      Table 3 Parameters for creating a cluster connection

      • Cluster Connection Name: Name of the cluster connection, which can contain a maximum of 100 characters. Only letters, digits, and underscores (_) are allowed. Example value: flink_hive1
      • Description: Description of the cluster connection. Example value: -
      • Version: Select a cluster version. Example value: MRS 3
      • Secure Version: Select Yes for clusters in security mode; you must then enter the username and upload the user credential. Select No for clusters in normal mode. Example value: Yes
      • Username: The user must have the minimum permissions required for accessing services in the cluster. The username can contain a maximum of 100 characters. Only letters, digits, and underscores (_) are allowed. This parameter is available only when Secure Version is set to Yes. Example value: flink_admin
      • Client Profile: Client profile of the cluster, in TAR format. Example value: -
      • User Credential: User authentication credential obtained from FusionInsight Manager, in TAR format. This parameter is available only when Secure Version is set to Yes. Files can be uploaded only after the username is entered. Example value: User credential of flink_admin

  4. Create a Flink SQL job, for example, flinktest2.

    1. Click Job Management. The job management page is displayed.
    2. Click Create Job. On the displayed job creation page, set parameters by referring to Table 4 and click OK. The job development page is displayed.
      Table 4 Parameters for creating a job

      • Type: Job type, which can be Flink SQL or Flink Jar. Example value: Flink SQL
      • Name: Job name, which can contain a maximum of 64 characters. Only letters, digits, and underscores (_) are allowed. Example value: flinktest2
      • Task Type: Type of the job, which can be a stream job or a batch job. Example value: Stream job
      • Description: Job description, which can contain a maximum of 100 characters. Example value: -

  5. On the job development page, enter the following statements and click Check Semantic to check the entered content.

    CREATE CATALOG myhive WITH (
      'type' = 'hive',
      'hive-version' = '3.1.0',
      'default-database' = 'default',
      'cluster.name' = 'flink_hive1'
    );
    use catalog myhive;
    set table.sql-dialect = hive;
    create table if not exists hive3 (id int, name string) stored as textfile TBLPROPERTIES (
      'streaming-source.enable' = 'false',
      'streaming-source.partition.include' = 'all',
      'lookup.join.cache.ttl' = '5 min'
    );
    use catalog default_catalog;
    set table.sql-dialect = default;
    CREATE TABLE kafka_source (
      id int, 
      address string, 
      proctime as PROCTIME()
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'test_source',
      'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
      'scan.startup.mode' = 'latest-offset',
      'value.format' = 'csv',
      'properties.sasl.kerberos.service.name' = 'kafka', --This parameter is not required for clusters in normal mode. If the three security parameters are deleted, also delete the comma at the end of the preceding line.
      'properties.security.protocol' = 'SASL_PLAINTEXT', --This parameter is not required for clusters in normal mode.
      'properties.kerberos.domain.name' = 'hadoop.System domain name' --This parameter is not required for clusters in normal mode.
    );
    CREATE TABLE kafka_sink(
      id int,
      address string,
      name string
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'test_sink',
      'properties.bootstrap.servers' = 'Service IP address of the Kafka Broker instance:Kafka port',
      'scan.startup.mode' = 'latest-offset',
      'value.format' = 'csv',
      'properties.sasl.kerberos.service.name' = 'kafka', --This parameter is not required for clusters in normal mode. If the three security parameters are deleted, also delete the comma at the end of the preceding line.
      'properties.security.protocol' = 'SASL_PLAINTEXT', --This parameter is not required for clusters in normal mode.
      'properties.kerberos.domain.name' = 'hadoop.System domain name' --This parameter is not required for clusters in normal mode.
    );
    INSERT INTO
      default_catalog.default_database.kafka_sink
    select
      t1.id,
      t1.address,
      t2.name
    from
      default_catalog.default_database.kafka_source as t1
      join myhive.`default`.hive3 FOR SYSTEM_TIME AS OF t1.proctime AS t2 ON t1.id = t2.id;
    • The IP address and port number of the Kafka Broker instance are as follows:
      • To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instances, and query the instance IP address on the instance list page.
      • If Kerberos authentication is enabled for the cluster (the cluster is in security mode), the Broker port number is the value of sasl.port. The default value is 21007.
      • If Kerberos authentication is disabled for the cluster (the cluster is in normal mode), the Broker port number is the value of port. The default value is 9092. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:

        Log in to FusionInsight Manager and choose Cluster > Services > Kafka. On the page that is displayed, click Configurations and then All Configurations. On the displayed page, search for allow.everyone.if.no.acl.found, set it to true, and click Save.

    • The value of 'cluster.name' is the name of the cluster connection created in step 3.
    • System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
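    The job reads from the test_source topic and writes to the test_sink topic. If these topics do not exist yet, the following is a minimal sketch for creating them before submitting the job, assuming security mode (port 21007) and the client in /opt/client; the partition and replica counts are illustrative:

    ./kafka-topics.sh --create --topic test_source --partitions 1 --replication-factor 1 --bootstrap-server Service IP address of the Kafka Broker instance:21007 --command-config /opt/client/Kafka/kafka/config/client.properties
    ./kafka-topics.sh --create --topic test_sink --partitions 1 --replication-factor 1 --bootstrap-server Service IP address of the Kafka Broker instance:21007 --command-config /opt/client/Kafka/kafka/config/client.properties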

  6. After the job is developed, in Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.
  7. Click Submit in the upper left corner to submit the job.
  8. After the job is successfully executed, choose More > Job Monitoring to view the job running details.
  9. Execute the following commands to view the topic and write data to Kafka. For details, see Managing Messages in Kafka Topics.

    Check the Kafka topic.

    ./kafka-topics.sh --list --bootstrap-server Service IP address of the Kafka Broker instance:Kafka port --command-config Client directory/Kafka/kafka/config/client.properties

    Write data to Kafka.

    sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka Broker instance is deployed:Kafka port --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties

    Enter the message content.
    1,city1
    2,city2

    Press Enter to send the message.

  10. Run the following command to check whether data is received in the sink table, that is, check whether data is properly written to the Kafka topic after step 9 is performed. For details, see Managing Messages in Kafka Topics.

    sh kafka-console-consumer.sh --topic test_sink --bootstrap-server Service IP address of the Kafka broker instance:Kafka port --consumer.config /opt/client/Kafka/kafka/config/consumer.properties

    The command output is as follows:

    1,city1,aname
    2,city2,bname