Interconnecting FlinkServer with Hive

Updated on 2024-11-29 GMT+08:00

Scenario

FlinkServer interconnects with Hive through Hive MetaStore, so the MetaStore function must be enabled for Hive. Hive tables can then be used as source tables, sink tables, and dimension tables.

Kafka in security mode is used as an example.
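
The procedure below demonstrates the sink case. For orientation, the following sketch shows the other two usages once a Hive catalog (named myhive, as in the procedure) is registered: reading a Hive table as a batch source, and joining one as a dimension table with a lookup join. The orders and dim_user tables and their columns are hypothetical, and o.proc_time is assumed to be a processing-time attribute of the stream table.

  -- Sketch only: read a Hive table as a batch source through the myhive catalog
  SELECT user_id, item_id
  FROM myhive.`default`.user_behavior_hive_tbl_no_partition;

  -- Sketch only: join a Hive table as a dimension table (lookup join);
  -- orders and dim_user are illustrative names, not part of this procedure
  SELECT o.order_id, o.user_id, u.user_name
  FROM orders AS o
  JOIN myhive.`default`.dim_user FOR SYSTEM_TIME AS OF o.proc_time AS u
    ON o.user_id = u.user_id;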

Prerequisites

  • Services such as HDFS, Yarn, Kafka, Flink, and Hive have been installed in the cluster.
  • The client that contains the Hive service has been installed in a directory, for example, /opt/client.
  • Flink 1.12.2 or later and Hive 3.1.0 or later are supported.
  • You have created a user with the FlinkServer Admin Privilege (for example, flink_admin) for accessing the Flink web UI. For details, see Creating a FlinkServer Role.
  • You have obtained the client configuration file and credential of the user for accessing the Flink web UI. For details, see "Note" in Creating a Cluster Connection.

Procedure

The following example shows how to interconnect a Kafka mapping table with Hive.

  1. Log in to the Flink web UI as user flink_admin. For details, see Accessing the Flink Web UI.
  2. Create a cluster connection, for example, flink_hive.

    1. Choose System Management > Cluster Connection Management. The Cluster Connection Management page is displayed.
    2. Click Create Cluster Connection. On the displayed page, enter information by referring to Table 1 and click Test. After the test is successful, click OK.
      Table 1 Parameters for creating a cluster connection

      | Parameter | Description | Example Value |
      | --- | --- | --- |
      | Cluster Connection Name | Name of the cluster connection, which can contain a maximum of 100 characters. Only letters, digits, and underscores (_) are allowed. | flink_hive |
      | Description | Description of the cluster connection. | - |
      | Version | Select a cluster version. | MRS 3 |
      | Secure Version | For a cluster in security mode, select Yes, enter the username, and upload the user credential. Otherwise, select No. | Yes |
      | Username | The user must have the minimum permissions for accessing services in the cluster. The name can contain a maximum of 100 characters; only letters, digits, and underscores (_) are allowed. Available only when Secure Version is set to Yes. | flink_admin |
      | Client Profile | Client profile of the cluster, in TAR format. | - |
      | User Credential | User authentication credential from FusionInsight Manager, in TAR format. Available only when Secure Version is set to Yes. Files can be uploaded only after the username is entered. | User credential of flink_admin |

  3. Create a Flink SQL job, for example, flinktest1.

    1. Click Job Management. The job management page is displayed.
    2. Click Create Job. On the displayed job creation page, set parameters by referring to Table 2 and click OK. The job development page is displayed.
      Table 2 Parameters for creating a job

      | Parameter | Description | Example Value |
      | --- | --- | --- |
      | Type | Job type, which can be Flink SQL or Flink Jar. | Flink SQL |
      | Name | Job name, which can contain a maximum of 64 characters. Only letters, digits, and underscores (_) are allowed. | flinktest1 |
      | Task Type | Type of the job data source, which can be a stream job or a batch job. | Stream job |
      | Description | Job description, which can contain a maximum of 100 characters. | - |

  4. On the job development page, enter the following statements and click Check Semantic to check the input content.

    CREATE TABLE test_kafka (
      user_id varchar,
      item_id varchar,
      cat_id varchar,
      zw_test timestamp
    ) WITH (
      'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
      'format' = 'json',
      'topic' = 'zw_tset_kafka',
      'connector' = 'kafka',
      'scan.startup.mode' = 'latest-offset',
      'properties.sasl.kerberos.service.name' = 'kafka',
      'properties.security.protocol' = 'SASL_PLAINTEXT',
      'properties.kerberos.domain.name' = 'hadoop.System domain name'
    );
    CREATE CATALOG myhive WITH (
      'type' = 'hive',
      'hive-version' = '3.1.0',
      'default-database' = 'default',
      'cluster.name' = 'flink_hive'
    );
    USE CATALOG myhive;
    SET table.sql-dialect = hive;
    CREATE TABLE user_behavior_hive_tbl_no_partition (
      user_id STRING,
      item_id STRING,
      cat_id STRING,
      ts timestamp
    ) PARTITIONED BY (dy STRING, ho STRING, mi STRING) STORED AS TEXTFILE TBLPROPERTIES (
      'partition.time-extractor.timestamp-pattern' = '$dy $ho:$mi:00',
      'sink.partition-commit.trigger' = 'process-time',
      'sink.partition-commit.delay' = '0S',
      'sink.partition-commit.policy.kind' = 'metastore,success-file'
    );
    INSERT into
      user_behavior_hive_tbl_no_partition
    SELECT
      user_id,
      item_id,
      cat_id,
      zw_test,
      DATE_FORMAT(zw_test, 'yyyy-MM-dd'),
      DATE_FORMAT(zw_test, 'HH'),
      DATE_FORMAT(zw_test, 'mm')
    FROM
      default_catalog.default_database.test_kafka;
    NOTE:
    • The IP address and port number of the Kafka broker instance are obtained as follows:
      • To obtain the instance IP address, log in to FusionInsight Manager, choose Cluster > Services > Kafka, click Instance, and query the instance IP address on the instance list page.
      • If Kerberos authentication is enabled for the cluster (the cluster is in security mode), the Broker port number is the value of sasl.port. The default value is 21007.
      • If Kerberos authentication is disabled for the cluster (the cluster is in normal mode), the broker port number is the value of port. The default value is 9092. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:

        Log in to FusionInsight Manager and choose Cluster > Services > Kafka. Click Configurations then All Configurations. On the page that is displayed, search for allow.everyone.if.no.acl.found, set it to true, and click Save.

    • The value of 'cluster.name' is the name of the cluster connection created in step 2.
    • System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.
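
    For reference, the Kafka table DDL above might look as follows once the placeholders are substituted. The broker address 192.168.67.78:21007 and the domain hadoop.hadoop.com are illustrative values only (21007 is the default sasl.port in security mode); use the broker instance IP address, port number, and Local Domain of your own cluster.

    CREATE TABLE test_kafka (
      user_id varchar,
      item_id varchar,
      cat_id varchar,
      zw_test timestamp
    ) WITH (
      -- 192.168.67.78:21007 is a made-up example; use your broker IP and port
      'properties.bootstrap.servers' = '192.168.67.78:21007',
      'format' = 'json',
      'topic' = 'zw_tset_kafka',
      'connector' = 'kafka',
      'scan.startup.mode' = 'latest-offset',
      'properties.sasl.kerberos.service.name' = 'kafka',
      'properties.security.protocol' = 'SASL_PLAINTEXT',
      -- hadoop.hadoop.com assumes the Local Domain is HADOOP.COM; check your cluster
      'properties.kerberos.domain.name' = 'hadoop.hadoop.com'
    );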

  5. After the job is developed, in the Basic Parameter area, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode. Checkpointing must be enabled because, in streaming mode, the Hive sink finalizes its files on checkpoints; without checkpointing, no data becomes visible in the table.
  6. Click Submit in the upper left corner to submit the job.
  7. After the job is successfully executed, choose More > Job Monitoring to view the job running details.
  8. Execute the following commands to view the topic and write data to Kafka. For details, see Managing Messages in Kafka Topics.

    ./kafka-topics.sh --list --zookeeper IP address of the ZooKeeper quorumpeer instance:ZooKeeper port number/kafka

    sh kafka-console-producer.sh --broker-list IP address of the node where Kafka instances reside:Kafka port number --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties

    For example, if the topic name is zw_tset_kafka, run the following command: sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is located:Kafka port number --topic zw_tset_kafka --producer.config /opt/client/Kafka/kafka/config/producer.properties

    Enter the message content.
    {"user_id": "3","item_id":"333333","cat_id":"cat333","zw_test":"2021-09-08 09:08:01"}
    {"user_id": "4","item_id":"444444","cat_id":"cat444","zw_test":"2021-09-08 09:08:01"} 

    Press Enter to send the message.

    NOTE:
    • IP address of the ZooKeeper quorumpeer instance

      To obtain the IP addresses of all ZooKeeper quorumpeer instances, log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Instance and view the IP addresses of all the hosts where the quorumpeer instances are located.

    • Port number of the ZooKeeper client

      Log in to FusionInsight Manager and choose Cluster > Services > ZooKeeper. On the displayed page, click Configurations and check the value of clientPort.

  9. Run the following commands to check whether data has been written to the Hive sink table:

    beeline

    select * from user_behavior_hive_tbl_no_partition;
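
    If the job is running correctly, the query returns the records produced from the two sample Kafka messages above. Optionally, because 'sink.partition-commit.policy.kind' includes metastore, you can also verify that the partitions derived from the sample timestamp (2021-09-08 09:08:01) were committed. A sketch:

    -- List the committed partitions of the sink table
    show partitions user_behavior_hive_tbl_no_partition;

    -- Query the partition written by the sample messages
    select * from user_behavior_hive_tbl_no_partition
    where dy = '2021-09-08' and ho = '09' and mi = '08';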
