
Creating a FlinkServer Job to Write Data to a Hudi Table

Updated on 2024-12-13 GMT+08:00

This section applies to MRS 3.1.2 or later clusters.

Scenario

This section describes how to interconnect FlinkServer with Hudi through Flink SQL jobs. When you use Flink SQL to read data from or write data to Hudi, columns of the TINYINT, SMALLINT, and TIME types cannot be defined.

Table 1 lists the read and write operations supported by Flink on Hudi COW and MOR tables.

Table 1 Flink SQL read and write operations on Hudi tables

| Flink SQL    | COW Table | MOR Table |
|--------------|-----------|-----------|
| Batch write  | Supported | Supported |
| Batch read   | Supported | Supported |
| Stream write | Supported | Supported |
| Stream read  | Supported | Supported |
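
The batch operations in Table 1 use plain INSERT and SELECT statements without the streaming options shown later. The following is a minimal sketch, assuming a hypothetical COW table path; the schema mirrors the job examples below and avoids the TINYINT, SMALLINT, and TIME types.

CREATE TABLE batch_cow(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,  -- use INT; TINYINT and SMALLINT cannot be defined
  ts INT,   -- the TIME type cannot be defined either
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/batch_cow',  -- hypothetical path
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts'
);

-- Batch write
insert into batch_cow values ('1', 'a01', 10, 10, '1');

-- Batch read (read.streaming.enabled is false by default, so the scan is bounded)
select * from batch_cow;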

Prerequisites

  • The HDFS, Yarn, Flink, and Hudi services have been installed in a cluster.
  • The client that contains the Hudi service has been installed, for example, in the /opt/client directory.
  • Flink 1.12.2 or later and Hudi 0.9.0 or later are required.
  • You have created a user assigned with the FlinkServer Admin Privilege (for example, flink_admin) for accessing the Flink web UI by referring to Creating a FlinkServer Role. The user has been added to the hadoop, hive, and kafkaadmin user groups and granted the Manager_administrator role.

Creating a Job

  1. Log in to Manager as user flink_admin and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
  2. Create a Flink SQL job by referring to Creating a FlinkServer Job. On the job development page, configure the job as follows: enter the SQL statements and start each job after its SQL passes verification. Add the following SQL examples as three separate jobs and run them in sequence.

    In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.

    NOTE:
    • CheckPoint must be enabled on the Flink web UI because data is written to a Hudi table only when a Flink SQL job triggers a checkpoint. Adjust the CheckPoint interval based on service requirements; a relatively long interval is recommended.
    • If the CheckPoint interval is too short, job exceptions may occur because data is not updated in time. Configure the CheckPoint interval at the minute level.
    • Asynchronous compaction is required when a Flink SQL job writes to an MOR table. For details about the parameters that control the compaction interval, see the Hudi official documentation at https://hudi.apache.org/docs/configurations.html.
    • The following content is available in MRS 3.2.1 or later. By default, Hudi table write statements use the Flink state index. To use the bucket index instead, add the following parameters to the WITH clause (see the sketch after this note for a complete example):
      'index.type' = 'BUCKET',
      'hoodie.bucket.index.num.buckets' = 'Number of buckets in each partition of a Hudi table',
      'hoodie.bucket.index.hash.field' = 'recordkey.field'
      • hoodie.bucket.index.num.buckets: Number of buckets in each partition of a Hudi table. Data in each partition is distributed to buckets by hash. This parameter cannot be changed after it is set during table creation or the first data write; otherwise, an exception occurs during data update.
      • hoodie.bucket.index.hash.field: Field used to calculate the hash value during bucketing. It must be a subset of the primary key and defaults to the primary key of the Hudi table. If this parameter is left blank, the value of recordkey.field is used.
    • The following content is available in MRS 3.2.1 or later. A Hudi table that uses the bucket index can be written by both the Flink and Spark engines.
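
    The following is a minimal sketch of a bucket-index table definition (MRS 3.2.1 or later). It combines the bucket index parameters above with the MOR table definition from Job 1; the table name, path, and bucket count are placeholders.
      CREATE TABLE bucket_mor(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) PARTITIONED BY (`p`) WITH (
      'connector' = 'hudi',
      'path' = 'hdfs://hacluster/tmp/hudi/bucket_mor',
      'table.type' = 'MERGE_ON_READ',
      'hoodie.datasource.write.recordkey.field' = 'uuid',
      'write.precombine.field' = 'ts',
      'write.tasks' = '4',
      -- Bucket index parameters; the bucket count here is an example value only.
      'index.type' = 'BUCKET',
      'hoodie.bucket.index.num.buckets' = '4',
      'hoodie.bucket.index.hash.field' = 'uuid'
      );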
    1. Job 1: This Flink SQL job writes data to an MOR table in streaming mode.
      CREATE TABLE stream_mor(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) PARTITIONED BY (`p`) WITH (
      'connector' = 'hudi',
      'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
      'table.type' = 'MERGE_ON_READ',
      'hoodie.datasource.write.recordkey.field' = 'uuid',
      'write.precombine.field' = 'ts',
      'write.tasks' = '4'
      );
      
      CREATE TABLE kafka(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) WITH (
      'connector' = 'kafka',
      'topic' = 'writehudi',
      'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
      'properties.group.id' = 'testGroup1',
      'scan.startup.mode' = 'latest-offset',
      'format' = 'json',
      'properties.sasl.kerberos.service.name' = 'kafka',--This parameter is not required for clusters in normal mode. Delete the comma (,) in the previous line.
      'properties.security.protocol' = 'SASL_PLAINTEXT',--This parameter is not required for clusters in normal mode.
      'properties.kerberos.domain.name' = 'hadoop.System domain name'--This parameter is not required for clusters in normal mode.
      );
      
      insert into
      stream_mor
      select
      *
      from
      kafka;
    2. Job 2: This Flink SQL job writes data to a COW table in streaming mode.
      CREATE TABLE stream_write_cow(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) PARTITIONED BY (`p`) WITH (
      'connector' = 'hudi',
      'path' = 'hdfs://hacluster/tmp/hudi/stream_cow',
      'hoodie.datasource.write.recordkey.field' = 'uuid',
      'write.precombine.field' = 'ts',
      'write.tasks' = '4'
      );
      
      CREATE TABLE kafka(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) WITH (
      'connector' = 'kafka',
      'topic' = 'writehudi',
      'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port number',
      'properties.group.id' = 'testGroup1',
      'scan.startup.mode' = 'latest-offset',
      'format' = 'json',
      'properties.sasl.kerberos.service.name' = 'kafka',--This parameter is not required for clusters in normal mode. Delete the comma (,) in the previous line.
      'properties.security.protocol' = 'SASL_PLAINTEXT',--This parameter is not required for clusters in normal mode.
      'properties.kerberos.domain.name' = 'hadoop.System domain name'--This parameter is not required for clusters in normal mode.
      );
      
      insert into
      stream_write_cow
      select
      *
      from
      kafka;
    3. Job 3: This Flink SQL job reads the MOR and COW tables in streaming mode, merges the data, and writes the merged data to Kafka. Verify and start job 3 only after jobs 1 and 2 have been started and are in the Running state. Otherwise, SQL verification may report an error indicating that the Hudi table directory cannot be found.
      CREATE TABLE stream_mor(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) PARTITIONED BY (`p`) WITH (
      'connector' = 'hudi',  
      'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',  
      'table.type' = 'MERGE_ON_READ',
      'hoodie.datasource.write.recordkey.field' = 'uuid',
      'write.precombine.field' = 'ts',
      'read.tasks' = '4',
      'read.streaming.enabled' = 'true',
      'read.streaming.check-interval' = '5',
      'read.streaming.start-commit' = 'earliest'
      );
      CREATE TABLE stream_write_cow(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) PARTITIONED BY (`p`) WITH (
      'connector' = 'hudi',
      'path' = 'hdfs://hacluster/tmp/hudi/stream_cow',
      'hoodie.datasource.write.recordkey.field' = 'uuid',
      'write.precombine.field' = 'ts',
      'read.tasks' = '4',
      'read.streaming.enabled' = 'true',
      'read.streaming.check-interval' = '5',
      'read.streaming.start-commit' = 'earliest'
      );
      
      CREATE TABLE kafka(
      uuid VARCHAR(20),
      name VARCHAR(10),
      age INT,
      ts INT,
      `p` VARCHAR(20)
      ) WITH (
      'connector' = 'kafka',
      'topic' = 'readhudi',
      'properties.bootstrap.servers' = 'IP address of the Kafka broker instance:Kafka port',
      'properties.group.id' = 'testGroup1',
      'scan.startup.mode' = 'latest-offset',
      'format' = 'json',
      'properties.sasl.kerberos.service.name' = 'kafka',--This parameter is not required for clusters in normal mode. Delete the comma (,) in the previous line.
      'properties.security.protocol' = 'SASL_PLAINTEXT',--This parameter is not required for clusters in normal mode.
      'properties.kerberos.domain.name' = 'hadoop.System domain name'--This parameter is not required for clusters in normal mode.
      );
      
      insert into 
      kafka 
      select 
      * 
      from 
      stream_mor union all select * from stream_write_cow;
    NOTE:
    • Kafka port
      • Value of sasl.port when Authentication Mode of the cluster is Security Mode, 21007 by default.
      • Value of port when Authentication Mode of the cluster is Normal Mode, 9092 by default. If the port number is set to 9092, set allow.everyone.if.no.acl.found to true. The procedure is as follows:

        Log in to FusionInsight Manager and choose Cluster > Services > Kafka. On the page that is displayed, click the Configurations tab then the All Configurations sub-tab. On the displayed page, search for allow.everyone.if.no.acl.found, set it to true, and click Save.

    • System domain name: You can log in to FusionInsight Manager, choose System > Permission > Domain and Mutual Trust, and check the value of Local Domain.

  3. Execute the following script to write data to Kafka. For details, see Managing Messages in Kafka Topics.

    sh kafka-console-producer.sh --broker-list IP address of the node where Kafka instances are deployed:Kafka port --topic Topic name --producer.config Client directory/Kafka/kafka/config/producer.properties

    In this example, the topic name is writehudi.

    sh kafka-console-producer.sh --broker-list IP address of the node where the Kafka instance is deployed:Kafka port --topic writehudi --producer.config /opt/client/Kafka/kafka/config/producer.properties

    Enter the message content.
    {"uuid": "1","name":"a01","age":10,"ts":10,"p":"1"}
    {"uuid": "2","name":"a02","age":20,"ts":20,"p":"2"}

    Press Enter to send the message.

  4. Consume the Kafka topic data to read the result of the Flink streaming read of the Hudi tables.

    sh kafka-console-consumer.sh --bootstrap-server IP address of the node where the Kafka Broker instance is deployed:Kafka port --topic Topic name --consumer.config Client directory/Kafka/kafka/config/consumer.properties --from-beginning

    In this example, the topic name is readhudi.

    sh kafka-console-consumer.sh --bootstrap-server IP address of the node where the Kafka Broker instance is deployed:Kafka port --topic readhudi --consumer.config /opt/client/Kafka/kafka/config/consumer.properties --from-beginning

    The read result is as follows (the sequence is not fixed):

    {"uuid": "1","name":"a01","age":10,"ts":10,"p":"1"}
    {"uuid": "2","name":"a02","age":20,"ts":20,"p":"2"}
    {"uuid": "1","name":"a01","age":10,"ts":10,"p":"1"}
    {"uuid": "2","name":"a02","age":20,"ts":20,"p":"2"}

Precautions for Using Flink SQL Lookup Joins with Hudi

(This topic is available for MRS 3.5.0 and later.)

  • The lookup.join.cache.ttl parameter controls how often dimension table data is reloaded. The default value is 60 minutes.
  • Do not use a Hudi table with more than 100,000 rows as a dimension table, because the data is loaded into the Flink TaskManager heap memory.
  • New and updated data in the dimension table is available to the calculation only after the next loading period.
The following is a SQL example:
CREATE TABLE hudimor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20),
  PRIMARY KEY (uuid) NOT ENFORCED
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudimor',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'lookup.join.cache.ttl' = '60min'
);
CREATE TABLE datagen(uuid varchar(20), proctime as PROCTIME()) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '1'
);
CREATE TABLE blackhole (
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) WITH ('connector' = 'blackhole');
insert into
  blackhole
select
  t1.uuid as uuid,
  t2.name as name,
  t2.age as age,
  t2.ts as ts,
  t2.p as p
FROM
  datagen AS t1
  left JOIN hudimor FOR SYSTEM_TIME AS OF t1.proctime AS t2 ON t1.uuid = t2.uuid;

WITH Parameters

Table 2 WITH parameters

| Mode  | Configuration Item          | Mandatory | Default Value     | Description |
|-------|-----------------------------|-----------|-------------------|-------------|
| Read  | read.tasks                  | No        | 4                 | Parallelism of the tasks for reading the Hudi table. |
| Read  | read.streaming.enabled      | No        | false             | Whether to enable stream read. |
| Read  | read.streaming.start-commit | No        | The latest commit | Start position (closed interval) of incremental stream and batch consumption, in yyyyMMddHHmmss format. |
| Read  | read.end-commit             | No        | The latest commit | End position (closed interval) of incremental stream and batch consumption, in yyyyMMddHHmmss format. |
| Write | write.tasks                 | No        | 4                 | Parallelism of the tasks for writing data to the Hudi table. |
| Write | index.bootstrap.enabled     | No        | false             | Whether to enable index loading. If enabled, the latest data in the stored table is loaded to the state at one time. Enable it when incremental data needs to be synchronized to full data and offline Hoodie tables are available, so that data is written in real time and remains unique. |
| Write | write.index_bootstrap.tasks | No        | 4                 | Increase this value if indexes load slowly when a job starts. A larger value improves loading efficiency, but checkpoints are blocked during the bootstrap phase. |
| Write | compaction.async.enabled    | No        | true              | Whether to enable online compaction. |
| Write | compaction.schedule.enabled | No        | true              | Whether to generate a compaction plan periodically. Enabling this is recommended even if online compaction is disabled. |
| Write | compaction.tasks            | No        | 10                | Parallelism of the tasks for compacting data in the Hudi table. |
| Write | index.state.ttl             | No        | 7D                | Duration for storing indexes, 7 days by default. A value less than 0 means indexes are stored permanently. Indexes are the core data structure for determining whether data is duplicate. For long-period updates, for example, updating data generated one month ago, increase this value. |
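
The read-side options in Table 2 can be combined for a bounded incremental read. The following is a minimal sketch that reuses the stream_mor path from the jobs above and only the options documented in Table 2; the two commit timestamps are placeholders in yyyyMMddHHmmss format.

CREATE TABLE stream_mor_incr(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.recordkey.field' = 'uuid',
  'write.precombine.field' = 'ts',
  'read.tasks' = '4',
  -- Closed interval of commits to consume; replace the placeholder timestamps.
  'read.streaming.start-commit' = '20240101000000',
  'read.end-commit' = '20240102000000'
);

select * from stream_mor_incr;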

Synchronizing Metadata from Flink on Hudi to Hive

After this feature is enabled, Flink automatically creates the Hudi table in Hive and adds partitions to it when writing data. Services such as SparkSQL and Hive can then read data from the Hudi table.

The metadata can be synchronized with either of the following methods. The JDBC mode is used as an example in the following steps.

This step is required for MRS 3.2.0 or later.
  • Synchronizing metadata to Hive in JDBC mode
    CREATE TABLE stream_mor(
    uuid VARCHAR(20),
    name VARCHAR(10),
    age INT,
    ts INT,
    `p` VARCHAR(20)
    ) PARTITIONED BY (`p`) WITH (
    'connector' = 'hudi',
    'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
    'table.type' = 'MERGE_ON_READ',
    'hive_sync.enable' = 'true',
    'hive_sync.table' = 'Name of the table to be synchronized to Hive',
    'hive_sync.db' = 'Name of the database to be synchronized to Hive',
    'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
    'hive_sync.jdbc_url' = 'Value of CLIENT_HIVE_URI in the component_env file on the Hive client'
    );
    NOTICE:
    • hive_sync.jdbc_url: If the value of CLIENT_HIVE_URI in the component_env file on the Hive client contains backslashes (\), delete them.
    • To use Hive-style partitioning, add the following parameters (see the sketch after these examples):
      • 'hoodie.datasource.write.hive_style_partitioning' = 'true'
      • 'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
    • When Flink on Hudi synchronizes data to Hive, note that Hudi is case sensitive while Hive is not. Do not use uppercase letters in Hudi table field names; otherwise, data may fail to be read or written.
  • Synchronizing metadata to Hive in HMS mode
    CREATE TABLE stream_mor(
    uuid VARCHAR(20),
    name VARCHAR(10),
    age INT,
    ts INT,
    `p` VARCHAR(20)
    ) PARTITIONED BY (`p`) WITH (
    'connector' = 'hudi',
    'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
    'table.type' = 'MERGE_ON_READ',
    'hive_sync.enable' = 'true',
    'hive_sync.table' = 'Name of the table to be synchronized to Hive',
    'hive_sync.db' = 'Name of the database to be synchronized to Hive',
    'hive_sync.mode' = 'hms',
    'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
    'properties.hive.metastore.kerberos.principal' = 'Value of hive.metastore.kerberos.principal in the hive-site.xml file on the Hive client'
    );
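
To use Hive-style partitioning, the two parameters from the notice above are added to the same WITH clause. The following is a minimal sketch based on the JDBC example; the hive_sync values remain placeholders to be filled in from your Hive client configuration.

CREATE TABLE stream_mor(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts INT,
  `p` VARCHAR(20)
) PARTITIONED BY (`p`) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hacluster/tmp/hudi/stream_mor',
  'table.type' = 'MERGE_ON_READ',
  'hive_sync.enable' = 'true',
  'hive_sync.table' = 'Name of the table to be synchronized to Hive',
  'hive_sync.db' = 'Name of the database to be synchronized to Hive',
  'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
  'hive_sync.jdbc_url' = 'Value of CLIENT_HIVE_URI in the component_env file on the Hive client',
  -- Hive-style partitioning (optional; see the notice above)
  'hoodie.datasource.write.hive_style_partitioning' = 'true',
  'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
);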

The following procedure uses the JDBC mode as an example:

  1. Log in to Manager as user flink_admin and choose Cluster > Services > Flink. In the Basic Information area, click the link on the right of Flink WebUI to access the Flink web UI.
  2. Create a Flink SQL job by referring to Creating a FlinkServer Job. On the job development page, configure the job as follows: enter the following SQL statements and start the job after the SQL passes verification.

    In Basic Parameter, select Enable CheckPoint, set Time Interval(ms) to 60000, and retain the default value for Mode.

    CREATE TABLE stream_mor2(
    uuid VARCHAR(20),
    name VARCHAR(10),
    age INT,
    ts INT,
    `p` VARCHAR(20)
    ) PARTITIONED BY (`p`) WITH (
    'connector' = 'hudi',
    'path' = 'hdfs://hacluster/tmp/hudi/stream_mor2',
    'table.type' = 'MERGE_ON_READ',
    'hoodie.datasource.write.recordkey.field' = 'uuid',
    'write.precombine.field' = 'ts',
    'write.tasks' = '4',
    'hive_sync.enable' = 'true',
    'hive_sync.table' = 'Name of the table to be synchronized to Hive, for example, stream_mor2',
    'hive_sync.db' = 'Name of the database to be synchronized to Hive, for example, default',
    'hive_sync.metastore.uris' = 'Value of hive.metastore.uris in the hive-site.xml file on the Hive client',
    'hive_sync.jdbc_url' = 'Value of CLIENT_HIVE_URI in the component_env file on the Hive client'
    );
    CREATE TABLE datagen (
    uuid varchar(20), name varchar(10), age int, ts INT, p varchar(20)
    ) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '1',
    'fields.p.length' = '1'
    );

    insert into stream_mor2 select * from datagen;

  3. Wait for the Flink job to run for a while; the random test data generated by datagen is continuously written to the Hudi table. You can click More > Job Monitoring to go to the native Flink UI and view the job status.
  4. Log in to the node where the client is deployed, load the environment variables, run the beeline command to log in to the Hive client, and run SQL statements to check whether the Hudi sink table has been created in Hive and whether data can be read from it.

    cd /opt/hadoopclient

    source bigdata_env

    beeline

    desc formatted default.stream_mor2;

    select * from default.stream_mor2 limit 5;

    show partitions default.stream_mor2;
