
Hudi Table Model Design Specifications

Rules

  • Hudi tables must have a reasonable primary key.

    Hudi tables provide data update and idempotent write capabilities, which require every table to have a primary key. An inappropriate primary key results in duplicate data. The primary key can be a single field or a composite key, and its fields must not contain null or empty values. The following examples show how to set the primary key.

    SparkSQL:

    -- Specify the primary key using primaryKey. If it is a composite primary key, separate multiple keys with commas (,).
    create table hudi_table (
      id1 int,
      id2 int,
      name string,
      price double
    ) using hudi
    options (
      primaryKey = 'id1,id2',
      preCombineField = 'price'
    );

    SparkDataSource:

    // Specify the primary key through hoodie.datasource.write.recordkey.field.
    df.write.format("hudi").
      option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").
      option("hoodie.datasource.write.precombine.field", "price").
      option("hoodie.datasource.write.recordkey.field", "id1,id2").
      // Set the remaining write options as required, then save to the table's storage path.
      mode("append").
      save(basePath)  // basePath: storage path of the Hudi table

    FlinkSQL:

    -- Specify the primary key through hoodie.datasource.write.recordkey.field.
    create table hudi_table(
      id1 int,
      id2 int,
      name string,
      price double
    ) partitioned by (name) with (
      'connector' = 'hudi',
      'hoodie.datasource.write.recordkey.field' = 'id1,id2',
      'write.precombine.field' = 'price');
  • The precombine field must be configured for Hudi tables.

    During data synchronization, duplicate writes and out-of-order data are unavoidable, for example, when abnormal data is restored or a write program is restarted. Setting a reasonable precombine field ensures data accuracy: old data never overwrites new data, which achieves idempotent writes. The precombine field can be the update timestamp in the service table or the commit timestamp from the database, and it must not contain null or empty values. The following examples show how to set the precombine field.

    SparkSQL:

    -- Specify the precombine field using preCombineField.
    create table hudi_table (
      id1 int,
      id2 int,
      name string,
      price double
    ) using hudi
    options (
      primaryKey = 'id1,id2',
      preCombineField = 'price'
    );

    SparkDataSource:

    // Specify the precombine field using hoodie.datasource.write.precombine.field.
    df.write.format("hudi").
      option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").
      option("hoodie.datasource.write.precombine.field", "price").
      option("hoodie.datasource.write.recordkey.field", "id1,id2").
      // Set the remaining write options as required, then save to the table's storage path.
      mode("append").
      save(basePath)  // basePath: storage path of the Hudi table

    FlinkSQL:

    -- Specify the precombine field using write.precombine.field.
    create table hudi_table(
      id1 int,
      id2 int,
      name string,
      price double
    ) partitioned by (name) with (
      'connector' = 'hudi',
      'hoodie.datasource.write.recordkey.field' = 'id1,id2',
      'write.precombine.field' = 'price');
  • MOR tables are used for stream computing.

    Stream computing requires low latency and high-performance streaming read/write capabilities. Of the two Hudi table models, Merge on Read (MOR) and Copy on Write (COW), MOR tables perform better for streaming reads and writes, so they are preferred in stream computing scenarios. The following table compares the read/write performance of MOR and COW tables, followed by a minimal MOR table sketch.

    Dimension       MOR Table    COW Table
    Stream write    High         Low
    Stream read     High         Low
    Batch write     High         Low
    Batch read      Low          High
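
    As a minimal sketch, a MOR table for streaming writes can be declared in FlinkSQL by setting 'table.type' to MERGE_ON_READ (the default is COPY_ON_WRITE); the table name, fields, and storage path below are placeholders.

    FlinkSQL:

    -- Declare a MOR table for streaming writes; table name, fields, and path are placeholders.
    create table hudi_mor_table(
      id1 int,
      id2 int,
      name string,
      price double
    ) partitioned by (name) with (
      'connector' = 'hudi',
      'path' = 'hdfs:///tmp/hudi/hudi_mor_table',             -- placeholder storage path
      'table.type' = 'MERGE_ON_READ',                         -- MOR model; the default is COPY_ON_WRITE
      'hoodie.datasource.write.recordkey.field' = 'id1,id2',
      'write.precombine.field' = 'price');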

  • MOR tables are used for real-time data ingestion.

    Real-time data ingestion usually requires minute-level latency. Based on the comparison of the two Hudi table models above, MOR tables are recommended for real-time data ingestion scenarios.
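
    As one possible sketch, a MOR table can also be declared in SparkSQL by setting the type property to 'mor'; the table and field names below are placeholders.

    SparkSQL:

    -- Declare a MOR table; type = 'mor' selects the Merge on Read model.
    create table hudi_mor_table (
      id1 int,
      id2 int,
      name string,
      price double
    ) using hudi
    options (
      type = 'mor',
      primaryKey = 'id1,id2',
      preCombineField = 'price'
    );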

  • Hudi table names and column names should be in lowercase.

    When multiple engines read and write the same Hudi table, using lowercase table and column names avoids problems caused by differences in case-sensitivity handling across engines.

Recommendations

  • For Spark batch processing scenarios that are not sensitive to write latency, use COW tables.

    The COW model has a write amplification issue, so its write speed is slower, but COW tables offer excellent read performance. Because batch processing is not sensitive to write latency, COW tables are a good fit.
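
    A minimal SparkSQL sketch of a COW table, with placeholder table and field names; type = 'cow' selects the Copy on Write model.

    SparkSQL:

    -- Declare a COW table; type = 'cow' selects the Copy on Write model.
    create table hudi_cow_table (
      id1 int,
      id2 int,
      name string,
      price double
    ) using hudi
    options (
      type = 'cow',
      primaryKey = 'id1,id2',
      preCombineField = 'price'
    );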

  • Hive metadata synchronization must be enabled for Hudi table write tasks.

    SparkSQL is natively integrated with Hive, so metadata requires no extra handling there. This recommendation applies when Hudi tables are written through the Spark DataSource API or Flink: in both cases, add the configuration that synchronizes metadata to Hive. This keeps the metadata of Hudi tables unified in the Hive metadata service, which simplifies later cross-engine data operations and data management.
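
    The following FlinkSQL sketch shows one way to add this configuration; the storage path, metastore URI, database, and table names are placeholders, and 'hive_sync.mode' = 'hms' assumes synchronization through the Hive Metastore.

    FlinkSQL:

    -- Enable Hive metadata synchronization on a Hudi sink table; path, URI, database, and table names are placeholders.
    create table hudi_table(
      id1 int,
      id2 int,
      name string,
      price double
    ) partitioned by (name) with (
      'connector' = 'hudi',
      'path' = 'hdfs:///tmp/hudi/hudi_table',                 -- placeholder storage path
      'hoodie.datasource.write.recordkey.field' = 'id1,id2',
      'write.precombine.field' = 'price',
      'hive_sync.enable' = 'true',                            -- synchronize Hudi metadata to Hive
      'hive_sync.mode' = 'hms',                               -- sync through the Hive Metastore
      'hive_sync.metastore.uris' = 'thrift://hms-host:9083',  -- placeholder metastore address
      'hive_sync.db' = 'default',                             -- placeholder Hive database
      'hive_sync.table' = 'hudi_table');                      -- placeholder Hive table name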