Updated on 2025-04-21 GMT+08:00

Hudi Data Table Compaction Specifications

MOR tables update data in the form of row-based logs, which must be merged by primary key at read time, so reading logs is far less efficient than reading Parquet files. To solve this problem, Hudi merges the logs into Parquet files through compaction, significantly improving read performance.

Rules

  • For tables with continuous data writing, perform compaction at least once every 24 hours.

    For MOR tables, whether data is written in streaming or batch mode, ensure that at least one compaction operation completes every day. If compaction is not performed for a long time, the Hudi table's log files keep growing, which causes the following problems:

    • Reads of the Hudi table become very slow and resource-intensive, because reading a MOR table merges logs on the fly: the larger the logs, the more resources the merge consumes and the slower it is.
    • When compaction finally runs, it must process the accumulated backlog, so it takes a long time, consumes a lot of resources, and can easily run into OOM.
    • Cleaning is blocked. If compaction does not produce new versions of Parquet files, old file versions cannot be cleaned, increasing storage pressure.
  • When submitting a Spark Jar job, configure a CPU-to-memory ratio of 1:4 to 1:8.

    Compaction jobs merge the data in existing Parquet files with the data in new logs, which consumes a lot of memory. Based on the preceding table design specifications and actual traffic fluctuations, you are advised to configure the compaction job's CPU-to-memory ratio between 1:4 and 1:8 to ensure stable operation; a submission sketch follows. If compaction encounters OOM issues, increasing the memory proportion resolves them.
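
    As a rough illustration only (not part of the specification), the following spark-submit sketch sizes each executor at a 1:4 ratio; the class name, JAR, and executor count are placeholders that depend on your job and deployment.

      # Hypothetical submission of an asynchronous compaction job.
      # Each executor gets 4 cores and 16 GB of memory, a 1:4 CPU-to-memory ratio;
      # move toward 1:8 (for example, 4 cores and 32 GB) if compaction hits OOM.
      spark-submit \
        --master yarn \
        --deploy-mode cluster \
        --num-executors 10 \
        --executor-cores 4 \
        --executor-memory 16G \
        --class com.example.HudiCompactionJob \
        hudi-maintenance-job.jar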

Recommendations

  • Improve compaction performance by increasing concurrency.

    A reasonable CPU-to-memory ratio keeps individual compaction tasks stable, but the overall runtime of a compaction job depends on the number of files processed in this compaction and the number of CPU cores (concurrency) allocated to the job. Therefore, increasing the CPU cores of the compaction job improves compaction performance; when adding cores, keep the CPU-to-memory ratio, as in the sketch below.
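
    For example (a sketch reusing the placeholder job above), doubling the number of executors doubles the cores working on the compaction while each executor keeps the 1:4 ratio:

      # Scale out from 10 to 20 executors: total cores grow from 40 to 80,
      # and each executor keeps 4 cores with 16 GB of memory (1:4).
      spark-submit \
        --master yarn \
        --deploy-mode cluster \
        --num-executors 20 \
        --executor-cores 4 \
        --executor-memory 16G \
        --class com.example.HudiCompactionJob \
        hudi-maintenance-job.jar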

  • Use asynchronous compaction for Hudi tables.

    To keep a streaming ingestion job stable, the job should not perform other tasks while ingesting data in real time, for example, performing compaction inside the Flink job that writes to Hudi. That approach may look attractive because a single job completes both ingestion and compaction, but compaction is very memory- and I/O-intensive and affects the streaming job as follows:

    • Increased end-to-end latency: Compaction is more time-consuming than writing, so running it inline amplifies write latency.
    • Unstable job: Compaction makes the ingestion job less stable, and an OOM during compaction directly causes the entire job to fail.
  • Perform compaction every 2 to 4 hours.

    Compaction is a crucial and necessary maintenance step for MOR tables. For real-time tasks, decouple the compaction merging process from the ingestion task by periodically scheduling a Spark task that performs asynchronous compaction. The key to this solution is choosing a reasonable period: if the period is too short, the Spark task may run idle with no plan to execute; if it is too long, compaction plans accumulate unexecuted, leading to long Spark task durations and high read latency for downstream tasks. Based on cluster resource usage, scheduling an asynchronous compaction job every 2 to 4 hours is a sound baseline maintenance plan for MOR tables; a crontab-style sketch follows.
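
    For instance, on a plain crontab (scheduling platforms such as DataArts offer equivalent periodic triggers), a 3-hour cadence could look like the following; the script path is a placeholder.

      # Hypothetical crontab entry: run the Spark SQL maintenance script at
      # 00:00, 03:00, 06:00, and so on, that is, once every 3 hours.
      0 */3 * * * spark-sql --master yarn -f /opt/jobs/hudi_table_maintenance.sql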

  • Perform compaction asynchronously using Spark instead of Flink.

    The recommended approach for Flink writing to Hudi is that Flink handles only data writing and compaction plan generation, while asynchronously submitted Spark SQL or Spark Jar jobs execute the compaction, clean, and archive tasks. Generating a compaction plan is lightweight and has minimal impact on the Flink writing job.

    The specific steps for implementing this plan are as follows:

    • Flink handles data writing and compaction planning only.

      Add the following parameters to the WITH clause of the Flink streaming task's table creation statement, or as SQL hints, so that the Flink task writing to Hudi only generates compaction plans.

      'compaction.async.enabled' = 'false'      // Disable compaction execution in Flink.
      'compaction.schedule.enabled' = 'true'    // Enable compaction plan generation.
      'compaction.delta_commits' = '5'          // By default, a MOR table attempts to generate a compaction plan every 5 delta commits (one per checkpoint); adjust this parameter based on service requirements.
      'clean.async.enabled' = 'false'           // Disable the clean operation.
      'hoodie.archive.automatic' = 'false'      // Disable the archive operation.
    • Execute compaction plans and perform clean and archive tasks offline with Spark.

      On the scheduling platform (for example, DataArts), run a scheduled offline task that lets Spark execute the Hudi table's compaction plans and perform the clean and archive tasks.

      For SQL jobs, add the following configurations:

      hoodie.archive.automatic = false;
      hoodie.clean.automatic = false;
      hoodie.compact.inline = true;
      hoodie.run.compact.only.inline = true;
      hoodie.cleaner.commits.retained = 500;  // Clean retains the data files of the latest 500 delta commits on the timeline; earlier versions are cleaned. This value must be greater than compaction.delta_commits and needs adjustment based on service requirements.
      hoodie.keep.max.commits = 700;  // The timeline retains a maximum of 700 commits.
      hoodie.keep.min.commits = 501;  // The timeline retains at least 501 commits. This value must be greater than hoodie.cleaner.commits.retained and needs adjustment based on service requirements.

      Then, keep the above configurations and schedule the following SQL in order:

      run compaction on <database name>.<table name>;    // Execute the compaction plan.
      run clean on <database name>.<table name>;         // Execute the clean operation.
      run archivelog on <database name>.<table name>;    // Execute the archive operation.
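
      Putting it together, a minimal sketch of the scheduled command (with a placeholder SQL file path) is shown below; depending on the deployment, the Hudi session extension may need to be enabled explicitly.

        # Hypothetical invocation: hudi_table_maintenance.sql contains the
        # configurations above followed by the three run statements, in order.
        spark-sql --master yarn \
          --conf spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension \
          -f /opt/jobs/hudi_table_maintenance.sql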
  • An asynchronous compaction job can process multiple tables serially. Group tables with similar resource configurations into one job, and size the job based on the table with the highest resource consumption.

    For the asynchronous compaction tasks described in Use asynchronous compaction for Hudi tables and Perform compaction asynchronously using Spark instead of Flink, here are some development suggestions:

    • You do not need to develop a separate asynchronous compaction task for each Hudi table; doing so would increase development costs.
    • A single Spark SQL job or Spark Jar task can perform compaction, clean, and archive for multiple tables, using the following configurations:
      hoodie.clean.async = true;
      hoodie.clean.automatic = false;
      hoodie.compact.inline = true;
      hoodie.run.compact.only.inline = true;
      hoodie.cleaner.commits.retained = 500;
      hoodie.keep.min.commits = 501;
      hoodie.keep.max.commits = 700;
      Schedule the following SQL in order:
      run compaction on <database name>.<table1>;
      run clean on <database name>.<table1>;
      run archivelog on <database name>.<table1>;
      run compaction on <database name>.<table2>;
      run clean on <database name>.<table2>;
      run archivelog on <database name>.<table2>;