Updated on 2022-12-14 GMT+08:00

Compaction and Cleaning Configurations

Parameter

Description

Default Value

hoodie.clean.automatic

Specifies whether to perform automatic cleanup.

true

hoodie.cleaner.policy

Specifies the cleaning policy to be used. Hudi deletes Parquet files of old versions to reclaim space. Any query or computation referring to a deleted file version will fail. You are advised to ensure that the data retention time exceeds the maximum query execution time.

KEEP_LATEST_COMMITS

hoodie.cleaner.commits.retained

Specifies the number of commits to retain. Data will be retained for num_of_commits * time_between_commits (scheduled). This also directly translates into how many commits can be incrementally pulled.

10
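As a rough illustration of the retention window implied by hoodie.cleaner.commits.retained, the sketch below uses a hypothetical 30-minute commit interval (the interval depends on your ingestion schedule and is not a Hudi setting):

```python
# Sketch: estimate the data retention window implied by
# hoodie.cleaner.commits.retained. The commit interval is an
# assumption for illustration only.
commits_retained = 10            # hoodie.cleaner.commits.retained (default)
commit_interval_minutes = 30     # hypothetical time between scheduled commits

retention_minutes = commits_retained * commit_interval_minutes
print(retention_minutes)         # minutes of history available to queries and incremental pulls
```

With these numbers, queries and incremental pulls can reach back roughly 300 minutes; a longer-running query needs a larger commits_retained.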

hoodie.keep.min.commits, hoodie.keep.max.commits

Each commit is a small file in the .hoodie directory. DFS typically does not support a large number of small files, so Hudi archives older commits into a sequential log. A commit is published atomically by renaming the commit file.

20

hoodie.commits.archival.batch

This parameter controls the number of commit instants read in memory as a batch and archived together.

10

hoodie.parquet.small.file.limit

The value must be smaller than that of maxFileSize; if maxFileSize is set to 0, this function is disabled. Small files arise when a partition receives a large number of insert records during batch processing. Hudi mitigates the small-file problem by masking inserts into such a partition as updates to existing small files. The value here is the threshold below which a file is considered a "small file".

104857600 bytes (100 MB)
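The threshold semantics can be sketched as a simple comparison (the helper function below is hypothetical, for illustration only):

```python
# Sketch: a file smaller than hoodie.parquet.small.file.limit is treated
# as a "small file" that Hudi may pad with new inserts masked as updates.
small_file_limit = 104857600     # default: 100 MB in bytes

def is_small_file(size_bytes: int) -> bool:
    """Hypothetical helper mirroring the threshold check."""
    return size_bytes < small_file_limit

print(is_small_file(50 * 1024 * 1024))   # 50 MB file -> True
print(is_small_file(120 * 1024 * 1024))  # 120 MB file -> False
```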

hoodie.copyonwrite.insert.split.size

Specifies the parallelism for inserting and writing data. It is the number of inserts grouped for a single partition. Writing out 100 MB files with records of at least 1 KB means about 100K records per file; the default overprovisions to 500K. To improve insert latency, adjust the value to match the number of records in a single file. Setting a smaller value shrinks the file size (especially when compactionSmallFileSize is set to 0).

500000
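The sizing logic behind this default can be checked with simple arithmetic (the 100 MB target and 1 KB record size are the figures from the description above):

```python
# Sketch of the sizing logic behind hoodie.copyonwrite.insert.split.size:
# with a 100 MB target file size and ~1 KB records, each file holds
# roughly 100K records; the default of 500000 overprovisions this.
target_file_bytes = 100 * 1024 * 1024   # 100 MB
record_size_bytes = 1024                # ~1 KB per record

records_per_file = target_file_bytes // record_size_bytes
print(records_per_file)                  # 102400, i.e. ~100K records per file
```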

hoodie.copyonwrite.insert.auto.split

Specifies whether Hudi dynamically computes insertSplitSize based on the metadata of the last 24 commits.

true

hoodie.copyonwrite.record.size.estimate

Specifies the average record size. If specified, Hudi uses this value instead of computing it dynamically from the metadata of the last 24 commits. This is critical in computing the insert parallelism and packing inserts into small files.

1024

hoodie.compact.inline

If this parameter is set to true, compaction is triggered by the ingestion itself right after a commit or delta commit action as part of insert, upsert, or bulk_insert.

false

hoodie.compact.inline.max.delta.commits

Specifies the maximum number of delta commits to accumulate before inline compaction is triggered.

5
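The two inline-compaction parameters above are typically supplied together as writer options. A minimal sketch, using only the option names from this table (the surrounding writer call, e.g. a Spark DataFrame writer, is assumed and omitted):

```python
# Sketch: compaction-related options as they might be passed to a Hudi
# writer. Only the keys and values shown here are from the table;
# how they are handed to the writer depends on your engine.
inline_compaction_options = {
    "hoodie.compact.inline": "true",                # compact right after each commit/delta commit
    "hoodie.compact.inline.max.delta.commits": "5", # compact once 5 delta commits accumulate
}
```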

hoodie.compaction.lazy.block.read

When CompactedLogScanner merges all log files, this parameter determines whether log blocks should be read lazily. Set it to true for I/O-intensive lazy block reading (low memory usage) or false for memory-intensive immediate block reading (high memory usage).

false

hoodie.compaction.reverse.log.read

HoodieLogFormatReader reads a log file in the forward direction, from pos=0 to pos=file_length. If this parameter is set to true, the reader reads the log file in the reverse direction, from pos=file_length to pos=0.

false

hoodie.cleaner.parallelism

Specifies the parallelism for cleaning. Increase this value if cleaning becomes slow.

200

hoodie.compaction.strategy

Determines which file groups are selected for compaction during each compaction run. By default, Hudi selects the log file with the most accumulated unmerged data.

org.apache.hudi.table.action.compact.strategy.LogFileSizeBasedCompactionStrategy

hoodie.compaction.target.io

Specifies the amount of I/O, in MB, to spend during a compaction run for LogFileSizeBasedCompactionStrategy. This parameter can limit ingestion latency when compaction runs in inline mode.

500 * 1024 MB
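Combining the two entries above, the strategy and its I/O budget could be supplied together; a sketch using the class name and option keys from this table (the writer call itself is omitted):

```python
# Sketch: selecting the compaction strategy and its I/O budget.
# The class name and option keys are taken from this table; the
# value 512000 is the table's default of 500 * 1024 MB.
strategy_options = {
    "hoodie.compaction.strategy":
        "org.apache.hudi.table.action.compact.strategy.LogFileSizeBasedCompactionStrategy",
    "hoodie.compaction.target.io": "512000",  # MB budget per compaction run
}
```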

hoodie.compaction.daybased.target

Used by org.apache.hudi.io.compact.strategy.DayBasedCompactionStrategy to denote the number of latest partitions to compact during a compaction run.

10

hoodie.compaction.payload.class

This class must be the same as the one used during insert or upsert. Like writing, compaction uses the record payload class to merge records in the log against each other, merge them again with the base file, and produce the final record to be written after compaction.

org.apache.hudi.common.model.OverwriteWithLatestAvroPayload

hoodie.schedule.compact.only.inline

Specifies whether to generate only a compaction plan during a write operation. This parameter is valid only when hoodie.compact.inline is set to true.

false

hoodie.run.compact.only.inline

Specifies whether to perform only the compaction operation when the run compaction command is executed using SQL. If no compaction plan exists, no action is performed.

false