COMPACTION
Function
This command converts the row-based log files of a Merge-On-Read (MOR) table into columnar Parquet data files, accelerating record queries.
Syntax
SCHEDULE COMPACTION ON tableIdentifier | tablelocation;
SHOW COMPACTION ON tableIdentifier | tablelocation;
RUN COMPACTION ON tableIdentifier | tablelocation [AT instant-time];
Parameter Description
| Parameter | Description |
|---|---|
| tableIdentifier | Name of the Hudi table whose log files are to be compacted. |
| tablelocation | Storage path of the Hudi table. |
| instant-time | Instant at which to run the compaction. Run the SHOW COMPACTION command to list available instant-time values. |
Example
schedule compaction on h1;
show compaction on h1;
run compaction on h1 at 20210915170758;
schedule compaction on 'obs://bucket/path/h1';
run compaction on 'obs://bucket/path/h1';
Caveats
- When triggering compaction through the API for Hudi tables created via SQL, set hoodie.payload.ordering.field to the value of preCombineField.
- When using the metadata service provided by DLI, this command does not support OBS paths.
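As a sketch of the first caveat, assuming a Hudi table h1 that was created with preCombineField set to ts (both the table name and the field name here are illustrative), the ordering field can be aligned before compaction is triggered:

```sql
-- Assumption: table h1 was created via SQL with preCombineField = 'ts'.
-- Align the payload ordering field with preCombineField before compaction.
set hoodie.payload.ordering.field = ts;
schedule compaction on h1;
run compaction on h1;
```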
System Response
Check whether the job status is successful, view the job result, and review the job logs to confirm whether any exceptions occurred.