Query Type

Updated on 2025-02-22 GMT+08:00

Snapshot Queries

Snapshot queries read the latest snapshot generated by a commit or compaction. For MOR tables, a snapshot query also merges the content of the latest delta log files at query time, providing near real-time data retrieval.

Incremental Queries

Incremental queries only retrieve data that has been added after a given commit/compaction.

Read Optimized Queries

Read optimized queries apply to MOR tables: they read only the latest snapshots generated by commit/compaction and exclude delta log files, trading data freshness for query speed.
Table 1 Trade-off between real-time queries and read optimized queries

Trade-off        Real-Time Queries                                               Read Optimized Queries
Data latency     Low                                                             High
Query latency    High for MOR tables (merges Parquet and delta log files)       Low (Parquet file reading performance)
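
All three query types share one switch in the Spark datasource API: the hoodie.datasource.query.type option, which the examples in the sections below set to snapshot, incremental, or read_optimized. A minimal orientation sketch (the object name and OBS path are placeholders, not from the original examples):

    import org.apache.spark.sql.SparkSession

    object HudiQueryTypeSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("HudiQueryTypeSketch").getOrCreate()
        // Switch between "snapshot" (the default), "incremental", and "read_optimized" here.
        spark.read.format("hudi")
          .option("hoodie.datasource.query.type", "snapshot")
          .load("obs://bucket/to_your_table") // Placeholder OBS path of the Hudi table.
          .show(10)
      }
    }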

COW Table Queries

  • Real-time view reading (using Spark SQL as an example): Directly query the Hudi table registered in the metadata service, where ${table_name} indicates the table name.
    select (fields or aggregate functions) from ${table_name};
  • Real-time view reading (using a Spark Jar job as an example):

    Spark Jar jobs can read Hudi tables in two ways: using the Spark datasource API or submitting SQL queries through SparkSession.

    Set the configuration item hoodie.datasource.query.type to snapshot (which is also the default value).

    import org.apache.spark.sql.SparkSession

    object HudiDemoScala {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .enableHiveSupport()
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
          .appName("HudiSnapshotReadDemo")
          .getOrCreate()

        // 1. Read the Hudi table using the Spark datasource API.
        val dataFrame = spark.read.format("hudi")
          .option("hoodie.datasource.query.type", "snapshot") // snapshot is the default value and can be omitted.
          .load("obs://bucket/to_your_table") // Path of the Hudi table to read. DLI supports only OBS paths.
        dataFrame.show(100)

        // 2. Read the Hudi table by submitting a SQL query through SparkSession, which requires
        // interconnection with the metadata service. ${table_name} is a placeholder for the table name.
        spark.sql("select * from ${table_name}").show(100)
      }
    }
  • Incremental view reading (using Spark SQL as an example):

    Start by setting the following configuration items:

    hoodie.${table_name}.consume.mode=INCREMENTAL
    hoodie.${table_name}.consume.start.timestamp=Start commit time
    hoodie.${table_name}.consume.end.timestamp=End commit time
    Then run the following SQL statement:
    select (fields or aggregate functions) from ${table_name} where `_hoodie_commit_time`>'Start commit time' and `_hoodie_commit_time`<='End commit time' // This filtering condition is mandatory.
  • Incremental view reading (using a Spark Jar job as an example):

    The hoodie.datasource.query.type configuration item must be set to incremental. (A variant that derives the end commit dynamically instead of hardcoding it is sketched after this list.)

    import org.apache.spark.sql.SparkSession

    object HudiDemoScala {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .enableHiveSupport()
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
          .appName("HudiIncrementalReadDemo")
          .getOrCreate()

        val startTime = "20240531000000"
        val endTime = "20240531321456"
        spark.read.format("hudi")
          .option("hoodie.datasource.query.type", "incremental") // Specify the query type as incremental.
          .option("hoodie.datasource.read.begin.instanttime", startTime) // Start commit for the incremental pull.
          .option("hoodie.datasource.read.end.instanttime", endTime) // End commit for the incremental pull.
          .load("obs://bucket/to_your_table") // Path of the Hudi table to read.
          .createTempView("hudi_incremental_temp_view") // Register the result as a temporary Spark view.
        // The results must still be filtered by startTime and endTime. If endTime is not specified,
        // filter by startTime only.
        spark.sql("select * from hudi_incremental_temp_view where `_hoodie_commit_time`>'20240531000000' and `_hoodie_commit_time`<='20240531321456'")
          .show(100, false)
      }
    }
  • Read optimized queries: For COW tables, read optimized queries are equivalent to snapshot queries.
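
One way to avoid hardcoding the end commit in the incremental example above is to derive it from a snapshot read. The sketch below is an illustration, not an official pattern: the object name, the checkpoint value, and the use of max(_hoodie_commit_time) to find the latest commit are assumptions layered on the options already shown in this section.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.max

    object HudiIncrementalPullSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .enableHiveSupport()
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
          .appName("HudiIncrementalPullSketch")
          .getOrCreate()

        val tablePath = "obs://bucket/to_your_table" // Placeholder path, as in the examples above.

        // Derive the latest commit time from a snapshot read instead of hardcoding it.
        val latestCommit = spark.read.format("hudi")
          .option("hoodie.datasource.query.type", "snapshot")
          .load(tablePath)
          .agg(max("_hoodie_commit_time"))
          .first()
          .getString(0)

        // A real job would persist this checkpoint between runs; it is hardcoded here for illustration.
        val lastCheckpoint = "20240531000000"

        // Pull only the commits written after the checkpoint, up to the latest commit.
        spark.read.format("hudi")
          .option("hoodie.datasource.query.type", "incremental")
          .option("hoodie.datasource.read.begin.instanttime", lastCheckpoint)
          .option("hoodie.datasource.read.end.instanttime", latestCommit)
          .load(tablePath)
          .show(100, false)
      }
    }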

MOR Table Queries

When a Spark SQL job uses the metadata service, or when HMS synchronization parameters are configured, creating an MOR table also creates two additional tables: ${table_name}_rt and ${table_name}_ro. The table with the _rt suffix serves real-time queries, while the table with the _ro suffix serves read optimized queries. For example, if you create a Hudi table named ${table_name} using Spark SQL and synchronize it to the metadata service, the tables ${table_name}_rt and ${table_name}_ro are created in the same database.

  • Real-time view reading (using Spark SQL as an example): Directly read the Hudi table with the _rt suffix in the same database.
    select count(*) from ${table_name}_rt;
  • Real-time view reading (using a Spark Jar job as an example): Same as for COW tables; see the corresponding operation in COW Table Queries.
  • Incremental view reading (using a Spark SQL job as an example): Same as for COW tables; see the corresponding operation in COW Table Queries.
  • Incremental view reading (using a Spark Jar job as an example): Same as for COW tables; see the corresponding operation in COW Table Queries.
  • Read optimized view reading (using a Spark Jar job as an example):

    The hoodie.datasource.query.type configuration item must be set to read_optimized.

    import org.apache.spark.sql.SparkSession

    object HudiDemoScala {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .enableHiveSupport()
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
          .appName("HudiReadOptimizedDemo")
          .getOrCreate()
        spark.read.format("hudi")
          .option("hoodie.datasource.query.type", "read_optimized") // Specify the query type as read optimized.
          .load("obs://bucket/to_your_table") // Path of the Hudi table to read.
          .createTempView("hudi_read_optimized_temp_view") // Register the result as a temporary Spark view.
        spark.sql("select * from hudi_read_optimized_temp_view").show(100)
      }
    }
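
The list above shows the read optimized view only for a Spark Jar job. Assuming the ${table_name}_ro table described at the start of this section has been synchronized to the metadata service, a read optimized query in Spark SQL should reduce to querying the table with the _ro suffix, for example:

    select (fields or aggregate functions) from ${table_name}_ro;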
