Why Can't I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?
Question
Why can't I query newly inserted data in a Parquet Hive table using SparkSQL? This problem occurs in the following scenarios:
- For both partitioned and non-partitioned tables, after data is inserted from the Hive client, the newly inserted data cannot be queried using SparkSQL.
- After data is inserted into a partitioned table using SparkSQL, if the partition information remains unchanged, the newly inserted data cannot be queried using SparkSQL.
Answer
To improve performance, Spark caches Parquet metadata. When the Parquet table is updated by Hive or by other means, the cached metadata is not refreshed, so SparkSQL cannot see the newly inserted data.
For a partitioned Parquet Hive table, if the partition information remains unchanged after data is inserted, the cached metadata is not updated either. As a result, the newly inserted data cannot be queried using SparkSQL.
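This behavior can be reproduced with a Hive-enabled Spark session. The following is a minimal sketch in PySpark; the table name demo_parquet and the session setup are illustrative assumptions, not part of the original scenario.

# A minimal sketch, assuming Spark is configured with Hive support and a
# Parquet Hive table named demo_parquet (hypothetical) already exists.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("parquet-metadata-cache-demo")
         .enableHiveSupport()
         .getOrCreate())

# The first query populates Spark's Parquet metadata cache for the table.
spark.sql("SELECT COUNT(*) FROM demo_parquet").show()

# If rows are now inserted from the Hive client (outside this session),
# rerunning the same query may still return the old count, because the
# cached metadata has not been refreshed.
spark.sql("SELECT COUNT(*) FROM demo_parquet").show()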
To solve this problem, refresh the table metadata before running the SparkSQL query:
REFRESH TABLE table_name;
table_name indicates the name of the table whose metadata is to be refreshed. The table must exist; otherwise, an error is reported.
After the refresh, the query returns the newly inserted data.
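For example, assuming the same hypothetical table demo_parquet as above, the refresh can be issued either as a SQL statement or through the Catalog API before the query is rerun:

# A minimal sketch, assuming a Hive-enabled session and the hypothetical
# table demo_parquet from the example above.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Refresh the cached metadata; the SQL statement and the Catalog API
# call are equivalent.
spark.sql("REFRESH TABLE demo_parquet")
# or: spark.catalog.refreshTable("demo_parquet")

# The query now returns the newly inserted rows.
spark.sql("SELECT COUNT(*) FROM demo_parquet").show()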
For details, visit https://spark.apache.org/docs/3.1.1/sql-programming-guide.html#metadata-refreshing.