
An SQL Error Is Reported When the Number of MetaStore Dynamic Partitions Exceeds the Threshold

Updated on 2024-12-18 GMT+08:00

Symptom

When a SparkSQL or HiveSQL statement is executed, the following error message is displayed:

Number of dynamic partitions created is 2001, which is more than 2000. To solve this try to set hive.exec.max.dynamic.partitions to at least 2001

Cause Analysis

By default, Hive limits the number of dynamic partitions that a single statement can create. This limit is controlled by the hive.exec.max.dynamic.partitions parameter, whose default value is 1000. If a statement attempts to create more dynamic partitions than this threshold, Hive rejects it and reports the error above instead of creating the partitions.
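
For reference, a statement of the following shape can trigger this error. The table and column names here are hypothetical; the error occurs when the partition column (dt in this sketch) has more distinct values in the source data than hive.exec.max.dynamic.partitions allows:

```sql
-- Enable dynamic partitioning for the session (hypothetical example).
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- If sales_staging contains more than 1000 distinct dt values,
-- this INSERT fails with the error shown in the Symptom section.
INSERT OVERWRITE TABLE sales PARTITION (dt)
SELECT id, amount, dt FROM sales_staging;
```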

Procedure

  • Adjust upper-layer services so that the number of dynamic partitions created by a single statement does not exceed the value of hive.exec.max.dynamic.partitions.
  • Run the set hive.exec.max.dynamic.partitions = XXX; command to increase the value of hive.exec.max.dynamic.partitions.

    For SparkSQL, set the spark.hadoop.hive.exec.max.dynamic.partitions parameter instead.
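
As a sketch, the session-level settings might look as follows. The value 2001 is taken from the error message in the Symptom section; substitute the count reported in your own error:

```sql
-- Hive (e.g. in a Beeline session): raise the limit before rerunning the INSERT.
SET hive.exec.max.dynamic.partitions=2001;

-- SparkSQL equivalent (set in the Spark session or via --conf on spark-sql):
SET spark.hadoop.hive.exec.max.dynamic.partitions=2001;
```

These settings apply only to the current session; to make the change permanent, configure the parameter in the Hive service configuration instead.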
