Updated on 2025-12-01 GMT+08:00

Spark Is Unavailable Due to Insufficient Node Disk Capacity

Symptom

The user-created cluster has insufficient disk capacity. As a result, an alarm is generated indicating that the Spark, Hive, and YARN services are unavailable.

Cause Analysis

Insufficient cluster disk capacity prevents HDFS from writing data. When HDFS disk usage exceeds the threshold, HDFS becomes abnormal, and the Spark, Hive, and YARN services that depend on it become unavailable.

The service unavailability alarm was generated while the cluster disk capacity was insufficient and was cleared after the disk capacity was expanded. Therefore, it can be determined that the HDFS fault was caused by insufficient disk capacity.
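To confirm this diagnosis before expanding capacity, you can inspect disk usage directly on the cluster nodes. The sketch below uses the standard `df` utility and, on nodes where an HDFS client is configured, the standard `hdfs dfsadmin -report` command; the ~90% threshold mentioned in the comment is an assumption for illustration, as the actual alarm threshold is configurable.

```shell
# Check local disk usage on each node; partitions whose usage is near
# the alarm threshold (often around 90%, but configurable) are the
# candidates for expansion or cleanup.
df -h

# On a node with an HDFS client configured, summarize DataNode capacity
# and usage (cluster-only; commented out because it fails without HDFS):
# hdfs dfsadmin -report | grep -E 'Configured Capacity|DFS Used%'
```

If `DFS Used%` is above the threshold reported by the alarm, the HDFS abnormality is consistent with insufficient disk capacity.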

Procedure

For details about how to clear the alarm triggered by insufficient disk capacity, see ALM-12017 Insufficient Disk Capacity.

Related Information

For details about how to solve the problem that HDFS disk usage exceeds the threshold, see ALM-14001 HDFS Disk Usage Exceeds the Threshold.