ALM-14002 DataNode Disk Usage Exceeds the Threshold
Alarm Description
The system checks the DataNode disk usage every 30 seconds and compares the actual disk usage with the threshold. A default threshold range is provided for the DataNode disk usage. This alarm is generated when the DataNode disk usage exceeds the threshold.
You can choose O&M > Alarm > Thresholds > HDFS and change the threshold.
If Trigger Count is 1, this alarm is cleared when the DataNode disk usage is less than or equal to the threshold. If Trigger Count is greater than 1, this alarm is cleared when the DataNode disk usage is less than or equal to 80% of the threshold.
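The clearance rule above can be sketched as a small shell check. The threshold and trigger count below are hypothetical examples for illustration only; the actual values come from O&M > Alarm > Thresholds on FusionInsight Manager:

```shell
# Hypothetical settings for illustration; real values are configured
# under O&M > Alarm > Thresholds on FusionInsight Manager.
THRESHOLD=80      # DataNode disk usage threshold (%)
TRIGGER_COUNT=3   # example Trigger Count greater than 1

# Returns success (0) when the alarm would be cleared at the given usage.
alarm_cleared() {
  local usage=$1
  if [ "$TRIGGER_COUNT" -eq 1 ]; then
    # Trigger Count 1: cleared once usage is at or below the threshold
    [ "$usage" -le "$THRESHOLD" ]
  else
    # Trigger Count > 1: cleared only at or below 80% of the threshold
    [ "$usage" -le $(( THRESHOLD * 80 / 100 )) ]
  fi
}

alarm_cleared 70 && echo "cleared" || echo "still raised"
```

With the example values, usage must fall to 64% (80% of the 80% threshold) before the alarm clears, so a reading of 70% leaves the alarm raised.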
Alarm Attributes
Alarm ID | Alarm Severity | Auto Cleared
---|---|---
14002 | Major | Yes
Alarm Parameters
Parameter | Description
---|---
Source | Specifies the cluster for which the alarm was generated.
ServiceName | Specifies the service for which the alarm was generated.
RoleName | Specifies the role for which the alarm was generated.
HostName | Specifies the host for which the alarm was generated.
Trigger Condition | Specifies the threshold for triggering the alarm.
Impact on the System
Insufficient disk space will prevent data from being written to HDFS.
Possible Causes
- The disk space configured for the HDFS cluster is insufficient.
- Data skew occurs among DataNodes.
Handling Procedure
Check whether the cluster disk capacity is insufficient.
- On FusionInsight Manager, choose O&M > Alarm > Alarms, and check whether the ALM-14001 HDFS Disk Usage Exceeds the Threshold alarm exists.
- Handle the alarm by following the instructions in ALM-14001 HDFS Disk Usage Exceeds the Threshold and check whether the alarm is cleared.
- Choose O&M > Alarm > Alarms and check whether the alarm is cleared.
- If yes, no further action is required.
- If no, go to Step 4.
Check the balance status of DataNodes.
- On FusionInsight Manager, choose Hosts. Check whether the number of DataNodes on each rack is almost the same. If the difference is large, adjust the racks to which DataNodes belong to ensure that the number of DataNodes on each rack is almost the same. Restart the HDFS service for the settings to take effect.
The service is unavailable during the restart, and upper-layer services that depend on the service will also be affected.
- Choose Cluster > Services > HDFS.
- In the Basic Information area, click NameNode(Active). The HDFS web UI is displayed.
By default, the admin user does not have the permissions to manage other components. If the page cannot be opened or the displayed content is incomplete when you access the native UI of a component due to insufficient permissions, you can manually create a user with the permissions to manage that component. For details, see Creating an HDFS Role.
- In the Summary area of the HDFS web UI, check whether the value of Max exceeds that of Median by more than 10% in DataNodes usages.
- Balance skewed data in the cluster.
Log in to the node where the MRS cluster client is installed as user root.
- If Kerberos authentication is not enabled for the MRS cluster, run the following command to switch to user omm:
su - omm
- Run the following command to go to the client installation directory and load environment variables:
cd Client installation directory
source bigdata_env
- If Kerberos authentication is enabled for the cluster, complete security authentication:
kinit Username of the service user who has the HDFS operation permissions
Enter the required password.
- Run the following command to balance data distribution:
hdfs balancer -threshold 10
-threshold <percentage>: specifies the balancing threshold (10% by default).
When the difference between the node storage usage and the average cluster storage usage exceeds the threshold, data migration is triggered.
- Wait several minutes and check whether the alarm is cleared.
- If yes, no further action is required.
- If no, go to Step 11.
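The -threshold option used above can be read as follows: a DataNode participates in balancing when its disk usage deviates from the cluster-average usage by more than the threshold. A minimal sketch of that condition, with hypothetical usage figures:

```shell
# Success (0) when a node's usage deviates from the cluster average by
# more than the balancer threshold, i.e. the node would be rebalanced.
needs_rebalance() {
  local node_usage=$1 avg_usage=$2 threshold=$3
  local diff=$(( node_usage - avg_usage ))
  [ "${diff#-}" -gt "$threshold" ]   # compare the absolute difference
}

# Cluster average 50%, threshold 10: a node at 65% deviates by 15 points,
# so the balancer would migrate blocks off it.
needs_rebalance 65 50 10 && echo "migrate blocks" || echo "within threshold"
```

The same check applies symmetrically to under-utilized nodes (for example, 35% against a 50% average), which receive blocks rather than shed them.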
Collect the fault information.
- On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
- Expand the drop-down list next to the Service field. In the Services dialog box that is displayed, select HDFS for the target cluster.
- Click the icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
- Contact O&M personnel and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None