
ALM-14009 Number of Dead DataNodes Exceeds the Threshold

Description

The system checks the number of DataNodes in the Dead state in the HDFS cluster every 30 seconds and compares it with the threshold. This alarm is generated when the number of dead DataNodes exceeds the threshold.

You can change the threshold in O&M > Alarm > Thresholds > Name of the desired cluster > HDFS.

When the Trigger Count is 1, this alarm is cleared when the number of Dead DataNodes is less than or equal to the threshold. When the Trigger Count is greater than 1, this alarm is cleared when the number of Dead DataNodes is less than or equal to 90% of the threshold.
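
For reference, the dead-DataNode count that this check evaluates is also exposed by the NameNode as the NumDeadDataNodes metric of its FSNamesystemState JMX bean. The following Python sketch reads it over HTTP; the host name and port are placeholders (9870 is the open-source Hadoop 3.x default NameNode web port, and an MRS/FusionInsight cluster may use a different port, HTTPS, or authentication).

```python
# Minimal sketch: count dead DataNodes via the NameNode JMX endpoint.
import json
import urllib.request

NAMENODE_HTTP = "http://namenode-host:9870"  # placeholder; adjust host, port, scheme

def dead_datanode_count():
    url = NAMENODE_HTTP + "/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"
    with urllib.request.urlopen(url, timeout=10) as resp:
        beans = json.load(resp)["beans"]
    # NumDeadDataNodes is reported by the FSNamesystemState MBean.
    return beans[0]["NumDeadDataNodes"]

if __name__ == "__main__":
    print("Dead DataNodes:", dead_datanode_count())
```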

Attribute

  • Alarm ID: 14009
  • Alarm Severity: Major
  • Automatically Cleared: Yes

Parameters

  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.
  • NameServiceName: Specifies the NameService for which the alarm is generated.
  • Trigger Condition: Specifies the threshold for triggering the alarm. If the current indicator value exceeds this threshold, the alarm is generated.

Impact on the System

DataNodes that are in the Dead state cannot provide HDFS services.

Possible Causes

  • DataNodes are faulty or overloaded.
  • The network between the NameNode and the DataNode is disconnected or busy.
  • NameNodes are overloaded.
  • The NameNodes were not restarted after a DataNode was deleted.

Procedure

Check whether DataNodes are faulty.

  1. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS. The HDFS Status page is displayed.
  2. In the Basic Information area, click NameNode(Active) to go to the HDFS WebUI.

    By default, the admin user does not have the permissions to manage other components. If the page cannot be opened or the displayed content is incomplete when you access the native UI of a component due to insufficient permissions, you can manually create a user with the permissions to manage that component.

  3. On the HDFS WebUI, click the Datanodes tab. In the In operation area, click Filter to check whether down is in the drop-down list. (You can also list dead DataNodes from the command line; see the sketch after step 7.)

    • If yes, select down, record the information about the filtered DataNodes, and go to 4.
    • If no, go to 8.

  4. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS > Instance to check whether recorded DataNodes exist in the instance list.

    • If all recorded DataNodes exist, go to 5.
    • If none of the recorded DataNodes exists, go to 6.
    • If some of the recorded DataNodes exist, go to 7.

  5. Locate the DataNode instance, click More > Restart Instance to restart it and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 8.

  6. Select all NameNode instances, choose More > Instance Rolling Restart to restart them and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 16.

  7. Select all NameNode instances, choose More > Instance Rolling Restart to restart them. Locate the DataNode instance, click More > Restart Instance to restart it and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 8.
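
As an alternative to filtering for down nodes on the Datanodes tab in step 3, dead DataNodes can be listed from any node that has an HDFS client installed by running hdfs dfsadmin -report -dead. The Python sketch below simply wraps that command; it assumes an HDFS administrator user has already been authenticated (for example with kinit in a security-enabled cluster), and the exact report format may vary slightly between Hadoop versions.

```python
# Minimal sketch: list dead DataNodes on the command line via dfsadmin.
import subprocess

def dead_datanodes():
    # "hdfs dfsadmin -report -dead" prints one report block per dead DataNode.
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report", "-dead"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each block contains a line such as "Hostname: <host>" (format assumption).
    return [line.split(":", 1)[1].strip()
            for line in out.splitlines() if line.startswith("Hostname:")]

if __name__ == "__main__":
    for host in dead_datanodes():
        print(host)
```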

Check the status of the network between the NameNode and the DataNode.

  8. Log in to the faulty DataNode as user root, and run the ping IP address of the NameNode command to check whether the network between the DataNode and the NameNode is abnormal. (A complementary TCP reachability check is sketched after step 9.)

    On the FusionInsight Manager page, choose Cluster > Name of the desired cluster > Services > HDFS > Instance. In the instance list, view the service plane IP address of the faulty DataNode.

    • If yes, go to 9.
    • If no, go to 10.

  9. Rectify the network fault, and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 10.
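
Besides the ping test in step 8, it can help to confirm that the NameNode RPC port is reachable from the faulty DataNode, because ICMP and TCP may behave differently on a busy or filtered network. The sketch below is a minimal TCP check; the host name is a placeholder and the port is an assumption (8020 is the open-source default for dfs.namenode.rpc-address, and FusionInsight clusters typically use a site-specific port).

```python
# Minimal sketch: check TCP reachability of the NameNode RPC port.
import socket

NAMENODE_HOST = "namenode-host"   # placeholder
NAMENODE_RPC_PORT = 8020          # assumption; check dfs.namenode.rpc-address

def namenode_reachable(timeout=5):
    try:
        with socket.create_connection((NAMENODE_HOST, NAMENODE_RPC_PORT), timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("NameNode RPC reachable:", namenode_reachable())
```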

Check whether the DataNode is overloaded.

  10. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms and check whether the alarm ALM-14008 HDFS DataNode Memory Usage Exceeds the Threshold exists. (A sketch for reading the DataNode heap usage directly follows step 12.)

    • If yes, go to 11.
    • If no, go to 13.

  11. See ALM-14008 HDFS DataNode Memory Usage Exceeds the Threshold to handle that alarm, and check whether it is cleared.

    • If yes, go to 12.
    • If no, go to 13.

  12. Check whether this alarm (ALM-14009) is cleared from the alarm list.

    • If yes, no further action is required.
    • If no, go to 13.
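
In addition to checking for ALM-14008 in steps 10 to 12, the DataNode's JVM heap usage can be read directly from its JvmMetrics JMX bean for a quick numeric view of the load. The address below is a placeholder: 9864 is the open-source Hadoop 3.x default DataNode web port, and an MRS cluster may use a different port or HTTPS.

```python
# Minimal sketch: read DataNode JVM heap usage over JMX.
import json
import urllib.request

DATANODE_HTTP = "http://datanode-host:9864"  # placeholder address

def datanode_heap_usage():
    url = DATANODE_HTTP + "/jmx?qry=Hadoop:service=DataNode,name=JvmMetrics"
    with urllib.request.urlopen(url, timeout=10) as resp:
        jvm = json.load(resp)["beans"][0]
    # MemHeapUsedM / MemHeapMaxM are reported in megabytes.
    return jvm["MemHeapUsedM"], jvm["MemHeapMaxM"]

if __name__ == "__main__":
    used, limit = datanode_heap_usage()
    print(f"DataNode heap: {used:.0f} MB used of {limit:.0f} MB")
```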

Check whether the NameNode is overloaded.

  13. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms and check whether the alarm ALM-14007 HDFS NameNode Memory Usage Exceeds the Threshold exists. (A sketch for reading the NameNode heap and namespace size follows step 15.)

    • If yes, go to 14.
    • If no, go to 16.

  14. See ALM-14007 HDFS NameNode Memory Usage Exceeds the Threshold to handle that alarm, and check whether it is cleared.

    • If yes, go to 15.
    • If no, go to 16.

  15. Check whether this alarm (ALM-14009) is cleared from the alarm list.

    • If yes, no further action is required.
    • If no, go to 16.
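
Similarly, a few NameNode load indicators (JVM heap usage plus the number of files and blocks it tracks) can be read from its JMX endpoint to complement the ALM-14007 check in steps 13 to 15. The address is a placeholder, as in the earlier sketches.

```python
# Minimal sketch: read NameNode heap usage and namespace size over JMX.
import json
import urllib.request

NAMENODE_HTTP = "http://namenode-host:9870"  # placeholder address

def jmx_bean(name):
    with urllib.request.urlopen(f"{NAMENODE_HTTP}/jmx?qry={name}", timeout=10) as resp:
        return json.load(resp)["beans"][0]

if __name__ == "__main__":
    jvm = jmx_bean("Hadoop:service=NameNode,name=JvmMetrics")
    fs = jmx_bean("Hadoop:service=NameNode,name=FSNamesystem")
    print(f"NameNode heap: {jvm['MemHeapUsedM']:.0f} MB / {jvm['MemHeapMaxM']:.0f} MB")
    print(f"Files: {fs['FilesTotal']}, Blocks: {fs['BlocksTotal']}")
```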

Collect fault information.

  16. On the FusionInsight Manager portal, choose O&M > Log > Download.
  17. Select HDFS in the required cluster from the Service list.
  18. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively, and then click Download.
  19. Contact the O&M personnel and send the collected logs.

Alarm Clearing

After the fault is rectified, the system automatically clears this alarm.

Related Information

None