Updated on 2024-11-29 GMT+08:00

ALM-14000 HDFS Service Unavailable

Alarm Description

The system checks the NameService service status every 60 seconds. This alarm is generated when all the NameService services are abnormal and the system considers that the HDFS service is unavailable.

This alarm is cleared when at least one NameService service is normal and the system considers that the HDFS service recovers.
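
Before walking through the handling procedure, you can reproduce the health check manually with a small read/write probe from an HDFS client node. This is a minimal sketch: the client path /opt/client is an assumption (use your cluster's actual client directory), and a security-enabled cluster needs kinit first.

    # Load the HDFS client environment (the install path is an assumption).
    source /opt/client/bigdata_env

    # Write, read back, and remove a small probe file; failures here match
    # the condition that raises this alarm.
    hdfs dfs -mkdir -p /tmp/alm14000_check
    hdfs dfs -put -f /etc/hosts /tmp/alm14000_check/probe
    hdfs dfs -cat /tmp/alm14000_check/probe > /dev/null
    hdfs dfs -rm -r -skipTrash /tmp/alm14000_check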

Alarm Attributes

  • Alarm ID: 14000
  • Alarm Severity: Critical
  • Alarm Type: Quality of service
  • Service Type: HDFS
  • Auto Cleared: Yes

Alarm Parameters

Location Information parameters:

  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Impact on the System

HDFS cannot provide services for upper-layer components that depend on it, such as HBase and MapReduce. As a result, users cannot read or write files.

Possible Causes

  • The ZooKeeper service is abnormal.
  • All NameService services are abnormal.
  • The number of service requests is excessively large, so the HDFS health check cannot read or write files.
  • A FullGC on the HDFS NameNode causes the health check to fail.

Handling Procedure

Check the ZooKeeper service status.

  1. On FusionInsight Manager, choose O&M > Alarm > Alarms. On the Alarms page, check whether ALM-13000 ZooKeeper Service Unavailable is reported.

    • If yes, go to 2.
    • If no, go to 4.

  2. See ALM-13000 ZooKeeper Service Unavailable to rectify the ZooKeeper fault, and then check whether Running Status of the ZooKeeper service is restored to Normal (see the command-line probe after step 3).

    • If yes, go to 3.
    • If no, go to 13.

  3. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 4.
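
If ZooKeeper remains suspect, each server can be probed directly with the ruok and stat four-letter commands. This is a sketch under assumptions: zk-node-1 is a placeholder hostname, 2181 is the default client port, and on ZooKeeper 3.5 or later these commands must be enabled through the 4lw.commands.whitelist property.

    # A healthy server answers "imok"; stat also reports its leader/follower mode.
    echo ruok | nc zk-node-1 2181
    echo stat | nc zk-node-1 2181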

Handle the NameService service exception alarm.

  4. On FusionInsight Manager, choose O&M > Alarm > Alarms. On the Alarms page, check whether ALM-14010 NameService Service Unavailable is reported.

    • If yes, go to 5.
    • If no, go to 7.

  5. See ALM-14010 NameService Service Unavailable to handle the abnormal NameService services, and then check whether each NameService service exception alarm is cleared (see the HA-state sketch after step 6).

    • If yes, go to 6.
    • If no, go to 13.

  6. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 7.
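
To see which NameService instance is unhealthy, you can query the HA state of each NameNode from a client node. The nameservice ID hacluster and the NameNode IDs nn1 and nn2 below are assumptions; read the real values from the configuration first.

    # List the configured NameServices and the NameNode IDs of one of them.
    hdfs getconf -confKey dfs.nameservices
    hdfs getconf -confKey dfs.ha.namenodes.hacluster

    # A healthy NameService reports exactly one "active" NameNode.
    hdfs haadmin -ns hacluster -getServiceState nn1
    hdfs haadmin -ns hacluster -getServiceState nn2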

Check whether the HDFS health check fails to read or write files due to a large number of service requests.

  7. On FusionInsight Manager, choose O&M > Alarm > Alarms, and check whether ALM-14021 NameNode Average RPC Processing Time Exceeds the Threshold or ALM-14022 NameNode Average RPC Queuing Time Exceeds the Threshold is generated.

    • If yes, go to 8.
    • If no, go to 10.

  8. Rectify the abnormal NameServices by following the handling methods of ALM-14021 NameNode Average RPC Processing Time Exceeds the Threshold and ALM-14022 NameNode Average RPC Queuing Time Exceeds the Threshold, and then check whether the alarms are cleared (see the JMX sketch after step 9).

    • If yes, go to 9.
    • If no, go to 13.

  9. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 10.
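
The two RPC alarms track averages that the NameNode also exposes through its JMX servlet, so current RPC load can be inspected directly. The host and port below are assumptions: 9870 is the open-source Hadoop 3 default HTTP port, and your cluster may use a different port, HTTPS, or authentication.

    # Average RPC processing and queuing times (ms) for the NameNode's RPC ports.
    curl -s 'http://nn-host:9870/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort*' \
      | grep -E 'RpcProcessingTimeAvgTime|RpcQueueTimeAvgTime'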

Check whether the health check fails due to HDFS FullGC.

  10. On FusionInsight Manager, choose O&M > Alarm > Alarms. On the Alarms page, check whether ALM-14014 NameNode GC Time Exceeds the Threshold is reported.

    • If yes, go to 11.
    • If no, go to 13.

  11. See ALM-14014 NameNode GC Time Exceeds the Threshold to rectify the NameNode GC fault, and then check whether that alarm is cleared (see the GC-monitoring sketch after step 12).

    • If yes, go to 12.
    • If no, go to 13.

  12. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 13.
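
To confirm FullGC on the NameNode itself, watch its JVM with jstat on the active NameNode host. jstat ships with the JDK; how the process is located below is an assumption about your deployment.

    # Locate the NameNode JVM (assumes jps is on the PATH for the service user).
    NN_PID=$(jps | grep -w NameNode | awk '{print $1}')

    # Sample heap and GC counters every 5 seconds: a climbing FGC/FGCT with the
    # old generation (O) near 100% indicates FullGC is stalling the health check.
    jstat -gcutil "$NN_PID" 5000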

Collect fault information.

  13. On FusionInsight Manager, choose O&M > Log > Download.
  14. Select the following services in the required cluster from Service:

    • ZooKeeper
    • HDFS

  15. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
  16. Contact O&M engineers and provide the collected logs.

Alarm Clearance

After the fault is rectified, the system automatically clears this alarm.

Related Information

None.