ALM-14000 HDFS Service Unavailable
Description
The system checks the status of each NameService service every 60 seconds. This alarm is generated when all NameService services are abnormal, and the system then considers the HDFS service unavailable.
This alarm is cleared when at least one NameService service becomes normal again, and the system then considers the HDFS service recovered.
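The internal health check itself is not exposed here, but a rough external equivalent is to probe the file system periodically from a node with an HDFS client. The following is a minimal sketch only, assuming the `hdfs` client command is on the PATH and the user is already authenticated against the cluster (for example, via kinit on a secure cluster); it is not the NameService health check performed by the system.

```python
#!/usr/bin/env python3
"""Rough external probe of HDFS availability (illustration only).

Assumes the 'hdfs' client command is installed and the current user is
already authenticated against the cluster (e.g. via kinit). This is NOT
the internal NameService health check, just a comparable external test.
"""
import subprocess
import time

PROBE_INTERVAL_SECONDS = 60  # mirrors the 60-second check described above

def hdfs_is_responding(timeout=30):
    """Return True if a simple metadata operation against HDFS succeeds."""
    try:
        result = subprocess.run(
            ["hdfs", "dfs", "-ls", "/"],
            capture_output=True, timeout=timeout,
        )
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

if __name__ == "__main__":
    while True:
        status = "available" if hdfs_is_responding() else "NOT available"
        print(f"HDFS appears {status}")
        time.sleep(PROBE_INTERVAL_SECONDS)
```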
Attribute
Alarm ID | Alarm Severity | Automatically Cleared
---|---|---
14000 | Critical | Yes
Parameters
Name | Meaning
---|---
Source | Specifies the cluster for which the alarm is generated.
ServiceName | Specifies the service for which the alarm is generated.
RoleName | Specifies the role for which the alarm is generated.
HostName | Specifies the host for which the alarm is generated.
Impact on the System
HDFS fails to provide services for upper-layer components that depend on it, such as HBase and MapReduce. As a result, users cannot read or write files.
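To confirm the impact from a client's point of view, you can attempt a small write and read against HDFS. The sketch below is illustrative only; it assumes a configured `hdfs` client and write permission on a test path (the `/tmp` path used here is a placeholder and should be adjusted for your deployment).

```python
#!/usr/bin/env python3
"""Verify whether HDFS can currently serve a basic write/read (sketch only).

Assumes a configured 'hdfs' client and write access to /tmp in HDFS;
the test path below is a placeholder, adjust it for your deployment.
"""
import subprocess
import tempfile
import time

HDFS_TEST_PATH = f"/tmp/alm14000_check_{int(time.time())}"  # placeholder test path

def run(cmd):
    """Run a command and report success/failure only."""
    return subprocess.run(cmd, capture_output=True, text=True).returncode == 0

with tempfile.NamedTemporaryFile("w", suffix=".txt") as local:
    local.write("hdfs availability check\n")
    local.flush()
    wrote = run(["hdfs", "dfs", "-put", local.name, HDFS_TEST_PATH])
    read_back = wrote and run(["hdfs", "dfs", "-cat", HDFS_TEST_PATH])
    run(["hdfs", "dfs", "-rm", "-f", HDFS_TEST_PATH])  # best-effort cleanup

print("write ok:", wrote, "| read ok:", read_back)
```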
Possible Causes
- The ZooKeeper service is abnormal.
- All NameService services are abnormal.
- The number of service requests is so large that the HDFS health check fails to read and write files.
- The health check fails due to HDFS FullGC.
Procedure
Check the ZooKeeper service status.
1. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms and check whether ALM-13000 ZooKeeper Service Unavailable is reported.
2. Rectify the ZooKeeper fault by referring to ALM-13000 ZooKeeper Service Unavailable and check whether the Running Status of the ZooKeeper service restores to Normal. (A quick manual probe of the ZooKeeper quorum is sketched after this step group.)
3. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 4.
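If Manager is unreachable, you can also probe each ZooKeeper server directly with the four-letter command `ruok`. This is a sketch only: the host names and client port below are placeholders (FusionInsight clusters typically do not use the default 2181 port), and the four-letter commands must be whitelisted on the servers.

```python
#!/usr/bin/env python3
"""Probe ZooKeeper servers with the 'ruok' four-letter command (sketch only).

Host names and the client port are placeholders; adjust them to your cluster.
The four-letter commands must be enabled (4lw.commands.whitelist) on the servers.
"""
import socket

ZK_SERVERS = ["zk1.example.com", "zk2.example.com", "zk3.example.com"]  # placeholders
ZK_CLIENT_PORT = 2181  # assumption: replace with the cluster's actual client port

def four_letter(host, port, cmd=b"ruok", timeout=5):
    """Send a ZooKeeper four-letter command and return the raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(cmd)
        sock.shutdown(socket.SHUT_WR)
        return sock.recv(1024).decode(errors="replace")

for server in ZK_SERVERS:
    try:
        reply = four_letter(server, ZK_CLIENT_PORT)
        print(f"{server}: {reply!r}")   # a healthy server answers 'imok'
    except OSError as err:
        print(f"{server}: unreachable ({err})")
```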
Handle the NameService service exception alarm.
4. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms and check whether ALM-14010 NameService Service Unavailable is reported.
5. Handle the abnormal NameService services by referring to ALM-14010 NameService Service Unavailable and check whether each NameService service exception alarm is cleared. (A command-line check of the NameNode HA states is sketched after this step group.)
6. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 7.
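To see the state of each NameService directly from an HDFS client, you can list the configured NameServices and query the HA state of their NameNodes. The sketch below assumes an HA deployment whose NameNode IDs can be read from `dfs.ha.namenodes.<nameservice>` and a client version whose `hdfs haadmin` supports the `-ns` option; if that option is unavailable, run `haadmin` against each NameService separately.

```python
#!/usr/bin/env python3
"""List NameServices and query NameNode HA states (sketch, assumptions noted).

Assumes an HA setup, a configured 'hdfs' client, and NameNode IDs readable from
dfs.ha.namenodes.<nameservice>. When a NameService is healthy, at least one of
its NameNodes should report 'active'.
"""
import subprocess

def hdfs_conf(key):
    """Read a configuration value via 'hdfs getconf -confKey'."""
    out = subprocess.run(["hdfs", "getconf", "-confKey", key],
                         capture_output=True, text=True)
    return out.stdout.strip() if out.returncode == 0 else ""

nameservices = [ns for ns in hdfs_conf("dfs.nameservices").split(",") if ns]
for ns in nameservices:
    nn_ids = [n for n in hdfs_conf(f"dfs.ha.namenodes.{ns}").split(",") if n]
    for nn in nn_ids:
        state = subprocess.run(
            ["hdfs", "haadmin", "-ns", ns, "-getServiceState", nn],
            capture_output=True, text=True,
        )
        print(f"{ns}/{nn}: {state.stdout.strip() or state.stderr.strip()}")
```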
Check whether the HDFS health check fails to read or write files due to a large number of service requests.
7. On FusionInsight Manager, choose O&M > Alarm > Alarms and check whether ALM-14021 NameNode Average RPC Processing Time Exceeds the Threshold or ALM-14022 NameNode Average RPC Queuing Time Exceeds the Threshold is reported.
8. Rectify the abnormal NameServices by following the handling methods of ALM-14021 NameNode Average RPC Processing Time Exceeds the Threshold and ALM-14022 NameNode Average RPC Queuing Time Exceeds the Threshold. Then check whether the alarms are cleared. (A sketch for reading the NameNode RPC metrics directly is given after this step group.)
9. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 10.
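To gauge whether the NameNode is overloaded, you can read the averaged RPC metrics exposed on its JMX endpoint. The host and HTTP port below are assumptions (the NameNode web port differs between Hadoop versions and FusionInsight deployments), and on a secured cluster the endpoint may require HTTPS and authentication; the attribute names come from the standard Hadoop RpcActivityForPort MBean.

```python
#!/usr/bin/env python3
"""Read NameNode RPC processing/queue time averages from JMX (sketch only).

The NameNode host and web port are placeholders; secured clusters may need
HTTPS and authentication. Attributes come from the RpcActivityForPort* MBeans.
"""
import json
import urllib.request

NAMENODE_HTTP = "http://namenode.example.com:9870"  # placeholder host/port

url = f"{NAMENODE_HTTP}/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort*"
with urllib.request.urlopen(url, timeout=10) as resp:
    beans = json.load(resp).get("beans", [])

for bean in beans:
    print(bean.get("name"))
    print("  RpcQueueTimeAvgTime     :", bean.get("RpcQueueTimeAvgTime"))
    print("  RpcProcessingTimeAvgTime:", bean.get("RpcProcessingTimeAvgTime"))
```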
Check whether the health check fails due to HDFS FullGC.
10. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms and check whether ALM-14014 NameNode GC Time Exceeds the Threshold is reported.
11. Handle the alarm by referring to ALM-14014 NameNode GC Time Exceeds the Threshold and check whether it is cleared. (A quick on-host GC check is sketched after this step group.)
12. On the O&M > Alarm > Alarms page, check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 13.
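To check for long or frequent full GC on the NameNode from the node itself, `jstat` against the NameNode process is a quick option. The sketch below assumes it runs on the NameNode host, that a JDK provides `jps` and `jstat` on the PATH, and that the executing user may attach to the NameNode JVM.

```python
#!/usr/bin/env python3
"""Inspect NameNode GC behaviour with jps/jstat (sketch, run on the NameNode host).

Assumes a JDK providing 'jps' and 'jstat' on the PATH and permission to query
the NameNode JVM. Watch the FGC/FGCT columns for frequent or long full GCs.
"""
import subprocess

def namenode_pid():
    """Return the PID of the NameNode JVM found by jps, or None."""
    out = subprocess.run(["jps"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] == "NameNode":
            return parts[0]
    return None

pid = namenode_pid()
if pid is None:
    print("No NameNode JVM found on this host.")
else:
    # Print GC utilisation 5 times at 10-second intervals.
    subprocess.run(["jstat", "-gcutil", pid, "10000", "5"])
```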
Collect fault information.
13. On the FusionInsight Manager portal, choose O&M > Log > Download.
14. Select the following services in the required cluster from Service:
    - ZooKeeper
    - HDFS
15. Click the icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then click Download.
16. Contact O&M personnel and provide the collected logs.
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None