
ALM-14010 NameService Is Abnormal (For MRS 2.x or Earlier)

Description

The system checks the NameService service status every 180 seconds. This alarm is generated when the NameService service is unavailable.

This alarm is cleared when the NameService service recovers.
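
If you want to verify the NameService state manually, you can query it from the HDFS client on a cluster node. The following is a minimal sketch; the NameService ID hacluster and the NameNode IDs nn1 and nn2 are common defaults, not values taken from this alarm, so substitute the NSName value reported in the alarm and the IDs returned by the first command.

  # List the NameNode IDs configured for the NameService (hacluster is an assumed ID)
  hdfs getconf -confKey dfs.ha.namenodes.hacluster
  # Query the HA state (active or standby) of each NameNode
  hdfs haadmin -getServiceState nn1
  hdfs haadmin -getServiceState nn2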

Attribute

Alarm ID       Alarm Severity    Auto Clear
14010          Major             Yes

Parameters

Parameter      Description
ServiceName    Specifies the service for which the alarm is generated.
RoleName       Specifies the role for which the alarm is generated.
HostName       Specifies the host for which the alarm is generated.
NSName         Specifies the NameService service for which the alarm is generated.

Impact on the System

HDFS cannot provide services for upper-layer components that depend on the NameService, such as HBase and MapReduce. As a result, users cannot read or write files.

Possible Causes

  • The JournalNode is faulty.
  • The DataNode is faulty.
  • The disk capacity is insufficient.
  • The NameNode enters safe mode.

Procedure

  1. Check the status of the JournalNode instance.

    1. On the MRS Manager home page, click Components.
    2. Click HDFS.
    3. Click Instance.
    4. Check whether the Health Status of the JournalNode is Good.
      • If yes, go to 2.a.
      • If no, go to 1.e.
    5. Select the faulty JournalNode, and choose More > Restart Instance. Check whether the JournalNode successfully restarts.
      • If yes, go to 1.f.
      • If no, go to 5.
    6. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 2.a.
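
    In addition to the Health Status shown on MRS Manager, you can probe each JournalNode directly. The sketch below assumes the default JournalNode HTTP port 8480 (configuration item dfs.journalnode.http-address); replace <jn-host> with the host name of each JournalNode instance.

      # A responsive /jmx endpoint indicates the JournalNode process is up and serving requests
      curl http://<jn-host>:8480/jmx | head -n 20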

  2. Check the status of the DataNode instance.

    1. On the MRS cluster details page, click Components.
    2. Click HDFS.
    3. In Operation and Health Summary, check whether the Health Status of all DataNodes is Good.
      • If yes, go to 3.a.
      • If no, go to 2.d.
    4. Click Instances. On the DataNode management page, select the faulty DataNode, and choose More > Restart Instance. Check whether the DataNode successfully restarts.
      • If yes, go to 2.e.
      • If no, go to 3.a.
    5. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 4.a.
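
    As a command-line cross-check of the DataNode status shown on the management page, the HDFS report lists live and dead DataNodes. Run the following from the HDFS client:

      # Count the live and dead DataNodes as seen by the active NameNode
      hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes"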

  3. Check the disk status.

    1. On the MRS cluster details page, click the Nodes tab and expand a node group.
    2. In the Disk Usage column, check whether disk space is insufficient.
      • If yes, go to 3.c.
      • If no, go to 4.a.
    3. Expand the disk capacity.
    4. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 4.a.
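
    To check disk space from the command line instead of the Nodes tab, inspect both the local file systems on each node and the overall HDFS capacity. A minimal sketch:

      # On each node: local disk usage (look for data disks close to 100%)
      df -h
      # From the HDFS client: cluster-wide HDFS capacity summary
      hdfs dfsadmin -report | grep -E "DFS Used%|DFS Remaining"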

  4. Check whether the NameNode is in safe mode.

    1. Use the client on a cluster node and run the hdfs dfsadmin -safemode get command to check whether Safe mode is ON is displayed.

      The information following Safe mode is ON is alarm information and is displayed based on actual conditions.

      • If yes, go to 4.b.
      • If no, go to 5.
    2. Use the client on a cluster node and run the hdfs dfsadmin -safemode leave command to exit safe mode.
    3. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 5.
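
    For reference, a typical session for this step looks like the following. As noted above, the text following Safe mode is ON varies with the actual fault.

      $ hdfs dfsadmin -safemode get
      Safe mode is ON
      $ hdfs dfsadmin -safemode leave
      Safe mode is OFF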

  5. Collect fault information.

    1. On MRS Manager, choose System > Export Log.
    2. Contact O&M engineers and send them the collected logs.

Reference

None