
ALM-14010 NameService Service Is Abnormal

Updated at: Mar 31, 2020 GMT+08:00

Description

The system checks the NameService service status every 180 seconds. This alarm is generated when the NameService service is unavailable and is cleared when the NameService service recovers.

Attribute

Alarm ID: 14010

Alarm Severity: Major

Automatically Cleared: Yes

Parameters

ServiceName: Specifies the service for which the alarm is generated.

RoleName: Specifies the role for which the alarm is generated.

HostName: Specifies the host for which the alarm is generated.

NSName: Specifies the name of the NameService for which the alarm is generated.

Impact on the System

HDFS cannot provide services based on the faulty NameService to upper-layer components such as HBase and MapReduce. As a result, users cannot read or write files.

Possible Causes

  • The JournalNode is faulty.
  • The DataNode is faulty.
  • The disk capacity is insufficient.
  • The NameNode enters safe mode.

Procedure

  1. Check the status of the JournalNode instance.

    1. On the MRS cluster details page, click Components.

      For MRS 1.8.10 or earlier, log in to MRS Manager and click Services.

    2. Click HDFS.
    3. Click Instance.
    4. Check whether the Health Status of the JournalNode is Good.
      • If yes, go to 2.a.
      • If no, go to 1.e.
    5. Select the faulty JournalNode and choose More > Restart Instance. Check whether the JournalNode successfully restarts.
      • If yes, go to 1.f.
      • If no, go to 5.
    6. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 2.a.

  2. Check the status of the DataNode instance.

    1. On the MRS cluster details page, click Components.

      For MRS 1.8.10 or earlier, log in to MRS Manager and click Services.

    2. Click HDFS.
    3. In Operation and Health Summary, check whether the Health Status of all DataNodes is Good. (A command-line check is also sketched after this procedure.)
      • If yes, go to 3.a.
      • If no, go to 2.d.
    4. Click Instance. On the DataNode management page, select the faulty DataNode, and choose More > Restart Instance. Check whether the DataNode successfully restarts.
      • If yes, go to 2.e.
      • If no, go to 3.a.
    5. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 4.a.

  3. Check disk status.

    1. On the MRS cluster details page, click Nodes.

      For MRS 1.8.10 or earlier, log in to MRS Manager and click Hosts.

    2. In the Disk Usage column, check whether disk space is insufficient. (See the disk usage commands sketched after this procedure.)
      • If yes, go to 3.c.
      • If no, go to 4.a.
    3. Expand the disk capacity.
    4. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 4.a.

  4. Check whether NameNode is in safe mode.

    1. Use the client on a cluster node and run the hdfs dfsadmin -safemode get command to check whether Safe mode is ON is displayed. (The safe mode commands are also sketched after this procedure.)

      Any information displayed after Safe mode is ON is supplementary alarm information and varies with the actual conditions.

      • If yes, go to 4.b.
      • If no, go to 5.
    2. Use the client on the cluster node and run the hdfs dfsadmin -safemode leave command.
    3. Wait 5 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 5.

  5. Collect fault information.

    1. On MRS Manager, choose System > Export Log.
    2. Contact the O&M personnel and send the collected log information.
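
The following command sketches supplement the GUI checks in this procedure. They assume that an HDFS client is installed on the cluster node you log in to and that, if Kerberos authentication is enabled for the cluster, you have authenticated first (for example, with kinit). Output described in the comments is illustrative only.

DataNode status (supplements step 2): hdfs dfsadmin -report summarizes DataNode status as seen by the NameNode.

  # List DataNode status recorded by the NameNode.
  hdfs dfsadmin -report
  # Check the "Live datanodes" and "Dead datanodes" sections; a DataNode listed
  # as dead corresponds to a non-Good health status in the GUI.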
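
Disk usage (supplements step 3): the local df command shows partition usage on each node, and hdfs dfsadmin -report also prints HDFS-level capacity figures.

  # Local partition usage on a node; partitions close to 100% need cleanup or expansion.
  df -h

  # HDFS-level capacity view (illustrative filter; exact field names depend on the Hadoop version):
  hdfs dfsadmin -report | grep -E "Configured Capacity|DFS Used%|DFS Remaining"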
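
Safe mode (supplements step 4): the two commands below are the ones named in step 4. The -fs option shown for targeting a specific NameService is a generic Hadoop client option, and hdfs://<NSName> is a placeholder for the NameService reported by the alarm.

  # Query the current safe mode state.
  hdfs dfsadmin -safemode get
  # Expected output when this cause applies: "Safe mode is ON", possibly followed
  # by supplementary alarm information.

  # Leave safe mode manually if the NameNode does not leave it on its own.
  hdfs dfsadmin -safemode leave

  # In a cluster with multiple NameServices, target a specific one (placeholder URI):
  hdfs dfsadmin -fs hdfs://<NSName> -safemode get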

Related Information

N/A
