
ALM-18002 NodeManager Heartbeat Lost

Alarm Description

The system checks the number of lost NodeManager nodes every 30 seconds and compares it with the threshold. The Number of Lost Nodes indicator has a default threshold. This alarm is generated when the number of lost nodes exceeds the threshold.

To change the threshold, on FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Yarn. On the displayed page, choose Configurations > All Configurations, and change the value of yarn.nodemanager.lost.alarm.threshold. You do not need to restart Yarn to make the change take effect.

The default threshold is 0. The alarm is generated when the number of lost nodes exceeds the threshold and is cleared when the number of lost nodes is less than or equal to the threshold.
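
For reference, the lost-node count that this alarm evaluates is also exposed by the standard Apache Hadoop ResourceManager REST API, so it can be read directly from the command line. The following Python sketch is a minimal example only; the ResourceManager address and port are placeholders, and any Kerberos/SSL authentication used in your environment is not shown.

# Minimal sketch: read the cluster-wide lost-node count from the
# ResourceManager REST API (/ws/v1/cluster/metrics). The address below is a
# placeholder; 8088 is only the Apache Hadoop default web port, and any
# authentication required by the cluster is not handled here.
import json
import urllib.request

RM_URL = "http://rm-host.example.com:8088"  # hypothetical ResourceManager address

with urllib.request.urlopen(f"{RM_URL}/ws/v1/cluster/metrics") as resp:
    metrics = json.load(resp)["clusterMetrics"]

print("Lost nodes:", metrics["lostNodes"])
print("Active nodes:", metrics["activeNodes"])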

Alarm Attributes

Alarm ID: 18002
Alarm Severity: Major
Alarm Type: Error handling
Service Type: Yarn
Auto Cleared: Yes

Alarm Parameters

Location Information
  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Additional Information
  • Lost Host: Specifies the list of hosts with lost nodes.

Impact on the System

  • The lost NodeManager node cannot provide the Yarn service.
  • The number of containers decreases, so the cluster performance deteriorates.

Possible Causes

  • NodeManager is forcibly deleted without decommission.
  • All the NodeManager instances are stopped or the NodeManager process is faulty.
  • The host where the NodeManager node resides is faulty.
  • The network between the NodeManager and ResourceManager is disconnected or busy.

Handling Procedure

Check the NodeManager status.

  1. On FusionInsight Manager, choose O&M > Alarm > Alarms. Expand the alarm and obtain the lost nodes from Additional Information (a scripted cross-check of the lost hosts is sketched after 6).
  2. Check whether the lost nodes are hosts that have been manually deleted without decommission.

    • If yes, go to 3.
    • If no, go to 5.

  3. Choose Cluster > Name of the desired cluster > Services > Yarn. On the displayed page, choose Configurations > All Configurations. Search for yarn.nodemanager.lost.alarm.threshold and change its value to the number of hosts that were proactively deleted without being decommissioned. After the setting, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 4.

  4. Manually clear the alarm. Note that decommission must be performed before deleting hosts.
  5. On FusionInsight Manager, choose Hosts and check whether the nodes obtained in 1 are healthy.

    • If yes, go to 7.
    • If no, go to 6.

  6. Rectify the node fault based on ALM-12006 NodeAgent Process Is Abnormal and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 7.
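
In addition to the alarm's Additional Information field, the lost hosts can be cross-checked against what the ResourceManager itself reports. The sketch below (referenced in 1) lists nodes in the LOST state through the standard /ws/v1/cluster/nodes REST endpoint; the ResourceManager address is a placeholder and authentication is environment-specific.

# Minimal sketch: list NodeManager nodes that the ResourceManager reports as
# LOST, to cross-check the hosts named in the alarm's Additional Information.
# RM_URL is a placeholder; authentication is not handled.
import json
import urllib.request

RM_URL = "http://rm-host.example.com:8088"  # hypothetical ResourceManager address

with urllib.request.urlopen(f"{RM_URL}/ws/v1/cluster/nodes?states=LOST") as resp:
    body = json.load(resp)

# "nodes" can be null when no node matches the requested state.
for node in (body.get("nodes") or {}).get("node") or []:
    print(node["nodeHostName"], node["state"], node["lastHealthUpdate"])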

Check the process status.

  7. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Yarn > Instance and check whether any NodeManager instance is in a state other than Good (an OS-level process check is sketched after 9).

    • If yes, go to 10.
    • If no, go to 8.

  8. Check whether the NodeManager instance has been deleted.

    • If yes, go to 9.
    • If no, go to 11.

  9. Restart the active and standby ResourceManager instances and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 13.
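
Where a NodeManager instance shows an abnormal state, it can also help to confirm on the lost host itself whether the NodeManager JVM is still running. The sketch below (referenced in 7) is one possible check, assuming the JDK jps tool is available on the host's PATH; it is not a substitute for the instance status shown on FusionInsight Manager.

# Minimal sketch: run on a lost host to check whether a NodeManager JVM is
# present, which helps distinguish a stopped or faulty process from a pure
# network problem. Assumes the JDK "jps" tool is on the PATH
# (environment-specific); it may need to be run as the cluster user.
import subprocess

out = subprocess.run(["jps"], capture_output=True, text=True, check=True).stdout
lines = [line.split() for line in out.splitlines() if line.strip()]
running = any(len(parts) > 1 and parts[1] == "NodeManager" for parts in lines)
print("NodeManager JVM found" if running else "NodeManager JVM not found")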

Check the instance status.

  10. Select the NodeManager instances whose running state is not Normal, restart them, and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 11.

Check the network status.

  11. Log in to the management node and ping the IP address of the lost NodeManager node to check whether the network is disconnected or busy (a ping sketch covering all lost hosts follows 12).

    • If yes, go to 12.
    • If no, go to 13.

  12. Rectify the network and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 13.
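
A quick way to test connectivity to all lost hosts at once from the management node is sketched below (referenced in 11). The host names are placeholders for the entries found in the alarm's Additional Information, and the ping options assume a Linux management node.

# Minimal sketch: from the management node, ping each lost NodeManager host a
# few times and report whether it responds. Host names are placeholders taken
# from the alarm's Additional Information.
import subprocess

LOST_HOSTS = ["nodemanager-host-1", "nodemanager-host-2"]  # hypothetical hosts

for host in LOST_HOSTS:
    # "-c 4": send four ICMP echo requests (Linux ping syntax)
    result = subprocess.run(["ping", "-c", "4", host], capture_output=True, text=True)
    status = "reachable" if result.returncode == 0 else "unreachable or lossy"
    print(f"{host}: {status}")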

Collect fault information.

  13. On FusionInsight Manager of the active cluster, choose O&M > Log > Download.
  14. Select Yarn in the required cluster from Service.
  15. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively, and click Download.
  16. Contact O&M engineers and send the collected logs.

Alarm Clearance

After the fault is rectified, the system automatically clears this alarm.

Related Information

None.