
ALM-18011 NodeManager GC Time Exceeds the Threshold

Alarm Description

The system checks the garbage collection (GC) duration of the NodeManager process every 60 seconds. This alarm is generated when the GC duration exceeds the threshold.

This alarm is cleared when the GC duration is less than the threshold.
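
To make the metric behind this alarm concrete, the following minimal sketch reads the accumulated GC time of a JVM through the standard java.lang.management API and compares the increase per 60-second interval against the default Major and Critical thresholds listed below. It only illustrates the concept; FusionInsight Manager collects the NodeManager GC metric itself, and the class name and constants here are illustrative.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Minimal sketch: derive a per-minute GC time figure for the current JVM
    // and compare it against the alarm's default thresholds. Illustrative only.
    public class NodeManagerGcTimeSketch {

        private static final long MAJOR_THRESHOLD_MS = 12000;     // default Major threshold
        private static final long CRITICAL_THRESHOLD_MS = 20000;  // default Critical threshold
        private static final long CHECK_INTERVAL_MS = 60_000;     // the system checks every 60 seconds

        public static void main(String[] args) throws InterruptedException {
            long previousTotal = totalGcTimeMs();
            while (true) {
                Thread.sleep(CHECK_INTERVAL_MS);
                long currentTotal = totalGcTimeMs();
                long gcTimeInInterval = currentTotal - previousTotal;
                previousTotal = currentTotal;

                if (gcTimeInInterval > CRITICAL_THRESHOLD_MS) {
                    System.out.println("Critical: GC time in last interval = " + gcTimeInInterval + " ms");
                } else if (gcTimeInInterval > MAJOR_THRESHOLD_MS) {
                    System.out.println("Major: GC time in last interval = " + gcTimeInInterval + " ms");
                } else {
                    System.out.println("OK: GC time in last interval = " + gcTimeInInterval + " ms");
                }
            }
        }

        // Sum of reported collection time (in milliseconds) across all collectors in this JVM.
        private static long totalGcTimeMs() {
            long total = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                long time = gc.getCollectionTime();  // -1 if the collector does not report it
                if (time > 0) {
                    total += time;
                }
            }
            return total;
        }
    }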

Alarm Attributes

Alarm ID: 18011
Alarm Severity: Critical (default threshold: 20000 ms); Major (default threshold: 12000 ms)
Alarm Type: Quality of service
Service Type: Yarn
Auto Cleared: Yes

Alarm Parameters

Location Information:

  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Additional Information:

  • Trigger Condition: Specifies the threshold triggering the alarm. If the current indicator value exceeds this threshold, the alarm is generated.

Impact on the System

A long GC duration of the NodeManager process may interrupt services.

Possible Causes

The heap memory of the NodeManager instance is overused or the heap memory is inappropriately allocated. As a result, GCs occur frequently.

Handling Procedure

Check the GC duration.

  1. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms > ALM-18011 NodeManager GC Time Exceeds the Threshold > Location to check the IP address of the instance for which the alarm is generated.
  2. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > Yarn > Instance > NodeManager (IP address for which the alarm is generated). Click the drop-down list in the upper right corner of Chart and choose Customize > Garbage Collection (GC) Time of NodeManager to check the GC duration statistics of the NodeManager process collected every minute.
  3. Check whether the GC duration of the NodeManager process collected every minute exceeds the threshold.

    • If yes, go to 4.
    • If no, go to 7.

  4. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > Yarn > Configurations > All Configurations > NodeManager > System, and increase the value of the GC_OPTS parameter as required (an example GC_OPTS value is shown after 6).

    The mapping between the number of NodeManager instances in a cluster and the memory size of NodeManager is as follows:

    • If the number of NodeManager instances in the cluster reaches 100, the recommended JVM parameters for NodeManager instances are as follows: -Xms2G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
    • If the number of NodeManager instances in the cluster reaches 200, the recommended JVM parameters for NodeManager instances are as follows: -Xms4G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
    • If the number of NodeManager instances in the cluster reaches 500, the recommended JVM parameters for NodeManager instances are as follows: -Xms8G -Xmx8G -XX:NewSize=1G -XX:MaxNewSize=2G

  5. Save the configuration and restart the NodeManager instance.
  6. Check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 7.
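
For reference, the heap-related portion of GC_OPTS for a cluster with about 200 NodeManager instances would look like the value below. This is only an illustrative sketch based on the recommendations in 4; keep any other flags already present in your cluster's GC_OPTS unchanged.

    -Xms4G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G

Here, -Xms and -Xmx set the initial and maximum heap size, and -XX:NewSize and -XX:MaxNewSize bound the young generation size.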

Collect fault information.

  7. On the FusionInsight Manager portal, choose O&M > Log > Download.
  8. Select NodeManager in the required cluster from Service.
  9. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
  10. Contact O&M engineers and provide the collected logs.

Alarm Clearance

After the fault is rectified, the system automatically clears this alarm.

Related Information

None.