ALM-18011 NodeManager GC Time Exceeds the Threshold
Description
The system checks the garbage collection (GC) duration of the NodeManager process every 60 seconds. This alarm is generated when the GC duration exceeds the threshold (12 seconds by default).
This alarm is cleared when the GC duration is less than the threshold.
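The following is a minimal sketch of the trigger and clear logic described above. It is illustrative only and not the FusionInsight Manager implementation; the function and variable names (for example, get_gc_duration_seconds) are assumptions, while the 60-second check interval and the 12-second default threshold come from this section.

```python
import time

GC_TIME_THRESHOLD_SECONDS = 12  # default threshold; configurable on Manager
CHECK_INTERVAL_SECONDS = 60     # the system checks the GC duration every 60 seconds

def monitor_nodemanager_gc(get_gc_duration_seconds):
    """Poll the NodeManager GC duration and report the ALM-18011 alarm state."""
    alarm_raised = False
    while True:
        gc_duration = get_gc_duration_seconds()
        if gc_duration > GC_TIME_THRESHOLD_SECONDS and not alarm_raised:
            # GC duration exceeds the threshold: the alarm is generated.
            alarm_raised = True
            print(f"ALM-18011 raised: GC time {gc_duration}s exceeds threshold")
        elif gc_duration < GC_TIME_THRESHOLD_SECONDS and alarm_raised:
            # GC duration is back under the threshold: the alarm is cleared.
            alarm_raised = False
            print("ALM-18011 cleared")
        time.sleep(CHECK_INTERVAL_SECONDS)
```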
Attribute
Alarm ID | Alarm Severity | Automatically Cleared
---|---|---
18011 | Major | Yes
Parameters
Name | Meaning
---|---
Source | Specifies the cluster for which the alarm is generated.
ServiceName | Specifies the service for which the alarm is generated.
RoleName | Specifies the role for which the alarm is generated.
HostName | Specifies the host for which the alarm is generated.
Trigger Condition | Specifies the threshold for triggering the alarm. If the current indicator value exceeds this threshold, the alarm is generated.
Impact on the System
A long GC duration of the NodeManager process may interrupt services.
Possible Causes
The heap memory of the NodeManager instance is overused or the heap memory is inappropriately allocated. As a result, GCs occur frequently.
Procedure
Check the GC duration.
- On FusionInsight Manager, choose O&M > Alarm > Alarms > ALM-18011 NodeManager GC Time Exceeds the Threshold > Location. View the IP address of the alarmed instance.
- On the Home page of FusionInsight Manager, choose Cluster > Name of the target cluster > Services > Yarn. On the page that is displayed, click the Instance tab. In the instance list, select NodeManager (IP address of the instance for which this alarm is generated). Click the drop-down list in the upper right corner of the chart, choose Customize > Garbage Collection, and select Garbage Collection (GC) Time of NodeManager. Check the GC duration of the NodeManager process collected every minute.
Figure 1 Garbage Collection (GC) Time of NodeManager
- Check whether the GC duration of the NodeManager process collected every minute exceeds the threshold (12 seconds by default).
- On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > Yarn > Configurations > All Configurations > NodeManager > System. Increase the value of the GC_OPTS parameter as required.
The mapping between the number of NodeManager instances in a cluster and the memory size of NodeManager is as follows (an illustrative summary of this mapping is provided after this procedure):
- If the number of NodeManager instances in the cluster reaches 100, the recommended JVM parameters for NodeManager instances are as follows: -Xms2G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
- If the number of NodeManager instances in the cluster reaches 200, the recommended JVM parameters for NodeManager instances are as follows: -Xms4G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
- If the number of NodeManager instances in the cluster reaches 500, the recommended JVM parameters for NodeManager instances are as follows: -Xms8G -Xmx8G -XX:NewSize=1G -XX:MaxNewSize=2G
- Save the configuration and restart the NodeManager instance.
During the NodeManager restart, containers submitted to this node may be retried on other nodes.
- Check whether the alarm is cleared.
- If yes, no further action is required.
- If no, collect fault information as described in the next step.
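If it helps to see the GC_OPTS recommendations above in one place, the following is a minimal Python sketch of that mapping. The helper name recommended_nodemanager_gc_opts is hypothetical and not part of FusionInsight; the flag values come directly from this section, and the chosen value must still be set manually in the GC_OPTS parameter on Manager.

```python
def recommended_nodemanager_gc_opts(nodemanager_count: int) -> str:
    """Return the JVM parameters recommended above for a given number of
    NodeManager instances in the cluster (illustrative helper only)."""
    if nodemanager_count >= 500:
        return "-Xms8G -Xmx8G -XX:NewSize=1G -XX:MaxNewSize=2G"
    if nodemanager_count >= 200:
        return "-Xms4G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G"
    # Clusters with around 100 NodeManager instances or fewer.
    return "-Xms2G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G"

print(recommended_nodemanager_gc_opts(300))
# -Xms4G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
```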
Collect fault information.
- On the FusionInsight Manager portal, choose O&M > Log > Download.
- In the Service area, select NodeManager in the required cluster.
- Click the icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
- Contact the O&M personnel and send the collected logs.
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None