ALM-18018 NodeManager Heap Memory Usage Exceeds the Threshold
Description
The system checks the heap memory usage of Yarn NodeManager every 30 seconds and compares the actual usage with the threshold. This alarm is generated when the heap memory usage of Yarn NodeManager exceeds the threshold (95% of the maximum heap memory by default).
The alarm is cleared when the heap memory usage is less than or equal to the threshold.
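To illustrate the check described above, the following minimal Java sketch polls a JVM's heap usage every 30 seconds and compares it against a 95% threshold. It is not FusionInsight code: the class name and output format are ours, and a real monitor would observe the NodeManager's JVM rather than its own.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Illustrative only: polls heap usage of the current JVM using the
// same 30-second interval / 95% threshold described above.
public class HeapThresholdCheck {
    private static final double THRESHOLD_PERCENT = 95.0; // default alarm threshold
    private static final long POLL_INTERVAL_MS = 30_000;  // checked every 30 seconds

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            double pct = 100.0 * heap.getUsed() / heap.getMax();
            // The alarm is generated above the threshold and cleared once
            // usage drops back to or below it.
            String state = pct > THRESHOLD_PERCENT ? "ALARM" : "OK";
            System.out.printf("heap usage: %.1f%% -> %s%n", pct, state);
            Thread.sleep(POLL_INTERVAL_MS);
        }
    }
}
```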
Attribute

| Alarm ID | Alarm Severity | Automatically Cleared |
| --- | --- | --- |
| 18018 | Major | Yes |
Parameters

| Name | Meaning |
| --- | --- |
| Source | Specifies the cluster for which the alarm is generated. |
| ServiceName | Specifies the service for which the alarm is generated. |
| RoleName | Specifies the role for which the alarm is generated. |
| HostName | Specifies the object (host ID) for which the alarm is generated. |
| Trigger Condition | Specifies the threshold that triggers the alarm. If the current indicator value exceeds this threshold, the alarm is generated. |
Impact on the System
When the heap memory usage of Yarn NodeManager is too high, the performance of Yarn task submission and execution degrades. In addition, the NodeManager may run out of heap memory, making the Yarn service unavailable.
Possible Causes
The heap memory of the Yarn NodeManager instance on the node is overused, or the heap memory is inappropriately allocated. As a result, the usage exceeds the threshold.
Procedure
Check the heap memory usage.

1. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms > ALM-18018 NodeManager Heap Memory Usage Exceeds the Threshold > Location. Check the HostName of the instance for which the alarm is generated.
2. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > Yarn > Instance > NodeManager. Click the drop-down list in the upper right corner of Chart and choose Customize > Resource > Percentage of Used Memory of the NodeManager to check the heap memory usage.
3. Check whether the used heap memory of NodeManager reaches 95% of the maximum heap memory specified for NodeManager.
   - If yes, go to 4.
   - If no, go to 6.
4. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > Yarn > Configurations > All Configurations > NodeManager > System. Increase the value of the GC_OPTS parameter as required, click Save, click OK, and restart the role instance. After the restart, the effective heap settings can be checked remotely, as shown in the sketch after this list.
   The mapping between the number of NodeManager instances in a cluster and the recommended memory size of NodeManager is as follows:
   - If the number of NodeManager instances in the cluster reaches 100, the recommended JVM parameters for NodeManager instances are: -Xms2G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
   - If the number of NodeManager instances in the cluster reaches 200, the recommended JVM parameters for NodeManager instances are: -Xms4G -Xmx4G -XX:NewSize=512M -XX:MaxNewSize=1G
   - If the number of NodeManager instances in the cluster reaches 500, the recommended JVM parameters for NodeManager instances are: -Xms8G -Xmx8G -XX:NewSize=1G -XX:MaxNewSize=2G
5. Check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 6.
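One way to confirm the effective heap settings and current usage of a restarted NodeManager is over JMX. The following is a minimal sketch, not part of the product: it assumes remote JMX is enabled on the NodeManager JVM, and the host name and port in the service URL are placeholders to replace with your own values.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Reads NodeManager heap usage over JMX. Host and port are placeholders;
// remote JMX must be enabled on the NodeManager JVM for this to connect.
public class NodeManagerHeapCheck {
    public static void main(String[] args) throws Exception {
        String url = "service:jmx:rmi:///jndi/rmi://nm-host.example.com:8004/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            MemoryUsage heap = memory.getHeapMemoryUsage();
            double pct = 100.0 * heap.getUsed() / heap.getMax();
            // getMax() reflects the effective -Xmx after the restart.
            System.out.printf("Heap: %d MB used / %d MB max (%.1f%%)%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20, pct);
            if (pct >= 95.0) {
                System.out.println("Usage is at or above the 95% alarm threshold.");
            }
        }
    }
}
```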
Collect fault information.

6. On the FusionInsight Manager portal, choose O&M > Log > Download.
7. Select the following services in the required cluster from Service:
   - NodeAgent
   - Yarn
8. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
9. Contact O&M personnel and provide the collected logs.
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None