
ALM-14018 NameNode Non-heap Memory Usage Exceeds the Threshold

Alarm Description

The system checks the non-heap memory usage of the HDFS NameNode every 30 seconds and compares it with the threshold. A default threshold is preset for the non-heap memory usage of the HDFS NameNode. This alarm is generated when the non-heap memory usage exceeds the threshold.

Users can choose O&M > Alarm > Thresholds > Name of the desired cluster > HDFS to change the threshold.

This alarm is cleared when the non-heap memory usage of the HDFS NameNode is less than or equal to the threshold.

Alarm Attributes

Alarm ID: 14018

Alarm Severity: Critical (default threshold: 95%); Major (default threshold: 90%)

Alarm Type: Quality of service

Service Type: HDFS

Auto Cleared: Yes

Alarm Parameters

Location Information

  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Additional Information

  • Trigger Condition: Specifies the threshold for triggering the alarm. If the current indicator value exceeds this threshold, the alarm is generated.

Impact on the System

If the non-heap memory usage of the HDFS NameNode is too high, the data read/write performance of HDFS is affected.

Possible Causes

Non-heap memory of the HDFS NameNode is insufficient.

Handling Procedure

Delete unnecessary files.

  1. Log in to the HDFS client as user root. Run the cd command to go to the client installation directory, and then run the source bigdata_env command to configure environment variables.

    If the cluster adopts the security mode, perform security authentication.

    Run the kinit hdfs command and enter the password as prompted. Obtain the password from the administrator.

  2. Run the hdfs dfs -rm -r file or directory path command to delete unnecessary files (a combined command sketch follows this list).
  3. Check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 4.
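
The following is a minimal command sketch of the steps above; the client installation directory (/opt/client) and the deleted path are examples only and must be replaced with the actual values for your cluster.

    cd /opt/client                       # assumed client installation directory
    source bigdata_env                   # configure client environment variables
    kinit hdfs                           # security mode only; enter the password when prompted
    hdfs dfs -rm -r /tmp/unneeded_data   # assumed path of an unnecessary file or directory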

Check the NameNode JVM non-heap memory usage and configuration.

  4. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS. The HDFS status page is displayed.
  5. In the Basic Information area, click NameNode(Active). The HDFS WebUI is displayed.

    By default, the admin user does not have the permissions to manage other components. If the page cannot be opened or the displayed content is incomplete when you access the native UI of a component due to insufficient permissions, you can manually create a user with the permissions to manage that component.

  6. On the HDFS WebUI, click the Overview tab. In Summary, check the numbers of files, directories, and blocks in HDFS (a JMX query sketch follows this list).
  7. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS > Configurations > All Configurations. In Search, enter GC_OPTS to check the GC_OPTS non-heap memory parameter of HDFS->NameNode.
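
If the WebUI cannot be opened, the same information can be read from the NameNode JMX interface, as sketched below. The host name and port are placeholders (9870 is the default NameNode HTTP port in open-source Hadoop 3.x); the actual port and any authentication required in security mode depend on the cluster configuration.

    # Files and blocks (filesystem objects = FilesTotal + BlocksTotal)
    curl -s 'http://<namenode-host>:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'
    # Current heap and non-heap memory usage of the NameNode JVM
    curl -s 'http://<namenode-host>:9870/jmx?qry=java.lang:type=Memory'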

Adjust system configurations.

  8. Check whether the non-heap memory is properly configured based on the number of file objects in 6 and the non-heap parameters configured for NameNode in 7.

    • If yes, go to 9.
    • If no, go to 12.

    The recommended mapping between the number of HDFS file objects (filesystem objects = files + blocks) and the JVM parameters configured for NameNode is as follows:

    • If the number of file objects reaches 10,000,000, you are advised to set the JVM parameters as follows: -Xms6G -Xmx6G -XX:NewSize=512M -XX:MaxNewSize=512M
    • If the number of file objects reaches 20,000,000, you are advised to set the JVM parameters as follows: -Xms12G -Xmx12G -XX:NewSize=1G -XX:MaxNewSize=1G
    • If the number of file objects reaches 50,000,000, you are advised to set the JVM parameters as follows: -Xms32G -Xmx32G -XX:NewSize=3G -XX:MaxNewSize=3G
    • If the number of file objects reaches 100,000,000, you are advised to set the JVM parameters as follows: -Xms64G -Xmx64G -XX:NewSize=6G -XX:MaxNewSize=6G
    • If the number of file objects reaches 200,000,000, you are advised to set the JVM parameters as follows: -Xms96G -Xmx96G -XX:NewSize=9G -XX:MaxNewSize=9G
    • If the number of file objects reaches 300,000,000, you are advised to set the JVM parameters as follows: -Xms164G -Xmx164G -XX:NewSize=12G -XX:MaxNewSize=12G

  9. Modify the GC_OPTS parameter of the NameNode based on the mapping between the number of file objects and non-heap memory (see the example after this list).
  10. Save the configuration and click Dashboard > More > Restart Service.
  11. Check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 12.
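
For illustration only: if 6 showed about 20,000,000 file objects, the JVM sizing portion of GC_OPTS would be set as follows, in line with the mapping above; any other options already present in GC_OPTS in your cluster should be left unchanged.

    -Xms12G -Xmx12G -XX:NewSize=1G -XX:MaxNewSize=1G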

Collect fault information.

  12. On the FusionInsight Manager portal, choose O&M > Log > Download.
  13. In the Service area, select the following services for the required cluster:

    • ZooKeeper
    • HDFS

  14. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively, and then click Download.
  15. Contact O&M engineers and send the collected logs.

Alarm Clearance

After the fault is rectified, the system automatically clears this alarm.

Related Information

None.