Updated on 2022-08-12 GMT+08:00

ALM-14026 Blocks on DataNode Exceed the Threshold

Description

The system checks the number of blocks on each DataNode every 30 seconds. This alarm is generated when the number of blocks on the DataNode exceeds the threshold.

If the number of smoothing times is 1 and the number of blocks on the DataNode is less than or equal to the threshold, this alarm is cleared. If the number of smoothing times is greater than 1 and the number of blocks on the DataNode is less than or equal to 90% of the threshold, this alarm is cleared.
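
The generation and clearing rule can be summarized in a minimal Python sketch, assuming illustrative names (block_count, threshold, smoothing_times) rather than the product's internal identifiers:

  def alarm_cleared(block_count: int, threshold: int, smoothing_times: int) -> bool:
      # With a single smoothing check, clearing uses the raw threshold.
      if smoothing_times == 1:
          return block_count <= threshold
      # With more than one smoothing check, the block count must drop to 90%
      # of the threshold before the alarm is cleared, which avoids flapping
      # when the count hovers near the limit.
      return block_count <= 0.9 * threshold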

Attribute

Alarm ID: 14026
Alarm Severity: Minor
Automatically Cleared: Yes

Parameters

Source: Specifies the cluster for which the alarm is generated.
ServiceName: Specifies the service for which the alarm is generated.
RoleName: Specifies the role for which the alarm is generated.
HostName: Specifies the host for which the alarm is generated.
Trigger Condition: Specifies the threshold for triggering the alarm.

Impact on the System

If this alarm is reported, there are too many blocks on the DataNode. In this case, writing data to HDFS may fail due to insufficient disk space.

Possible Causes

  • The alarm threshold is improperly configured.
  • Data skew occurs among DataNodes.
  • The disk space configured for the HDFS cluster is insufficient.

Procedure

Modify the threshold.

  1. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS > Configurations > All Configurations. On the displayed page, find the GC_OPTS parameter under HDFS->DataNode.
  2. Set the threshold of the number of DataNode blocks by modifying the Xmx value in the GC_OPTS parameter. Xmx specifies the DataNode memory, and each GB of memory supports a maximum of 500,000 blocks (see the sizing sketch after this list). Set the memory as required, confirm that GC_PROFILE is set to custom, and save the configuration.
  3. Choose Cluster > Name of the desired cluster > Services > HDFS > Instance, select the DataNode instance whose Configuration Status is Expired, and choose More > Restart Instance to make the GC_OPTS configuration take effect.
  4. Five minutes later, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 5.
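
The memory rule in step 2 can be checked with a short Python sketch; the helper name and the rounding used here are assumptions made for illustration, not part of the product:

  import math

  BLOCKS_PER_GB = 500_000  # each GB of DataNode heap supports at most 500,000 blocks

  def required_xmx_gb(blocks_on_datanode: int) -> int:
      # Smallest whole-GB Xmx that keeps the block count below the threshold,
      # since the threshold is effectively Xmx (in GB) x 500,000.
      return max(1, math.ceil(blocks_on_datanode / BLOCKS_PER_GB))

  print(required_xmx_gb(2_600_000))  # -> 6, i.e. set -Xmx6G (and -Xms accordingly)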

Check whether associated alarms are reported.

  5. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms, and check whether ALM-14002 DataNode Disk Usage Exceeds the Threshold is reported.

    • If yes, go to 6.
    • If no, go to 8.

  6. Rectify the fault by referring to ALM-14002 DataNode Disk Usage Exceeds the Threshold. Then, check whether the alarm is cleared.

    • If yes, go to 7.
    • If no, go to 8.

  7. Five minutes later, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 8.

Expand the DataNode capacity.

  8. Expand the DataNode capacity.
  9. Five minutes later, check on the FusionInsight Manager portal whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 10.

Collect fault information.

  10. On the FusionInsight Manager portal, choose O&M > Log > Download.
  11. Select HDFS in the required cluster from the Service list.
  12. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 20 minutes before and after the alarm generation time, respectively. Then, click Download.
  13. Contact the O&M personnel and send the collected logs.

Alarm Clearing

After the fault is rectified, the system automatically clears this alarm.

Related Information

Configuration rules of the DataNode JVM parameter

Default value of the DataNode JVM parameter GC_OPTS:

-Xms2G -Xmx4G -XX:NewSize=128M -XX:MaxNewSize=256M -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=128M -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=65 -XX:+PrintGCDetails -Dsun.rmi.dgc.client.gcInterval=0x7FFFFFFFFFFFFFE -Dsun.rmi.dgc.server.gcInterval=0x7FFFFFFFFFFFFFE -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -Djdk.tls.ephemeralDHKeySize=2048

The average number of blocks stored in each DataNode instance in the cluster is: Number of HDFS blocks x 3 (the default number of replicas) ÷ Number of DataNodes. If the average number changes, you need to change -Xms2G -Xmx4G -XX:NewSize=128M -XX:MaxNewSize=256M in the default value. The following table lists the reference values.
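
As a worked example of this formula, with made-up cluster figures rather than values from this document:

  # Made-up cluster figures: 100,000,000 HDFS blocks, 3 replicas, 50 DataNode instances.
  total_hdfs_blocks = 100_000_000
  replicas = 3
  datanodes = 50

  average_blocks_per_instance = total_hdfs_blocks * replicas // datanodes
  print(average_blocks_per_instance)  # -> 6000000, i.e. about 6,000,000 blocks per instance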

Table 1 DataNode JVM configuration

  • Average of 2,000,000 blocks in a DataNode instance: -Xms6G -Xmx6G -XX:NewSize=512M -XX:MaxNewSize=512M
  • Average of 5,000,000 blocks in a DataNode instance: -Xms12G -Xmx12G -XX:NewSize=1G -XX:MaxNewSize=1G

Xmx specifies the memory, and each GB of memory supports a maximum of 500,000 DataNode blocks. Set the memory as required.
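
A rough sketch of how the Table 1 values and the 500,000-blocks-per-GB rule could be combined; the cut-off points and the helper name are assumptions made for illustration only, not product behavior:

  import math

  def reference_gc_opts(avg_blocks_per_instance: int) -> str:
      # Use the Table 1 reference values where they apply.
      if avg_blocks_per_instance <= 2_000_000:
          return "-Xms6G -Xmx6G -XX:NewSize=512M -XX:MaxNewSize=512M"
      if avg_blocks_per_instance <= 5_000_000:
          return "-Xms12G -Xmx12G -XX:NewSize=1G -XX:MaxNewSize=1G"
      # Beyond the table, fall back to the 500,000-blocks-per-GB rule for Xmx.
      gb = math.ceil(avg_blocks_per_instance / 500_000)
      return f"-Xms{gb}G -Xmx{gb}G"

  print(reference_gc_opts(6_000_000))  # -> -Xms12G -Xmx12G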