ALM-14013 Failed to Update the NameNode FsImage File
Description
HDFS metadata is stored in the FsImage file in the NameNode data directory, which is specified by the dfs.namenode.name.dir configuration item. The standby NameNode periodically combines the existing FsImage file with the Editlog files stored on the JournalNodes to generate a new FsImage file, and then pushes the new FsImage file to the data directory of the active NameNode. The combination period is specified by the dfs.namenode.checkpoint.period configuration item of HDFS; the default value is 3600 seconds, that is, one hour. If the FsImage file in the data directory of the active NameNode is not updated, the HDFS metadata combination function is abnormal and requires rectification.
On the active NameNode, the system checks the FsImage file information every five minutes. This alarm is generated when no new FsImage file has been generated within three combination periods.
This alarm is cleared when a new FsImage file is generated and pushed to the active NameNode, which indicates that the HDFS metadata combination function can be properly used.
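If an HDFS client is available, the current values of these two configuration items can also be queried with the hdfs getconf command. The following is only a sketch; it assumes the HDFS client environment has been loaded on the node where the commands are run.
hdfs getconf -confKey dfs.namenode.checkpoint.period
hdfs getconf -confKey dfs.namenode.name.dir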
Attribute
Alarm ID | Alarm Severity | Automatically Cleared
---|---|---
14013 | Major | Yes
Parameters
Name | Meaning
---|---
Source | Specifies the cluster for which the alarm is generated.
ServiceName | Specifies the service for which the alarm is generated.
RoleName | Specifies the role for which the alarm is generated.
HostName | Specifies the host for which the alarm is generated.
NameServiceName | Specifies the NameService for which the alarm is generated.
Impact on the System
If the FsImage file in the data directory of the active NameNode is not updated, the HDFS metadata combination function is abnormal and requires rectification. If it is not rectified, Editlog files keep accumulating after HDFS runs for a period of time. In this case, restarting HDFS is time-consuming because a large number of Editlog files need to be loaded. In addition, this alarm also indicates that the standby NameNode is abnormal and that the NameNode high availability (HA) mechanism becomes invalid. If the active NameNode then becomes faulty, the HDFS service becomes unavailable.
Possible Causes
- The standby NameNode is stopped.
- The standby NameNode instance is working incorrectly.
- The standby NameNode fails to generate a new FsImage file.
- Space of the data directory on the standby NameNode is insufficient.
- The standby NameNode fails to push the FsImage file to the active NameNode.
- Space of the data directory on the active NameNode is insufficient.
Procedure
Check whether the standby NameNode is stopped.
1. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms. In the alarm list, click the alarm.
2. View Location and obtain the host name of the active NameNode for which the alarm is generated and the name of the NameService where the active NameNode resides.
3. Choose Cluster > Name of the desired cluster > Services > HDFS > Instance, find the standby NameNode instance of the NameService in the instance list, and check whether its Configuration Status is Synchronized.
4. Select the standby NameNode instance, choose Start Instance, and wait until the startup is complete.
5. Wait for a NameNode metadata combination period and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 6.
Check whether the NameNode instance is working correctly.
6. Check whether Running Status of the standby NameNode instance is Normal.
7. Select the standby NameNode instance, choose More > Restart Instance, and wait until the startup is complete.
8. Wait for a NameNode metadata combination period and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 30.
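As an optional command-line cross-check of the NameNode HA state, the hdfs haadmin command can be run on a node where the HDFS client is installed. The following is only a sketch; nn1 and nn2 are placeholder NameNode instance IDs and must be replaced with the IDs actually configured for the NameService.
hdfs haadmin -getServiceState nn1  # nn1/nn2 are placeholder instance IDs
hdfs haadmin -getServiceState nn2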
Check whether the standby NameNode fails to generate a new FsImage file.
9. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS > Configurations > All Configurations, and search for and obtain the value of dfs.namenode.checkpoint.period. This value is the period of NameNode metadata combination.
10. Choose Cluster > Name of the desired cluster > Services > HDFS > Instance and obtain the service IP addresses of the active and standby NameNodes of the NameService for which the alarm is generated.
11. Click NameNode(xx,Standby) and then click Instance Configurations to obtain the value of dfs.namenode.name.dir. This value is the FsImage storage directory of the standby NameNode.
12. Log in to the standby NameNode as user root or omm.
13. Go to the FsImage storage directory and check the generation time of the newest FsImage file.
    cd Storage directory of the standby NameNode/current
    stat -c %y $(ls -t | grep "fsimage_[0-9]*$" | head -1)
14. Run the date command to obtain the current system time.
15. Calculate the time difference between the generation time of the newest FsImage file and the current system time, and check whether the time difference is greater than three times the metadata combination period.
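    The time difference can also be computed with a short script. The following is only a sketch, run in the FsImage storage directory of the standby NameNode; newestImage and imageAge are illustrative variable names, and 3600 must be replaced with the actual value of dfs.namenode.checkpoint.period.
    newestImage=$(ls -t | grep "fsimage_[0-9]*$" | head -1)
    imageAge=$(( $(date +%s) - $(stat -c %Y "$newestImage") ))
    echo "Newest FsImage is ${imageAge}s old; the alarm threshold is $((3 * 3600))s"  # 3600 = dfs.namenode.checkpoint.period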
16. The metadata combination function of the standby NameNode is faulty. Run the following commands to check whether the fault is caused by insufficient storage space.
    Go to the FsImage storage directory and check the size of the newest FsImage file (in MB).
    cd Storage directory of the standby NameNode/current
    du -m $(ls -t | grep "fsimage_[0-9]*$" | head -1) | awk '{print $1}'
17. Run the following command to check the available disk space of the standby NameNode (in MB).
    df -m ./ | awk 'END{print $4}'
18. Compare the FsImage file size and the available disk space and determine whether another FsImage file can be stored on the disk.
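    The comparison can also be scripted. The following is only a sketch, run in the FsImage storage directory of the standby NameNode; newestImage, imageSize, and freeSpace are illustrative variable names.
    newestImage=$(ls -t | grep "fsimage_[0-9]*$" | head -1)  # illustrative variable names
    imageSize=$(du -m "$newestImage" | awk '{print $1}')
    freeSpace=$(df -m ./ | awk 'END{print $4}')
    [ "$freeSpace" -gt "$imageSize" ] && echo "Enough space for another FsImage file" || echo "Insufficient space"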
19. Clear the redundant files on the disk where the directory resides to reserve sufficient space for metadata. After the clearance, wait for a NameNode metadata combination period and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 20.
Check whether the standby NameNode fails to push the FsImage file to the active NameNode.
20. Log in to the standby NameNode as user root.
21. Run the su - omm command to switch to user omm.
22. Run the following commands to check whether the standby NameNode can push a file to the active NameNode.
    tmpFile=/tmp/tmp_test_$(date +%s)
    echo "test" > $tmpFile
    scp $tmpFile Service IP address of the active NameNode:/tmp
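    After the check, the temporary file can be removed. The following is only a sketch; the remote removal assumes that password-free SSH has been configured for user omm between the NameNodes.
    rm -f $tmpFile
    ssh Service IP address of the active NameNode "rm -f $tmpFile"  # assumes password-free SSH for user omm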
23. If the standby NameNode fails to push the file to the active NameNode as user omm, contact the system administrator to handle the fault. Wait for a NameNode metadata combination period and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 24.
Check whether space on the data directory of the active NameNode is insufficient.
24. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS > Instance, click the active NameNode of the NameService for which the alarm is generated, and then click Instance Configurations to obtain the value of dfs.namenode.name.dir. This value is the FsImage storage directory of the active NameNode.
25. Log in to the active NameNode as user root or omm.
26. Go to the FsImage storage directory and check the size of the newest FsImage file (in MB).
    cd Storage directory of the active NameNode/current
    du -m $(ls -t | grep "fsimage_[0-9]*$" | head -1) | awk '{print $1}'
27. Run the following command to check the available disk space of the active NameNode (in MB).
    df -m ./ | awk 'END{print $4}'
28. Compare the FsImage file size and the available disk space and determine whether another FsImage file can be stored on the disk.
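    The same comparison sketch used for the standby NameNode can be reused here, run in the FsImage storage directory of the active NameNode; newestImage, imageSize, and freeSpace are illustrative variable names.
    newestImage=$(ls -t | grep "fsimage_[0-9]*$" | head -1)  # illustrative variable names
    imageSize=$(du -m "$newestImage" | awk '{print $1}')
    freeSpace=$(df -m ./ | awk 'END{print $4}')
    [ "$freeSpace" -gt "$imageSize" ] && echo "Enough space for another FsImage file" || echo "Insufficient space"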
29. Clear the redundant files on the disk where the directory resides to reserve sufficient space for metadata. After the clearance, wait for a NameNode metadata combination period and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 30.
Collect fault information.
30. On the FusionInsight Manager portal, choose O&M > Log > Download.
31. Select NameNode in the required cluster from the Service.
32. Click in the upper right corner, and set Start Date and End Date for log collection to 30 minutes before and after the alarm generation time, respectively. Then, click Download.
33. Contact the O&M personnel and send the collected logs.
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None