ALM-14003 Number of Lost HDFS Blocks Exceeds the Threshold
Alarm Description
The system checks the number of lost HDFS blocks every 30 seconds and compares it with the threshold. The lost blocks indicator has a default threshold; this alarm is generated when the number of lost blocks exceeds that threshold.
To change the threshold, choose O&M > Alarm > Thresholds > Name of the desired cluster > HDFS.
If Trigger Count is 1, this alarm is cleared when the number of lost HDFS blocks is less than or equal to the threshold. If Trigger Count is greater than 1, this alarm is cleared when the number of lost HDFS blocks is less than or equal to 90% of the threshold.
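For example, with a threshold of 100 and a Trigger Count of 2, the alarm is cleared only after the number of lost blocks drops to 90 or lower.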
Alarm Attributes
| Alarm ID | Alarm Severity | Auto Cleared |
| --- | --- | --- |
| 14003 | Major (NOTE: In MRS 3.1.5, the alarm severity is Critical.) | Yes |
Alarm Parameters
| Parameter | Description |
| --- | --- |
| Source | Specifies the cluster for which the alarm was generated. |
| ServiceName | Specifies the service for which the alarm was generated. |
| RoleName | Specifies the role for which the alarm was generated. |
| HostName | Specifies the host for which the alarm was generated. |
| NameServiceName | Specifies the NameService for which the alarm was generated. |
| Trigger Condition | Specifies the threshold for triggering the alarm. |
Impact on the System
Data stored in HDFS is lost. HDFS may enter safe mode and become unable to provide write services. The data in the lost blocks cannot be restored.
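As a quick way to confirm the impact, the following commands (a minimal sketch, assuming an HDFS client is installed and its environment has been sourced) show whether the NameNode has entered safe mode and whether the file system reports missing blocks:

```
# Check whether the NameNode is in safe mode
hdfs dfsadmin -safemode get

# Summarize file system health; the report includes missing and corrupt block counts
hdfs fsck /
```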
Possible Causes
- The DataNode instance is abnormal.
- Data is deleted.
Handling Procedure
Check the DataNode instance.

1. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > HDFS > Instance.
2. Check whether the Running Status of all DataNode instances is Normal (a command-line cross-check is sketched after these steps). If yes, go to 5. If no, go to 3.
3. Restart the abnormal DataNode instances and check whether the restart succeeds. If yes, go to 4. If no, go to 5.
4. Choose O&M > Alarm > Alarms and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 5.
Delete the damaged file.

5. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > HDFS > NameNode(Active). On the HDFS WebUI, view the information about lost blocks.
   - If a block is lost, a line in red is displayed on the WebUI.
   - NOTE: By default, the admin user does not have the permissions to manage other components. If the page cannot be opened or the displayed content is incomplete when you access the native UI of a component due to insufficient permissions, manually create a user with the permissions to manage that component.
6. Check whether the files containing the lost blocks are useful.
   NOTE: Files generated in the /mr-history, /tmp/hadoop-yarn, and /tmp/logs directories during MapReduce task execution are unnecessary.
7. Check whether the files containing the lost blocks have been backed up.
8. Log in to the HDFS client as user root and set up the client environment (a hedged command sketch follows this procedure). The user password is defined by the user before the installation; contact the MRS cluster administrator to obtain it.
9. On the client node, run hdfs fsck / -delete to delete the files with lost blocks. If a deleted file is useful, write it again to restore the data.
   NOTICE: Deleting a file or folder is a high-risk operation. Ensure that the file or folder is no longer required before performing this operation.
10. Choose O&M > Alarm > Alarms and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 11.
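The following sketch walks through steps 8 and 9 on the client node. The client installation directory /opt/client and the Kerberos user hdfs are assumptions; substitute the values used in your cluster, and skip kinit in normal (non-security) mode:

```
# Step 8: set up the client environment (path and user are assumptions)
cd /opt/client
source bigdata_env
kinit hdfs            # security mode only

# List the files that contain lost or corrupt blocks before deleting anything
hdfs fsck / -list-corruptfileblocks

# Step 9: delete the files whose blocks are lost (high-risk; see the NOTICE above)
hdfs fsck / -delete
```

Listing the corrupt file blocks first lets you confirm, per step 6, which of the affected files are actually worth restoring before running the destructive delete.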
Collect the fault information.

11. On FusionInsight Manager, choose O&M > Log > Download.
12. Expand the drop-down list next to the Service field. In the Services dialog box that is displayed, select HDFS for the target cluster.
13. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
14. Contact O&M personnel and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None