
A Large Number of Blocks Are Lost in HDFS due to the Time Change Using ntpdate

Symptom

  1. A user runs ntpdate to change the time on a cluster that has not been stopped. After the time is changed, HDFS enters safe mode and cannot be started.
  2. After the system exits safe mode and starts, the fsck check shows that about 1 TB of data is lost. (Both symptoms can be confirmed from the command line, as shown in the sketch after this list.)
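
A minimal sketch for confirming both symptoms from a cluster client node. The commands are standard HDFS CLI; the client environment path /opt/client/bigdata_env is an assumed install location, so adjust it for your deployment.

  # Load the cluster client environment (path is an assumption).
  source /opt/client/bigdata_env

  # Check whether the NameNode is stuck in safe mode.
  hdfs dfsadmin -safemode get

  # Run a file system check and keep only lines about missing or corrupt blocks.
  hdfs fsck / | grep -iE "missing|corrupt"

On a healthy cluster, fsck reports the file system as HEALTHY and the missing/corrupt counters are all 0.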

Cause Analysis

  1. A large number of blocks are lost on the native NameNode page.
    Figure 1 Block loss
  2. DataNode information on the native page shows that the number of DataNodes displayed is 10 fewer than the actual number of DataNodes.
    Figure 2 Checking the number of DataNodes
  3. Check the DataNode run log file /var/log/Bigdata/hdfs/dn/hadoop-omm-datanode-hostname.log. The following error information is displayed (it can also be located with a log search, as shown in the sketch after this list):

    Major error information: Clock skew too great

    Figure 3 DataNode run log error
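
"Clock skew too great" is a Kerberos authentication error: when the clock on a node drifts from the KDC by more than the allowed skew (300 seconds by default), Kerberos rejects the node's tickets, so the affected DataNodes can no longer authenticate and their blocks are reported as lost. The following is a quick way to locate the error and measure the skew on a suspect node; the NTP server address ntp.example.com is a placeholder for your actual time source, and the log file name follows the pattern given above.

  # Search the DataNode run logs for the Kerberos clock-skew error.
  grep "Clock skew too great" /var/log/Bigdata/hdfs/dn/hadoop-omm-datanode-*.log

  # Compare the local clock with the NTP server without changing it (-q queries only).
  date
  ntpdate -q ntp.example.com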

Solution

  1. Correct the time on the 10 DataNodes that cannot be viewed on the native page, for example by resynchronizing them with the NTP server (see the sketch after this list).
  2. On Manager, restart the DataNode instances.
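
A hedged sketch of step 1 on one affected DataNode, followed by a post-restart check from a client node; ntp.example.com again stands in for your actual NTP server, and the restart itself is done in the Manager web UI.

  # On each affected DataNode (as root): resynchronize the clock with the NTP server.
  ntpdate ntp.example.com

  # After restarting the DataNode instances on Manager, verify that all DataNodes
  # report as live and that no blocks are still missing.
  hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes"
  hdfs fsck / | grep -iE "missing|corrupt"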