
File Fails to Be Uploaded to HDFS Due to File Errors

Updated on 2022-09-14 GMT+08:00

Symptom

The hadoop dfs -put command is used to copy local files to HDFS.

After some files are uploaded, an error occurs, and the size of the temporary file displayed on the native NameNode web UI no longer changes.

Cause Analysis

  1. Check the NameNode log /var/log/Bigdata/hdfs/nn/hadoop-omm-namenode-hostname.log. The log shows that the file was still being written when the failure occurred.
    2015-07-13 10:05:07,847 | WARN  | org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@36fea922 | DIR* NameSystem.internalReleaseLease: Failed to release lease for file /hive/order/OS_ORDER._8.txt._COPYING_. Committed blocks are waiting to be minimally replicated. Try again later. | FSNamesystem.java:3936
    2015-07-13 10:05:07,847 | ERROR | org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@36fea922 | Cannot release the path /hive/order/OS_ORDER._8.txt._COPYING_ in the lease [Lease.  Holder: DFSClient_NONMAPREDUCE_-1872896146_1, pendingcreates: 1] | LeaseManager.java:459
    org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* NameSystem.internalReleaseLease: Failed to release lease for file /hive/order/OS_ORDER._8.txt._COPYING_. Committed blocks are waiting to be minimally replicated. Try again later.
    at FSNamesystem.internalReleaseLease(FSNamesystem.java:3937)
  2. Root cause: The uploaded files are damaged.
  3. Verification: Attempts to copy the files with cp or scp also fail, which confirms that the files are damaged.
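The verification step above can be sketched as follows. This is a minimal, hedged example: the file names are stand-ins created with mktemp, not paths from the affected environment. A damaged file typically makes cp fail with a read error, or produces a copy whose checksum does not match the original.

```shell
# Verification sketch: copy the suspect file and compare checksums.
# With an intact file, cp succeeds and the checksums match.
set -e
src=$(mktemp)                      # stands in for the local source file
printf 'order data\n' > "$src"

cp "$src" "${src}.copy"            # a read error here indicates damage
orig_sum=$(md5sum "$src" | awk '{print $1}')
copy_sum=$(md5sum "${src}.copy" | awk '{print $1}')

if [ "$orig_sum" = "$copy_sum" ]; then
    echo "file readable and checksums match"
else
    echo "file damaged"
fi
rm -f "$src" "${src}.copy"
```

Running this against each file to be uploaded identifies damaged files before they reach HDFS.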

Solution

  1. Obtain intact copies of the files and upload them again.
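After re-obtaining an intact file, its integrity can be confirmed end to end by comparing a local checksum with a checksum computed over the HDFS copy. The sketch below records the local checksum; the hadoop commands are shown as comments because they require a running cluster, and the HDFS path is an example taken from the log above, not a prescribed location.

```shell
# Record the local checksum of the intact file before uploading.
set -e
src=$(mktemp)                      # stands in for the repaired source file
printf 'order data\n' > "$src"
local_sum=$(md5sum "$src" | awk '{print $1}')

# On a cluster, upload and compare checksums end to end (assumed workflow):
#   hadoop fs -put "$src" /hive/order/OS_ORDER._8.txt
#   hdfs_sum=$(hadoop fs -cat /hive/order/OS_ORDER._8.txt | md5sum | awk '{print $1}')
#   [ "$local_sum" = "$hdfs_sum" ] && echo "upload verified"

echo "local checksum: $local_sum"
rm -f "$src"
```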