File Fails to Be Uploaded to HDFS Due to File Errors
Updated on 2024-11-29 GMT+08:00
Symptom
The hadoop dfs -put command is used to copy local files to HDFS. After some files are uploaded, an error occurs, and the size of the temporary file shown on the native NameNode web UI no longer changes.
Cause Analysis
- Check the NameNode log /var/log/Bigdata/hdfs/nn/hadoop-omm-namenode-hostname.log. The log shows that the file was still being written when the failure occurred.
2015-07-13 10:05:07,847 | WARN | org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@36fea922 | DIR* NameSystem.internalReleaseLease: Failed to release lease for file /hive/order/OS_ORDER._8.txt._COPYING_. Committed blocks are waiting to be minimally replicated. Try again later. | FSNamesystem.java:3936
2015-07-13 10:05:07,847 | ERROR | org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@36fea922 | Cannot release the path /hive/order/OS_ORDER._8.txt._COPYING_ in the lease [Lease. Holder: DFSClient_NONMAPREDUCE_-1872896146_1, pendingcreates: 1] | LeaseManager.java:459
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* NameSystem.internalReleaseLease: Failed to release lease for file /hive/order/OS_ORDER._8.txt._COPYING_. Committed blocks are waiting to be minimally replicated. Try again later.
at FSNamesystem.internalReleaseLease(FSNamesystem.java:3937)
- Root cause: The uploaded files are damaged.
- Verification: The cp and scp commands also fail when run on the same local files, which confirms that the files are damaged.
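The verification step above can be sketched as follows. This is a minimal local check, not part of the product itself; the file name is an example, and a small sample file is created so the snippet runs as-is. If cp fails, or the checksums of the source and the copy differ, the local source file is damaged and any upload of it will fail in the same way.

```shell
# Example only: create a sample file so the snippet is runnable as-is.
printf 'order data\n' > OS_ORDER._8.txt

# Copy the file and compare checksums of the source and the copy.
# A failed cp or a checksum mismatch indicates local file damage.
if cp OS_ORDER._8.txt OS_ORDER._8.txt.verify &&
   [ "$(md5sum < OS_ORDER._8.txt)" = "$(md5sum < OS_ORDER._8.txt.verify)" ]; then
  result="intact"
else
  result="damaged"
fi
echo "local file is $result"
```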
Solution
- Obtain undamaged copies of the files and upload them again.
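Before retrying the upload, it helps to confirm that the replacement file can be read end to end, since a damaged file typically fails a full sequential read with an I/O error. The sketch below is an assumption-laden example (the file name and HDFS path are illustrative, and a sample file is created so the check runs as-is); the actual upload command in the comment mirrors the one in the symptom.

```shell
# Example only: create a sample file so the readability check runs as-is.
printf 'order data\n' > OS_ORDER._8.txt

# Read the whole file; cat exits non-zero on an I/O error,
# which indicates the local copy is still damaged.
if cat OS_ORDER._8.txt > /dev/null 2>&1; then
  action="retry upload"   # e.g. hdfs dfs -put OS_ORDER._8.txt /hive/order/
else
  action="restore file"   # re-obtain the file from its original source first
fi
echo "$action"
```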
Parent topic: Using HDFS