Updated: 2022-02-24 GMT+08:00
Uploading a File to HDFS Fails Due to a Damaged File
Problem Background and Symptoms
Running hadoop dfs -put to copy a local file to HDFS reports an error.
After part of the file is uploaded, the operation fails. On the NameNode native web UI, the size of the temporary file no longer changes.
Cause Analysis
- Check the NameNode log "/var/log/Bigdata/hdfs/nn/hadoop-omm-namenode-<hostname>.log". It shows that writes to the file are retried repeatedly until they finally fail.
2015-07-13 10:05:07,847 | WARN | org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@36fea922 | DIR* NameSystem.internalReleaseLease: Failed to release lease for file /hive/order/OS_ORDER._8.txt._COPYING_. Committed blocks are waiting to be minimally replicated. Try again later. | FSNamesystem.java:3936
2015-07-13 10:05:07,847 | ERROR | org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@36fea922 | Cannot release the path /hive/order/OS_ORDER._8.txt._COPYING_ in the lease [Lease. Holder: DFSClient_NONMAPREDUCE_-1872896146_1, pendingcreates: 1] | LeaseManager.java:459
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* NameSystem.internalReleaseLease: Failed to release lease for file /hive/order/OS_ORDER._8.txt._COPYING_. Committed blocks are waiting to be minimally replicated. Try again later.
at FSNamesystem.internalReleaseLease(FSNamesystem.java:3937)
- Root cause: the file being uploaded is damaged, so the upload fails.
- Verification: copying the file locally with cp, or to another node with scp, also fails, confirming that the file itself is damaged.
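The verification step above can be sketched as a small local check. This is a minimal sketch: the file path here is a temporary stand-in created for illustration; substitute the actual file that failed to upload (e.g. OS_ORDER._8.txt).

```shell
# Stand-in for the real source file; replace with the file that failed to upload.
SRC=$(mktemp)
echo "sample data" > "$SRC"

# Step 1: try a plain local copy; a damaged/unreadable file fails here too.
if cp "$SRC" "${SRC}.copy"; then
    echo "cp OK"
else
    echo "cp FAILED: file is likely damaged" >&2
fi

# Step 2: compare checksums of the source and the copy to confirm the bytes match.
if [ "$(md5sum < "$SRC")" = "$(md5sum < "${SRC}.copy")" ]; then
    echo "checksums match"
else
    echo "checksum mismatch" >&2
fi

# Clean up the illustration files.
rm -f "$SRC" "${SRC}.copy"
```

If either step fails for the real file, the local copy is damaged and must be regenerated or restored before retrying the upload.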
Solution
- This problem is caused by damage to the file itself. Upload an intact copy of the file instead.
Parent topic: Using HDFS