HDFS Client Failed to Delete Overlong Directories
Symptom
When a user runs the hadoop fs -rm -r -f obs://<obs_path> command to delete an OBS directory with an overlong path name, the following error message is displayed:
2022-02-28 17:12:45,605 INFO internal.RestStorageService: OkHttp cost 19 ms to apply http request 2022-02-28 17:12:45,606 WARN internal.RestStorageService: Request failed, Response code: 400; Request ID: 0000017F3F9A8545401491602FC8CAD9; Request path: http://wordcount01-fcq.obs.xxx.xxxx.xxx.com/user%2Froot%2F.Trash%2FCurrent%2Ftest1%2F12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456 2022-02-28 17:12:45,606 WARN services.AbstractClient: Storage|1|HTTP+XML|getObjectMetadata||||2022-02-28 17:12:45|2022-02-28 17:12:45|||400| 2022-02-28 17:12:45,607 INFO log.AccessLogger: 2022-02-28 17:12:45 605|com.obs.services.internal.RestStorageService|executeRequest|560|OkHttp cost 19 ms to apply http request 2022-02-28 17:12:45 606|com.obs.services.internal.RestStorageService|handleThrowable|221|Request failed, Response code: 400; Request ID: 0000017F3F9A8545401491602FC8CAD9; Request path: http://wordcount01-fcq.obs.xxx.xxxxx.xxx.com/user%2Froot%2F.Trash%2FCurrent%2Ftest1%2F12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456 2022-02-28 17:12:45 606|com.obs.services.AbstractClient|doActionWithResult|404|Storage|1|HTTP+XML|getObjectMetadata||||2022-02-28 17:12:45|2022-02-28 17:12:45|||400|
Cause Analysis
When you run the rm command to delete data from HDFS, the files or directories are not removed immediately. Instead, they are moved to the current trash directory (/user/${username}/.Trash/Current) in the recycle bin. For an OBS directory whose path is already overlong, prefixing it with the trash directory makes the resulting object name even longer than the length OBS accepts, so the move request is rejected with the HTTP 400 error shown above.
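The arithmetic below is only an illustration of this failure mode: the 1024-character object-name limit and the sample names are assumptions made for the sketch, not values taken from this page.

// Illustrative sketch: how the move to .Trash/Current lengthens an already long
// object name. ASSUMPTION: the object store accepts names of at most 1024 characters.
public class TrashPathLength {
    private static final int ASSUMED_NAME_LIMIT = 1024;

    public static void main(String[] args) {
        String longName = "1234567890".repeat(100);               // 1000-character directory name (Java 11+)
        String original = "/test1/" + longName;                   // 1007 characters before deletion
        String trashed  = "/user/root/.Trash/Current" + original; // 1032 characters after the rm move

        System.out.println("original length: " + original.length());
        System.out.println("trashed length : " + trashed.length());
        System.out.println("over the assumed limit: " + (trashed.length() > ASSUMED_NAME_LIMIT));
    }
}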
Solution
Use the -skipTrash option to bypass the HDFS recycle bin and delete the data directly. To allow this option, set the dfs.client.skipTrash.enabled configuration item of the HDFS client to true, as described in the following steps.
1. Log in to any master node in the cluster as user root.
2. Run the following command to edit the hdfs-site.xml file used by the HDFS client:
vim Client installation directory/HDFS/hadoop/etc/hadoop/hdfs-site.xml
3. Add the following content to the hdfs-site.xml file:
<property>
    <name>dfs.client.skipTrash.enabled</name>
    <value>true</value>
</property>
4. Run the following command to delete the overlong OBS directory:
hadoop fs -rm -r -f -skipTrash obs://<obs_path>
5. Log in to each of the other master nodes in the cluster in turn and repeat Steps 2 to 4 until the operations are complete on all master nodes.
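If the data is deleted from application code instead of the hadoop shell, the HDFS FileSystem API does not use the recycle bin at all: FileSystem.delete() removes the path directly. Below is a minimal sketch, assuming the OBS client (OBSA) jars and the site configuration files are on the classpath and using obs://example-bucket/overlong/dir as a placeholder path.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteWithoutTrash {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();                 // reads core-site.xml/hdfs-site.xml from the classpath
        Path dir = new Path("obs://example-bucket/overlong/dir"); // placeholder path, not from this page
        FileSystem fs = FileSystem.get(dir.toUri(), conf);
        boolean deleted = fs.delete(dir, true);                   // true = recursive; deletes directly, no move to .Trash
        System.out.println("deleted: " + deleted);
    }
}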