Failed to Write Files to HDFS, and "item limit of / is exceeded" Is Displayed
Symptom
The client or upper-layer component logs indicate that a file fails to be written to a directory on HDFS. The error information is as follows:
The directory item limit of /tmp is exceeded: limit=5 items=5.
Cause Analysis
- The client logs or the NameNode run log file /var/log/Bigdata/hdfs/nn/hadoop-omm-namenode-XXX.log contains the error "The directory item limit of /tmp is exceeded", indicating that the number of items directly under the /tmp directory exceeds 1048576.
2018-03-14 11:18:21,625 | WARN | IPC Server handler 62 on 25000 | DIR* NameSystem.startFile: /tmp/test.txt The directory item limit of /tmp is exceeded: limit=1048576 items=1048577 | FSNamesystem.java:2334
- The dfs.namenode.fs-limits.max-directory-items parameter specifies the maximum number of items (files and subdirectories) that a single directory can directly contain; items inside nested subdirectories are not counted. The default value is 1048576, and the value ranges from 1 to 6400000. A quick way to check how many items a directory currently contains is sketched after this list.
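The following is a minimal sketch, not part of the product documentation, that counts the direct children of a directory with the Hadoop FileSystem API so you can compare the count against the limit. It assumes the HDFS client configuration files are on the classpath; the class name DirectoryItemCount and the default path /tmp (taken from the log above) are illustrative only, and the limit printed is the client-side view, which may differ from the value configured on the NameNode.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: count the items directly under an HDFS directory and compare the
// count with dfs.namenode.fs-limits.max-directory-items.
// Assumes core-site.xml/hdfs-site.xml are available on the classpath.
public class DirectoryItemCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Directory from the error message; pass the directory reported in your log.
        Path dir = new Path(args.length > 0 ? args[0] : "/tmp");

        try (FileSystem fs = FileSystem.get(conf)) {
            // listStatus() returns only the direct children (non-recursive),
            // which is exactly what the directory item limit counts.
            int items = fs.listStatus(dir).length;

            // Client-side view of the limit; the effective value is the one
            // configured on the NameNode and may differ.
            int limit = conf.getInt("dfs.namenode.fs-limits.max-directory-items", 1048576);

            System.out.printf("%s: items=%d, configured limit (client view)=%d%n", dir, items, limit);
        }
    }
}
```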
Solution
- Check whether it is expected for the directory to directly contain more than one million items. If it is, increase the value of the HDFS parameter dfs.namenode.fs-limits.max-directory-items and restart the HDFS NameNode for the modification to take effect.
- If it is not, delete the unnecessary files or subdirectories. One way to identify stale entries for review is sketched after this list.
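As an aid to the second option, the following sketch, again not part of the product documentation, lists the oldest direct children of the affected directory so an operator can review candidates for deletion. The class name OldestDirectoryItems, the default path /tmp, and the assumption that the oldest entries are the stale ones are all illustrative; what to delete remains the operator's decision.

```java
import java.util.Arrays;
import java.util.Comparator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: print the oldest direct children of a directory so that stale,
// unnecessary entries can be reviewed before deletion.
public class OldestDirectoryItems {
    public static void main(String[] args) throws Exception {
        Path dir = new Path(args.length > 0 ? args[0] : "/tmp"); // directory from the error message
        int showTop = 20;                                        // number of oldest entries to print

        try (FileSystem fs = FileSystem.get(new Configuration())) {
            FileStatus[] children = fs.listStatus(dir);          // direct children only
            Arrays.sort(children, Comparator.comparingLong(FileStatus::getModificationTime));

            for (int i = 0; i < Math.min(showTop, children.length); i++) {
                FileStatus s = children[i];
                System.out.printf("%tF %<tT  %s%n", s.getModificationTime(), s.getPath());
            }
            // After review, an entry can be removed with:
            // fs.delete(pathToRemove, true);  // recursive delete; use with care
        }
    }
}
```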
Parent topic: Using HDFS