Maximum Number of File Handles Is Set to a Value That Is Too Small, Causing File Read and Write Exceptions
Updated on 2023-11-30 GMT+08:00
Symptom
The maximum number of file handles is set to a value that is too small, so the node runs out of file handles. As a result, writing files to HDFS is slow or fails.
Cause Analysis
- The DataNode log /var/log/Bigdata/hdfs/dn/hadoop-omm-datanode-XXX.log contains the exception "java.io.IOException: Too many open files".
2016-05-19 17:18:59,126 | WARN | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@142ff9fa | YSDN12:25009:DataXceiverServer: | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:160)
java.io.IOException: Too many open files
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:134)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:137)
        at java.lang.Thread.run(Thread.java:745)
- The error indicates that the node has run out of file handles. New file handles cannot be opened on the node, so data is written to other DataNodes instead. As a result, writing files is slow or fails.
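To confirm on the node that the DataNode process is approaching its limit, you can compare its current number of open descriptors with the per-process limit in effect. The following is a minimal sketch; looking up the DataNode PID by its main class name with pgrep is an assumption about the environment, and your cluster may provide other ways to obtain the PID.
    # Find the DataNode PID (assumed lookup by main class name).
    DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode | head -n 1)
    # Count the file descriptors the process currently holds open.
    ls /proc/${DN_PID}/fd | wc -l
    # Show the open-file limit applied to this process.
    grep "Max open files" /proc/${DN_PID}/limits
If the open descriptor count is close to the "Max open files" value, the limit described in this section is the likely cause.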
Solution
- Run the ulimit -a command to check the maximum number of file handles set for the involved node. If the value is too small, change it to 640000 (see the command sketch after these steps).
Figure 1 Checking the number of file handles
- Run the vi /etc/security/limits.d/90-nofile.conf command to edit the file and change the number of file handles. If the file does not exist, create it and modify it as shown in Figure 2 and in the sketch after these steps.
Figure 2 Changing the number of file handles
- Open another terminal and run the ulimit -a command to check whether the modification has taken effect. If it has not, repeat the preceding steps.
- Restart the DataNode instance on Manager.
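For reference, a minimal shell sketch of steps 1 to 3 follows. It assumes root access on the affected node and uses the file name /etc/security/limits.d/90-nofile.conf and the value 640000 from the steps above; the procedure uses ulimit -a, while ulimit -n shown here prints only the open-file limit. Treat the snippet as an illustration, not a definitive configuration.
    # Run as root on the affected node.
    # Step 1: check the current open-file limit; a small value such as 1024 indicates the problem.
    ulimit -n
    # Step 2: set the soft and hard limits to 640000 for all users.
    # This overwrites /etc/security/limits.d/90-nofile.conf (creating it if it does not exist).
    cat > /etc/security/limits.d/90-nofile.conf <<'EOF'
    * soft nofile 640000
    * hard nofile 640000
    EOF
    # Step 3: open a new terminal (the change applies only to new sessions) and verify.
    ulimit -n    # expected output: 640000
After the new limit is confirmed, restart the DataNode instance on Manager so that the DataNode process picks up the higher limit.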