Common File Read/Write Faults
Updated on 2022-02-22 GMT+08:00
Symptom
When a user writes data to HDFS, the message "Failed to place enough replicas: expected…" is displayed.
Cause Analysis
- The data receiver of the DataNode is unavailable.
The DataNode log is as follows:
2016-03-17 18:51:44,721 | WARN | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5386659f | hadoopc1h2:25009:DataXceiverServer: | DataXceiverServer.java:158
java.io.IOException: Xceiver count 4097 exceeds the limit of concurrent xcievers: 4096
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:140)
        at java.lang.Thread.run(Thread.java:745)
- The disk space configured for the DataNode is insufficient.
- DataNode heartbeats are delayed.
Solution
- If the DataNode data receiver is unavailable, increase the value of the HDFS parameter dfs.datanode.max.transfer.threads on Manager (see the example configuration after this list).
- If disk space or CPU resources are insufficient, add DataNodes or free up disk space and CPU resources on the existing nodes.
- If the network is faulty, restore network connectivity so that DataNode heartbeats are no longer delayed.
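The first solution can be applied as shown below. This is a minimal sketch of the hdfs-site.xml property that backs the Manager setting; on an MRS cluster the value should be changed on Manager rather than by editing the file manually, the value 8192 is only an example, and the change typically takes effect only after the affected DataNode role instances are restarted.

<!-- hdfs-site.xml: raise the per-DataNode limit of concurrent data transfer threads (Xceivers). -->
<!-- Example value only; the log above shows the default limit of 4096 being exceeded. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>

After the change, the effective value can be verified on a cluster node with hdfs getconf -confKey dfs.datanode.max.transfer.threads.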
Parent topic: Using HDFS