Error "Failed to place enough replicas" Is Reported When HDFS Reads or Writes Files

Updated on 2024-12-18 GMT+08:00

Symptom

When a user writes files to HDFS, the error message "Failed to place enough replicas: expected…" is reported.

Cause Analysis

  • The data receiver of the DataNode is unavailable; a quick way to check the current Xceiver count is sketched after this list.

    The DataNode log is as follows:

    2016-03-17 18:51:44,721 | WARN | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5386659f | hadoopc1h2:25009:DataXceiverServer: | DataXceiverServer.java:158
    java.io.IOException: Xceiver count 4097 exceeds the limit of concurrent xcievers: 4096
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:140)
    at java.lang.Thread.run(Thread.java:745) 
  • The disk space configured for the DataNode is insufficient.
  • DataNode heartbeats are delayed, typically because of network faults.
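
To confirm the first cause, you can read the live Xceiver count from the DataNode's JMX servlet and compare it with the configured limit. The following Python sketch is illustrative only: the host name is taken from the log above, while the HTTP port (9864, the Apache Hadoop 3 default; MRS clusters may use a different port) and the 4096 limit are assumptions to substitute for your cluster. The XceiverCount attribute is exposed by the DataNode MXBean in recent Hadoop versions.

    # Minimal sketch: poll a DataNode JMX endpoint and compare the live
    # Xceiver count with the configured dfs.datanode.max.transfer.threads.
    # Host name, HTTP port, and limit below are assumptions for illustration.
    import json
    import urllib.request

    DATANODE_JMX = ("http://hadoopc1h2:9864/jmx"
                    "?qry=Hadoop:service=DataNode,name=DataNodeInfo")
    XCEIVER_LIMIT = 4096  # assumed value of dfs.datanode.max.transfer.threads

    def xceiver_count(url):
        # The /jmx servlet returns {"beans": [{..., "XceiverCount": N, ...}]}.
        with urllib.request.urlopen(url, timeout=10) as resp:
            beans = json.load(resp)["beans"]
        return int(beans[0]["XceiverCount"])

    count = xceiver_count(DATANODE_JMX)
    print("XceiverCount=%d, limit=%d" % (count, XCEIVER_LIMIT))
    if count >= XCEIVER_LIMIT * 0.9:
        print("Data receiver threads nearly exhausted; consider raising "
              "dfs.datanode.max.transfer.threads or adding DataNodes.")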

Solution

  • If the DataNode data receiver is unavailable, increase the value of the HDFS parameter dfs.datanode.max.transfer.threads on Manager; a reference configuration snippet follows this list.
  • If disk space or CPU resources are insufficient, add DataNodes or free up disk space and CPU resources on the existing ones.
  • If the network is faulty, restore the network so that DataNode heartbeats reach the NameNode in time.
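
On Manager, dfs.datanode.max.transfer.threads is changed on the HDFS service configuration page and typically takes effect after the DataNode instances are restarted. For reference, in a plain Apache Hadoop deployment the same setting lives in hdfs-site.xml; the value 8192 below is an illustrative assumption and should be sized to the workload:

    <!-- hdfs-site.xml: raise the DataNode transfer-thread limit.
         8192 is an illustrative value; the default is 4096. -->
    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>8192</value>
    </property>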