An Error Is Reported When a Large Amount of Data Is Written to HDFS
Symptom
"NotReplicatedYet Exception: Not replicated yet" is occasionally reported when a large amount of data is written to HDFS.
Answer
The possible causes are as follows:
- The HDFS client sends a request to the NameNode to allocate a new block, but the NameNode does not process the request in time and the request times out.
- DataNodes report incremental block information too slowly, so the NameNode cannot allocate new blocks in a timely manner.
If this error occurs, the job does not fail immediately; it is reported as abnormal only after the number of retries exceeds the threshold. To reduce the impact, increase the value of the HDFS client parameter dfs.client.block.write.retries, for example, to 10.
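As a minimal sketch, the parameter can also be raised programmatically on the client before the FileSystem handle is created. The class name RetryConfigExample is illustrative; dfs.client.block.write.retries is the standard HDFS client setting (default 3), and the value 10 follows the suggestion above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RetryConfigExample {
    public static void main(String[] args) throws Exception {
        // Raise the client-side retry count for new-block allocation.
        // The default is 3; 10 gives the NameNode more time to catch up
        // on incremental block reports before the write is abandoned.
        Configuration conf = new Configuration();
        conf.setInt("dfs.client.block.write.retries", 10);
        try (FileSystem fs = FileSystem.get(conf)) {
            // Writes made through fs now retry block allocation up to
            // 10 times before surfacing NotReplicatedYetException.
            System.out.println("dfs.client.block.write.retries = "
                    + conf.get("dfs.client.block.write.retries"));
        }
    }
}

Alternatively, the same value can be set in hdfs-site.xml on the client so that all jobs pick it up without code changes.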