Updated on 2022-12-14 GMT+08:00
Using HDFS
- All NameNodes Enter the Standby State After the NameNode RPC Port of HDFS Is Changed
- An Error Is Reported When the HDFS Client Is Used After the Host Is Connected Using a Public Network IP Address
- Failed to Use Python to Remotely Connect to an HDFS Port
- HDFS Capacity Usage Reaches 100%, Causing Unavailable Upper-layer Services Such as HBase and Spark
- An Error Is Reported During HDFS and Yarn Startup
- HDFS Permission Setting Error
- A DataNode of HDFS Is Always in the Decommissioning State
- HDFS Failed to Start Due to Insufficient Memory
- A Large Number of Blocks Are Lost in HDFS Due to the Time Change Using ntpdate
- CPU Usage of a DataNode Reaches 100% Occasionally, Causing Node Loss (SSH Connection Is Slow or Fails)
- Manually Performing Checkpoints When a NameNode Is Faulty for a Long Time
- Common File Read/Write Faults
- Maximum Number of File Handles Is Set to a Too Small Value, Causing File Reading and Writing Exceptions
- A Client File Fails to Be Closed After Data Writing
- File Fails to Be Uploaded to HDFS Due to File Errors
- After dfs.blocksize Is Configured and Data Is Put, Block Size Remains Unchanged
- Failed to Read Files, and "FileNotFoundException" Is Displayed
- Failed to Write Files to HDFS, and "item limit of / is exceeded" Is Displayed
- Adjusting the Log Level of the Shell Client
- File Read Fails, and "No common protection layer" Is Displayed
- Failed to Write Files Because the HDFS Directory Quota Is Insufficient
- Balancing Fails, and "Source and target differ in block-size" Is Displayed
- A File Fails to Be Queried or Deleted, and the File Can Be Viewed in the Parent Directory (Invisible Characters)
- Uneven Data Distribution Due to Non-HDFS Data Residuals
- Uneven Data Distribution Due to the Client Installation on the DataNode
- Handling Unbalanced DataNode Disk Usage on Nodes
- Locating Common Balance Problems
- HDFS Displays Insufficient Disk Space But 10% Disk Space Remains
- An Error Is Reported When the HDFS Client Is Installed on the Core Node in a Common Cluster
- Client Installed on a Node Outside the Cluster Fails to Upload Files Using hdfs
- Insufficient Number of Replicas Is Reported During Highly Concurrent HDFS Writes
- HDFS Client Failed to Delete Overlong Directories
- An Error Is Reported When a Node Outside the Cluster Accesses MRS HDFS