Updated on 2025-08-19 GMT+08:00
Using HDFS
- HDFS Capacity Reaches 100%, Causing Unavailable Upper-Layer Services Such as HBase and Spark
- Error Message "Permission denied" Is Displayed When HDFS and Yarn Are Started
- An Error Is Reported When a Node Outside the Cluster Accesses MRS HDFS
- HDFS NameNode Instances Become Standby After the RPC Port Is Changed
- HDFS NameNode Failed to Start Due to Insufficient Memory
- Manually Performing Checkpoints When a NameNode Is Faulty for a Long Time
- It Takes a Long Time to Restart NameNode After a Large Number of Files Are Deleted
- NameNode Fails to Be Restarted Due to EditLog Discontinuity
- The Standby NameNode Fails to Be Started After It Is Powered Off During Metadata Storage
- The Standby NameNode Fails to Be Started Because It Is Not Started for a Long Time
- A DataNode of HDFS Is Always in the Decommissioning State
- CPU Usage of DataNodes Is Close to 100% Occasionally, Causing Node Loss
- Failed to Decommission a DataNode Due to HDFS Block Loss
- DataNode Fails to Be Started When the Number of Disks Defined in dfs.datanode.data.dir Equals the Value of dfs.datanode.failed.volumes.tolerated
- Failed to Write Files to HDFS, and Error Message "item limit of xxx is exceeded" Is Displayed
- Error "Failed to place enough replicas" Is Reported When HDFS Reads or Writes Files
- HDFS File Fails to Be Read, and Error Message "FileNotFoundException" Is Displayed
- HDFS File Read Fails, and Error Message "No common protection layer" Is Displayed
- Why Is "java.net.SocketException" Reported When Data Is Written to HDFS
- Insufficient Number of Replicas Is Reported During Highly Concurrent HDFS Writes
- HDFS Client File Fails to Be Closed After Data Writing
- Failed to Write Files Because the HDFS Directory Quota Is Insufficient
- File Fails to Be Uploaded to HDFS Due to File Errors
- HDFS Files Fail to Be Uploaded When the Client Is Installed on a Node Outside the Cluster
- Failed to Query or Delete HDFS Files
- Maximum Number of File Handles Is Set Too Small, Causing File Read and Write Exceptions
- Adjusting the Log Level of the HDFS Shell Client
- Error Message "error creating DomainSocket" Is Displayed When the HDFS Client Installed in a Normal Cluster Is Used
- Error Message "Source and target differ in block-size" Is Displayed When the distcp Command Is Executed to Copy Files Across Clusters
- An Error Is Reported When DistCP Is Used to Copy an Empty Folder
- HDFS Client Fails to Delete Excessively Long Directories
- "ArrayIndexOutOfBoundsException: 0" Occurs When HDFS Invokes getsplit of FileInputFormat
- A Large Number of Blocks Are Lost in HDFS Due to a Time Change Made Using ntpdate
- After dfs.blocksize Is Configured on the UI and Data Is Uploaded, the Block Size Does Not Change
- An Error Is Reported When the HDFS Client Is Connected Through a Public IP Address
- Failed to Use Python to Remotely Connect to the Port of HDFS
- Locating Common Balance Problems
- Uneven Data Distribution Due to Non-HDFS Data Residuals
- Uneven Data Distribution Due to HDFS Client Installation on the DataNode
- Unbalanced DataNode Disk Usage on a Node
- HDFS Displays Insufficient Disk Space But 10% Disk Space Remains
- "ALM-12027 Host PID Usage Exceeds the Threshold" Is Generated for a NameNode
- "ALM-14012 JournalNode Is Out of Synchronization" Is Generated in the Cluster