HDFS Development Plan
Scenario Description
This section helps you quickly learn the HDFS development process and the key interface functions used in a typical application scenario.
HDFS operates on files. The file operations covered by the sample code include creating a folder, writing data to a file, appending file content, reading a file, and deleting a file or folder. Based on the sample code, you can learn how to perform other HDFS operations, such as setting file access permissions.
The sample code is described in the following sequence:
- Initialize HDFS. For details, see Initializing HDFS.
- Write data to a file. For details, see Writing Data to an HDFS File.
- Append file content. For details, see Appending HDFS File Content.
- Read a file. For details, see Reading an HDFS File.
- Delete a file. For details, see Deleting an HDFS File.
- Use Colocation. For details, see HDFS Colocation.
- Set storage policies. For details, see Setting HDFS Storage Policies.
- Access OBS. For details, see Using HDFS to Access OBS.
Development Guidelines
Determine the functions to be developed based on the preceding scenario description. The following example describes, in seven parts, how to upload, query, append, and delete information about a new employee:
- Pass the Kerberos authentication.
- Call the mkdirs API in fileSystem to create a directory.
- Call the doWrite API of HdfsWriter to write information.
- Call the open API in fileSystem to read the file.
- Call the doAppend API of HdfsWriter to append information.
- Call the deleteOnExit API in fileSystem to delete the file.
- Call the delete API in fileSystem to delete the folder.
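The seven parts above can be sketched with the standard Hadoop FileSystem client API. This is a minimal illustration, not the full sample code: the principal, keytab path, directory, file name, and file content are placeholder assumptions, and the HdfsWriter helper from the sample is replaced here with direct create/append calls.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // 1. Pass the Kerberos authentication
        //    (principal and keytab path are placeholders).
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "hdfsuser@EXAMPLE.COM", "/opt/keytab/user.keytab");

        FileSystem fileSystem = FileSystem.get(conf);
        Path dir = new Path("/user/hdfs-example");
        Path file = new Path(dir, "employee_info.txt");

        // 2. Create a directory.
        fileSystem.mkdirs(dir);

        // 3. Write the new employee's information
        //    (the sample's HdfsWriter.doWrite wraps a call like this).
        try (FSDataOutputStream out = fileSystem.create(file)) {
            out.write("name=Zhang San,age=25\n".getBytes(StandardCharsets.UTF_8));
        }

        // 4. Open and read the file.
        try (FSDataInputStream in = fileSystem.open(file);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }

        // 5. Append more information
        //    (the sample's HdfsWriter.doAppend wraps a call like this).
        try (FSDataOutputStream out = fileSystem.append(file)) {
            out.write("dept=Engineering\n".getBytes(StandardCharsets.UTF_8));
        }

        // 6. Delete the file (false: not recursive).
        fileSystem.delete(file, false);

        // 7. Delete the folder (true: recursive).
        fileSystem.delete(dir, true);

        fileSystem.close();
    }
}
```

Compiling and running this sketch requires the hadoop-client dependency on the classpath and a reachable, Kerberos-enabled HDFS cluster; the append step additionally requires append support to be enabled on the cluster.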