
Scenario Description and Development Guidelines

Scenario Description

This section presents a typical application scenario so that you can quickly learn the HDFS development process and understand the key interface functions.

HDFS operates on files. The sample code covers the following file operations: creating a folder, writing data to a file, appending file content, reading a file, and deleting a file or folder. Based on the sample code, you can learn how to perform other HDFS operations, such as setting file access permissions.
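For instance, setting file access permissions reduces to a single FileSystem call. The following is a minimal sketch, assuming an already initialized FileSystem instance and an example file path (both hypothetical here); the initialization itself is covered in Initializing HDFS.

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    // Sketch: set rwxr-x--- (750) on an example file.
    // "fileSystem" is assumed to be an initialized FileSystem instance.
    private static void setFilePermission(FileSystem fileSystem, String filePath)
        throws IOException {
      Path file = new Path(filePath);
      fileSystem.setPermission(file, new FsPermission("750"));
    }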

Sample codes are described in the following sequence:

  1. Initialize HDFS. For details, see Initializing HDFS. (A minimal initialization sketch follows this list.)
  2. Write data to a file. For details, see Writing Data to a File.
  3. Append file content. For details, see Appending File Content.
  4. Read a file. For details, see Reading a File.
  5. Delete a file. For details, see Deleting a File.
  6. Use Colocation. For details, see Colocation.
  7. Set storage policies. For details, see Setting Storage Policies.
  8. Access OBS. For details, see Accessing OBS.
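Initialization (step 1) typically amounts to loading the client configuration and obtaining a FileSystem handle, as in the minimal sketch below. The method name is illustrative, and the Kerberos login shown in the comment applies only to security-mode clusters; see Initializing HDFS for the complete procedure.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Minimal initialization sketch: load the client configuration
    // (core-site.xml and hdfs-site.xml on the classpath) and get a FileSystem handle.
    private static FileSystem initFileSystem() throws IOException {
      Configuration conf = new Configuration();
      // On a security-mode cluster, authenticate first, for example:
      // UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
      return FileSystem.get(conf);
    }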

Development Guidelines

Determine the functions to be developed based on the preceding scenario description. The following example, divided into seven parts, describes how to upload, query, append, and delete the information about a new employee; a condensed code sketch follows the list.

  1. Complete Kerberos authentication.
  2. Call the mkdir API in fileSystem to create a directory.
  3. Call the dowrite API of HdfsWriter to write information.
  4. Call the open API in fileSystem to read the file.
  5. Call the doAppend API of HdfsWriter to append information.
  6. Call the deleteOnExit API in fileSystem to delete the file.
  7. Call the delete API in fileSystem to delete the folder.
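A condensed sketch of steps 2 to 7 using the org.apache.hadoop.fs.FileSystem API directly is shown below. It assumes Kerberos authentication and initialization (step 1) have already been completed and that fileSystem holds the resulting instance; in the actual sample code, the write and append calls are wrapped by the HdfsWriter class, and the directory, file name, and employee record used here are illustrative.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Condensed sketch of steps 2-7; paths and record content are example values.
    private static void runEmployeeExample(FileSystem fileSystem) throws IOException {
      Path dir = new Path("/user/hdfs-examples");
      Path file = new Path(dir, "employee_info.txt");

      fileSystem.mkdirs(dir);                                  // 2. create the directory

      try (FSDataOutputStream out = fileSystem.create(file)) { // 3. write the record
        out.write("name=Zhang San,age=25\n".getBytes(StandardCharsets.UTF_8));
        out.hsync();
      }

      try (FSDataInputStream in = fileSystem.open(file);       // 4. read the file
           BufferedReader reader =
               new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
        String line;
        while ((line = reader.readLine()) != null) {
          System.out.println(line);
        }
      }

      try (FSDataOutputStream out = fileSystem.append(file)) { // 5. append more content
        out.write("dept=IT\n".getBytes(StandardCharsets.UTF_8));
        out.hsync();
      }

      fileSystem.deleteOnExit(file);                           // 6. delete the file on exit
      fileSystem.delete(dir, true);                            // 7. delete the folder recursively
    }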