Application Scenarios

Updated on 2022-09-14 GMT+08:00

The Hadoop Distributed File System (HDFS) runs on commodity hardware. It provides high fault tolerance and high data access throughput, making it suitable for applications that involve large-scale data sets.

HDFS is applicable to the following scenarios:

  • Massive data processing (TB- or PB-scale data sets)
  • Scenarios that demand high throughput
  • Scenarios that demand high reliability
  • Scenarios that demand good scalability

HDFS is not suitable for scenarios that involve large numbers of small files, random writes, or low-latency reads.
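The small-file limitation can be made concrete with a rough calculation. Each file, directory, and block in HDFS is held as an object in NameNode heap memory; a commonly cited rule of thumb is about 150 bytes per object (an assumption used here for illustration, not an exact figure). The sketch below compares storing the same data volume as many small files versus a few large files:

```python
# Rough estimate of NameNode heap consumed by file metadata.
# Assumption: ~150 bytes of heap per namespace object (file or block),
# a widely cited rule of thumb rather than an exact measurement.
BYTES_PER_OBJECT = 150

def namenode_heap_bytes(num_files, blocks_per_file=1):
    """Each file contributes one file object plus one object per block."""
    objects = num_files * (1 + blocks_per_file)
    return objects * BYTES_PER_OBJECT

# Case 1: 100 million 1 MB files (one block each) -> 200 million objects,
# roughly 28 GiB of NameNode heap for metadata alone.
small = namenode_heap_bytes(100_000_000)

# Case 2: comparable data volume in 800 large files of 1024 blocks
# (128 MB default block size) -> about 820,000 objects, ~0.1 GiB.
large = namenode_heap_bytes(800, blocks_per_file=1024)

print(f"small files: {small / 2**30:.1f} GiB")
print(f"large files: {large / 2**30:.1f} GiB")
```

Under these assumptions, the small-file layout needs hundreds of times more NameNode memory for the same data volume, which is why HDFS favors a small number of large files.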
