Application Scenarios

Updated on 2022-11-18 GMT+08:00

MapReduce input and output files are stored in HDFS. MapReduce is a programming model for parallel computation over large data sets (larger than 1 TB). Use MapReduce when the data being processed cannot fit in memory.
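The map/shuffle/reduce flow described above can be sketched in plain Python (this is an illustration of the programming model, not the Hadoop API; the input lines and the word-count task are assumptions for the example):

```python
from collections import defaultdict

# Hypothetical input records; in a real job these would be read from HDFS splits.
lines = ["hello world", "hello mapreduce"]

# Map phase: emit (key, value) pairs for each input record.
def mapper(line):
    for word in line.split():
        yield word, 1

# Shuffle phase: group all emitted values by key.
groups = defaultdict(list)
for line in lines:
    for key, value in mapper(line):
        groups[key].append(value)

# Reduce phase: aggregate the grouped values for each key.
def reducer(key, values):
    return key, sum(values)

counts = dict(reducer(k, v) for k, v in groups.items())
print(counts)  # {'hello': 2, 'world': 1, 'mapreduce': 1}
```

In a real cluster, the map and reduce phases run in parallel across many nodes, and the framework performs the shuffle over the network.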

If MapReduce is not required, Spark is recommended instead.
