Application Scenarios
MapReduce is a programming model for parallel computation over large data sets (typically larger than 1 TB); its input and output files are stored in HDFS. MapReduce is recommended when the data being processed is too large to fit in memory.
If MapReduce is not required, Spark is recommended instead.
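To illustrate the programming model itself (not the Hadoop API), the following is a minimal word-count sketch of the three MapReduce phases: map each input split into key-value pairs, shuffle the pairs by key, then reduce each group to an aggregate. The function names and the in-memory lists here are illustrative only; a real job distributes these phases across a cluster and streams data through HDFS.

```python
from collections import defaultdict

def map_phase(split):
    # Map: emit a (word, 1) pair for every word in the input split.
    for word in split.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: aggregate the values collected for each key.
    return {key: sum(values) for key, values in grouped.items()}

# Two input splits, as a file in HDFS would be divided into blocks.
splits = ["big data big compute", "big data"]
pairs = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle(pairs))
# counts == {"big": 3, "data": 2, "compute": 1}
```

Because each split is mapped independently and each key is reduced independently, both phases can run in parallel across machines, which is what lets MapReduce scale to data sets that exceed a single node's memory.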