MapReduce Development Plan
Scenario Description
Develop a MapReduce application that processes logs recording how long netizens dwell on online shopping, and performs the following operations:
- Collect statistics on female netizens who spend more than 2 hours on online shopping over the weekend.
- The first column in the log file records names, the second column records gender, and the third column records the dwell duration in minutes. The three columns are separated by commas (,).
log1.txt: logs collected on Saturday
LiuYang,female,20
YuanJing,male,10
GuoYijun,male,5
CaiXuyu,female,50
Liyuan,male,20
FangBo,female,50
LiuYang,female,20
YuanJing,male,10
GuoYijun,male,50
CaiXuyu,female,50
FangBo,female,60
log2.txt: logs collected on Sunday
LiuYang,female,20
YuanJing,male,10
CaiXuyu,female,50
FangBo,female,50
GuoYijun,male,5
CaiXuyu,female,50
Liyuan,male,20
CaiXuyu,female,50
FangBo,female,50
LiuYang,female,20
YuanJing,male,10
FangBo,female,50
GuoYijun,male,50
CaiXuyu,female,50
FangBo,female,60
Data Planning
Save the original log files in HDFS.
- Create two text files, input_data1.txt and input_data2.txt, on the local host, and copy the content of log1.txt into input_data1.txt and the content of log2.txt into input_data2.txt.
- Create the /tmp/input directory in HDFS, and run the following commands to upload input_data1.txt and input_data2.txt to the /tmp/input directory (an example follows this list):
- On the Linux HDFS client, run the hdfs dfs -mkdir /tmp/input command.
- On the Linux HDFS client, run the hdfs dfs -put local_filepath /tmp/input command.
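For example, assuming the two input files were saved in a local directory such as /opt/testdata (a hypothetical path used here for illustration), the upload would look like this:

hdfs dfs -mkdir /tmp/input
hdfs dfs -put /opt/testdata/input_data1.txt /tmp/input
hdfs dfs -put /opt/testdata/input_data2.txt /tmp/input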
Development Guidelines
Collect statistics on female netizens who spend more than 2 hours on online shopping over the weekend.
To achieve the objective, the process is as follows:
- Read the original file data.
- Filter the records of time spent online by female netizens.
- Summarize the total time that each female netizen spends online.
- Output the information of female netizens whose total time online exceeds 2 hours, as worked out in the example after this list.
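For the sample logs above, the expected result can be verified by hand: LiuYang accumulates 20 + 20 + 20 + 20 = 80 minutes and is filtered out, whereas CaiXuyu accumulates 6 × 50 = 300 minutes and FangBo accumulates 50 + 60 + 50 + 50 + 50 + 60 = 320 minutes, both exceeding 2 hours (120 minutes). The job should therefore output only two records: CaiXuyu 300 and FangBo 320.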
Function Description
Collect statistics on female netizens who spend more than 2 hours on online shopping over the weekend.
The operation is performed in three steps.
- Filter the dwell durations of female netizens from the original files using the CollectionMapper class, which extends the Mapper abstract class.
- Summarize the dwell duration of each female netizen, and output the information of those who dwell online for more than 2 hours, using the CollectionReducer class, which extends the Reducer abstract class.
- Use the main method to create a MapReduce job and then submit the MapReduce job to the Hadoop cluster.
Sample Code
The following code snippets are used as an example. For the complete code, see the com.huawei.bigdata.mapreduce.examples.FemaleInfoCollector class.
Example 1: The CollectionMapper class overrides the map() and setup() methods of the Mapper abstract class.
public static class CollectionMapper extends Mapper<Object, Text, Text, IntWritable> {
    // Delimiter.
    String delim;

    // Gender filter.
    String sexFilter;

    // Name information.
    private Text nameInfo = new Text();

    // Output key-value pairs must be serializable.
    private IntWritable timeInfo = new IntWritable(1);

    /**
     * Distributed computing.
     *
     * @param key Object: Byte offset of the line in the original file.
     * @param value Text: A line of text in the original file.
     * @param context Context: Output parameter.
     * @throws IOException, InterruptedException
     */
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        if (line.contains(sexFilter)) {
            // A line of text has been read.
            String name = line.substring(0, line.indexOf(delim));
            nameInfo.set(name);

            // Obtain the dwell duration.
            String time = line.substring(line.lastIndexOf(delim) + 1, line.length());
            timeInfo.set(Integer.parseInt(time));

            // The map task outputs a key-value pair.
            context.write(nameInfo, timeInfo);
        }
    }

    /**
     * Called once before map() to perform initialization.
     *
     * @param context Context
     */
    public void setup(Context context) throws IOException, InterruptedException {
        // Obtain configuration information using Context.
        delim = context.getConfiguration().get("log.delimiter", ",");
        sexFilter = delim + context.getConfiguration().get("log.sex.filter", "female") + delim;
    }
}
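Note that setup() builds sexFilter by wrapping the gender keyword with the delimiter on both sides (",female," by default), so the filter matches the gender column exactly rather than a substring of a name. For the input line LiuYang,female,20, map() therefore emits the key-value pair (LiuYang, 20).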
Example 2: The CollectionReducer class overrides the reduce() and setup() methods of the Reducer abstract class.
public static class CollectionReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    // Statistical result.
    private IntWritable result = new IntWritable();

    // Total time threshold.
    private int timeThreshold;

    /**
     * @param key Text: Key output by the Mapper.
     * @param values Iterable: All statistical results with the same key.
     * @param context Context
     * @throws IOException, InterruptedException
     */
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }

        // No result is output if the total time is less than the threshold.
        if (sum < timeThreshold) {
            return;
        }
        result.set(sum);

        // In the reduce output, key indicates netizen information, and value
        // indicates the total dwell duration of the netizen.
        context.write(key, result);
    }

    /**
     * The setup() method is called only once before the map() method of a map task
     * or the reduce() method of a reduce task is called.
     *
     * @param context Context
     * @throws IOException, InterruptedException
     */
    public void setup(Context context) throws IOException, InterruptedException {
        // Obtain configuration information using Context.
        timeThreshold = context.getConfiguration().getInt("log.time.threshold", 120);
    }
}
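For example, with the sample logs, the reduce() call for the key CaiXuyu receives the values [50, 50, 50, 50, 50, 50] (or pre-summed partial values if the combiner ran), sums them to 300, and writes (CaiXuyu, 300) because the total is not less than the default 120-minute threshold.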
Example 3: Use the main() method to create a job, set parameters, and submit the job to the Hadoop cluster.
public static void main(String[] args) throws Exception {
    // Initialize environment variables.
    Configuration conf = new Configuration();

    // Obtain input parameters.
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
        System.err.println("Usage: collect female info <in> <out>");
        System.exit(2);
    }

    // Check whether security (Kerberos) mode is used.
    if ("kerberos".equalsIgnoreCase(conf.get("hadoop.security.authentication"))) {
        // Security mode.
        System.setProperty("java.security.krb5.conf", KRB);
        LoginUtil.login(PRINCIPAL, KEYTAB, KRB, conf);
    }

    // Initialize the job object.
    Job job = Job.getInstance(conf, "Collect Female Info");
    job.setJarByClass(FemaleInfoCollector.class);

    // Set the map and reduce classes to be executed, or specify them in the configuration file.
    job.setMapperClass(CollectionMapper.class);
    job.setReducerClass(CollectionReducer.class);

    // Set the combiner class. No combiner is used by default; when one is used, it is
    // typically the same class as the reducer. Exercise caution when using a combiner.
    // It can also be specified in the configuration file.
    job.setCombinerClass(CollectionCombiner.class);

    // Set the output types of the job.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

    // Submit the job to a remote environment for execution.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
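After the application is packaged into a JAR file, the job can be submitted from the client. The following is a sketch; the JAR file name, output path, and result file name are assumptions for illustration, not part of the sample code:

yarn jar mapreduce-examples.jar com.huawei.bigdata.mapreduce.examples.FemaleInfoCollector /tmp/input /tmp/output
hdfs dfs -cat /tmp/output/part-r-00000

Note that the output directory (/tmp/output here) must not exist before the job is submitted; otherwise, FileOutputFormat rejects the job.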
Example 4: The CollectionCombiner class combines the map output locally on the map side to reduce the amount of data transmitted from Map to Reduce.
/**
 * Combiner class.
 */
public static class CollectionCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    // Intermediate statistical result.
    private IntWritable intermediateResult = new IntWritable();

    /**
     * @param key Text: Key output by the Mapper.
     * @param values Iterable: All results with the same key in this map task.
     * @param context Context
     * @throws IOException, InterruptedException
     */
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        intermediateResult.set(sum);

        // In the output, key indicates netizen information, and value indicates
        // the total online time of the netizen in this map task.
        context.write(key, intermediateResult);
    }
}
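The combiner is deliberately a separate class rather than a reuse of CollectionReducer: summing durations is associative and commutative, so merging per-map partial sums is safe, but the reducer also applies the 120-minute threshold, which is only meaningful on the final total. If CollectionReducer were used as the combiner, a netizen whose partial sum in a single map task falls below the threshold would be dropped even though her overall total exceeds it.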