Development Plan
Overview
Suppose you have logs of netizens' online shopping durations over a weekend. Develop a Spark application that fulfills the following service requirements:
- Collect statistics on female netizens who spend more than two hours shopping online over the weekend.
- Each log file has three columns separated by commas (,): the first column contains names, the second contains gender, and the third contains the dwell duration in minutes.
log1.txt: logs collected on Saturday
LiuYang,female,20
YuanJing,male,10
GuoYijun,male,5
CaiXuyu,female,50
Liyuan,male,20
FangBo,female,50
LiuYang,female,20
YuanJing,male,10
GuoYijun,male,50
CaiXuyu,female,50
FangBo,female,60
log2.txt: logs collected on Sunday
LiuYang,female,20
YuanJing,male,10
CaiXuyu,female,50
FangBo,female,50
GuoYijun,male,5
CaiXuyu,female,50
Liyuan,male,20
CaiXuyu,female,50
FangBo,female,50
LiuYang,female,20
YuanJing,male,10
FangBo,female,50
GuoYijun,male,50
CaiXuyu,female,50
FangBo,female,60
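Each record can be parsed by splitting on the comma separator. The following is a minimal Python sketch of the record format (the helper name parse_record is illustrative and not part of the sample project):

```python
def parse_record(line):
    """Split a 'name,gender,minutes' log line into typed fields."""
    name, gender, minutes = line.strip().split(",")
    return name, gender, int(minutes)

# Example: parse one record from log1.txt
print(parse_record("LiuYang,female,20"))  # ('LiuYang', 'female', 20)
```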
Preparing Data
Save the original log files in HDFS.
- Create two text files, input_data1.txt and input_data2.txt, on a local computer, and copy the contents of log1.txt to input_data1.txt and the contents of log2.txt to input_data2.txt.
- Create a /tmp/input directory on HDFS and upload input_data1.txt and input_data2.txt to it as follows:
- On the HDFS client, run the following commands to obtain the security authentication:
source bigdata_env
kinit <Service user for authentication>
- On the Linux HDFS client, run the hadoop fs -mkdir /tmp/input command (or the hdfs dfs command) to create a directory.
- Go to the directory containing the data files on the Linux HDFS client, and run the hadoop fs -put input_data1.txt /tmp/input and hadoop fs -put input_data2.txt /tmp/input commands to upload them.
Development Guidelines
Collect statistics on female netizens who spend more than two hours shopping online over the weekend.
The process is as follows:
- Read the source file data.
- Filter out the records of female netizens and their online durations.
- Summarize the total time that each female shopper spends online.
- Keep only the female netizens whose total online time exceeds two hours.
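The four steps above can be sketched in plain Python, with no Spark dependency (in the actual application each step maps to an RDD transformation such as map, filter, and reduceByKey; the function name collect_female_info here is illustrative):

```python
from collections import defaultdict

def collect_female_info(lines, threshold_minutes=120):
    """Sum online time per female netizen and keep those above the threshold."""
    totals = defaultdict(int)
    for line in lines:                      # 1. read the source data
        name, gender, minutes = line.strip().split(",")
        if gender == "female":              # 2. filter female netizens' records
            totals[name] += int(minutes)    # 3. sum online time per shopper
    # 4. keep only shoppers online for more than the threshold (two hours)
    return {name: t for name, t in totals.items() if t > threshold_minutes}

# The weekend sample data from log1.txt and log2.txt above
log1 = ["LiuYang,female,20", "YuanJing,male,10", "GuoYijun,male,5",
        "CaiXuyu,female,50", "Liyuan,male,20", "FangBo,female,50",
        "LiuYang,female,20", "YuanJing,male,10", "GuoYijun,male,50",
        "CaiXuyu,female,50", "FangBo,female,60"]
log2 = ["LiuYang,female,20", "YuanJing,male,10", "CaiXuyu,female,50",
        "FangBo,female,50", "GuoYijun,male,5", "CaiXuyu,female,50",
        "Liyuan,male,20", "CaiXuyu,female,50", "FangBo,female,50",
        "LiuYang,female,20", "YuanJing,male,10", "FangBo,female,50",
        "GuoYijun,male,50", "CaiXuyu,female,50", "FangBo,female,60"]

print(collect_female_info(log1 + log2))  # {'CaiXuyu': 300, 'FangBo': 320}
```

LiuYang totals only 80 minutes over the weekend, so she is filtered out by the final step.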
Preparations
For clusters in security mode, the Spark Core sample code needs to read two authentication files: user.keytab and krb5.conf. Download the authentication credentials of the user principal from the FusionInsight Manager page. The user in the sample code is sparkuser; change it to your prepared development user name.
Packaging the Project
- Upload the user.keytab and krb5.conf files to the server where the client is located.
- Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Commissioning a Spark Application in a Linux Environment.
- Before compilation and packaging, change the paths of the user.keytab and krb5.conf files in the sample code to the actual paths on the client server. For example, /opt/female/user.keytab and /opt/female/krb5.conf.
- The Python sample code does not need to be packaged using Maven.
- Upload the JAR file to any directory (for example, /opt/female/) on the server where the Spark client is located.
Running the Task
Go to the Spark client directory and invoke the bin/spark-submit script to run the code. (The class name and file name must match those in your actual code; the following is only an example.)
- Run the Scala and Java sample projects.
bin/spark-submit --class com.huawei.bigdata.spark.examples.FemaleInfoCollection --master yarn --deploy-mode client /opt/female/FemaleInfoCollection-1.0.jar <inputPath>
<inputPath> indicates the input path in HDFS.
- Run the Python sample project.
- The Python sample code does not provide authentication information, so specify the --keytab and --principal options on the command line.
bin/spark-submit --master yarn --deploy-mode client --keytab /opt/FIclient/user.keytab --principal sparkuser /opt/female/SparkPythonExample/collectFemaleInfo.py <inputPath>
<inputPath> indicates the input path in HDFS.