Using Hadoop from Scratch
You can use Hadoop to submit a wordcount job. Wordcount is the classic Hadoop example job; it counts how many times each word occurs in a large volume of text.
Procedure
- Prepare the wordcount program.
The open source Hadoop release ships with several sample programs, including wordcount. You can download a Hadoop release from https://dist.apache.org/repos/dist/release/hadoop/common/.
For example, choose hadoop-x.x.x. On the page that is displayed, click hadoop-x.x.x.tar.gz to download it. Then decompress the package and obtain hadoop-mapreduce-examples-x.x.x.jar (the Hadoop sample program) from hadoop-x.x.x/share/hadoop/mapreduce. The hadoop-mapreduce-examples-x.x.x.jar package contains the wordcount program; a minimal sketch of its logic is provided below.
hadoop-x.x.x indicates the Hadoop version. Choose a version based on your requirements.
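For reference, the wordcount example in the jar follows the standard Hadoop MapReduce pattern: a mapper emits a (word, 1) pair for every token, and a reducer sums the counts per word. The following is a minimal sketch of that logic written against the public Hadoop MapReduce API; it is illustrative only and not necessarily identical to the source of the bundled example.

```java
// Minimal wordcount sketch (illustrative; the bundled example may differ in detail).
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountSketch {

    // Mapper: emits (word, 1) for every token in each input line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}
```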
- Prepare data files.
There is no format requirement for the data files. Prepare one or more .txt files. The following is example content for a .txt file:
qwsdfhoedfrffrofhuncckgktpmhutopmma jjpsffjfjorgjgtyiuyjmhombmbogohoyhm jhheyeombdhuaqqiquyebchdhmamdhdemmj doeyhjwedcrfvtgbmojiyhhqssddddddfkf kjhhjkehdeiyrudjhfhfhffooqweopuyyyy
- Upload data to OBS.
- Log in to the OBS console.
- Click Parallel File System and choose Create Parallel File System to create a file system named wordcount01.
wordcount01 is only an example. The file system name must be globally unique; otherwise, the parallel file system cannot be created.
- In the OBS file system list, click wordcount01 and choose Files > Create Folder to create the program and input folders, as shown in Figure 1.
- program: stores user programs.
- input: stores user data files.
- Go to the program folder, choose Upload File > add file, select the program package downloaded in 1 from the local host, and click Upload. After the upload is complete, the page shown in Figure 2 is displayed.
- Go to the input folder and upload the data file prepared in 2. After the upload is complete, the page shown in Figure 3 is displayed. If you prefer to upload files programmatically rather than through the console, see the sketch below.
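As an alternative to the console, files can also be uploaded with the OBS SDK for Java. The following is a minimal sketch that assumes the esdk-obs-java dependency and valid access credentials; the endpoint, environment variable names, and local file names are placeholders.

```java
// Illustrative sketch: upload the program package and a data file to the
// wordcount01 parallel file system using the OBS SDK for Java (esdk-obs-java).
// The endpoint, credential environment variables, and file names are placeholders.
import java.io.File;
import com.obs.services.ObsClient;

public class UploadToObs {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://obs.example-region.myhuaweicloud.com"; // placeholder endpoint
        String ak = System.getenv("OBS_ACCESS_KEY_ID");      // placeholder variable name
        String sk = System.getenv("OBS_SECRET_ACCESS_KEY");  // placeholder variable name

        ObsClient obsClient = new ObsClient(ak, sk, endpoint);
        try {
            // Upload the sample program package to the program folder.
            obsClient.putObject("wordcount01",
                    "program/hadoop-mapreduce-examples-x.x.x.jar",
                    new File("hadoop-mapreduce-examples-x.x.x.jar"));
            // Upload the data file to the input folder.
            obsClient.putObject("wordcount01",
                    "input/data01.txt",
                    new File("data01.txt"));
        } finally {
            obsClient.close();
        }
    }
}
```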
- Log in to the MRS console. In the navigation pane on the left, choose Clusters > Active Clusters and click the name of the target cluster. The cluster must contain the Hadoop component.
- Submit the wordcount job.
On the cluster details page, click the Jobs tab and then click Create. The Create Job page is displayed, as shown in Figure 4 (wordcount job).
- Set Type to MapReduce.
- Set Name to mr_01.
- Set the path of the executable program to the OBS path where the program package was uploaded, for example, obs://wordcount01/program/hadoop-mapreduce-examples-x.x.x.jar.
- Enter wordcount obs://wordcount01/input/ obs://wordcount01/output/ in the Parameter pane. The first value selects the wordcount example; the second and third values are the input and output directories (see the driver sketch after this step).
- In both paths, replace wordcount01 with the actual name of the parallel file system created in your environment.
- Replace output with a directory that does not exist. The job creates the output directory and fails if it already exists.
- Service Parameter can be left blank.
A job can be submitted only when the cluster is in the Running state.
After a job is submitted successfully, it is in the Accepted state by default. You do not need to manually execute the job.
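The three values in the Parameter pane map to the arguments that the example program receives: the example name (wordcount), the input directory, and the output directory. The following is a minimal driver sketch, reusing the mapper and reducer classes from the earlier sketch, that shows how those two paths are consumed and why the output directory must not already exist (the job is rejected if the output path exists).

```java
// Illustrative driver sketch: equivalent of running the examples jar with
// the arguments "wordcount <input path> <output path>".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        // args[0]: input directory, for example obs://wordcount01/input/
        // args[1]: output directory, for example obs://wordcount01/output/ (must not exist)
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountSketch.TokenizerMapper.class);
        job.setCombinerClass(WordCountSketch.IntSumReducer.class);
        job.setReducerClass(WordCountSketch.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        // The job fails at submission if this directory already exists.
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```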
- View the job execution result.
- Go to the Jobs tab page and check whether the job is successfully executed.
It takes some time to run the job. After the job is complete, refresh the job list to view the latest job status.
Once a job has succeeded or failed, you cannot execute it again. However, you can add or copy the job and set job parameters to submit it again.
- Log in to the OBS console, go to the output path specified when the job was submitted, and view the job output.
You can view the output files in the output directory specified in 5. Download a file (typically named part-r-00000) to the local host and open it as text, as shown in Figure 5. Each line of the output contains a word and its count, separated by a tab (see the sketch below).
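The following is a minimal sketch of reading a downloaded wordcount output file; the local file name is a placeholder.

```java
// Illustrative sketch: read a downloaded wordcount output file, where each
// line is "<word>\t<count>". The local file name is a placeholder.
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReadWordCountOutput {
    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("part-r-00000"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t");
                if (parts.length == 2) {
                    System.out.println("word=" + parts[0] + ", count=" + parts[1]);
                }
            }
        }
    }
}
```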