
Creating and Using a Hadoop Cluster for Offline Analysis

Scenario

This topic describes how to create a Hadoop cluster for offline analysis and how to submit a wordcount job through the cluster client. A wordcount job is a classic Hadoop job that counts words in massive amounts of text.

The Hadoop cluster uses open-source Hadoop ecosystem components, including YARN for cluster resource management and Hive and Spark for large-scale offline distributed data storage and computing, to provide massive data analysis and query capabilities.

Procedure

Before you start, complete the operations described in Preparations. Then, follow these steps:

  1. Creating an MRS cluster: Create an MRS 3.1.5 Hadoop analysis cluster.
  2. Installing the Cluster Client: Download and install the MRS cluster client.
  3. Preparing Applications and Data: Prepare the data files required for running the wordcount sample program on the MRS cluster client.
  4. Submitting a Job and Viewing the Result: Submit a wordcount data analysis job on the cluster client and view the execution result.

Preparations

Step 1: Creating an MRS Cluster

  1. Go to the Buy Cluster page.
  2. Search for MapReduce Service in the service list and enter the MRS console.
  3. Click Buy Cluster. The Quick Config tab is displayed.
  4. Configure the cluster as needed. In this example, a pay-per-use MRS 3.1.5 cluster is created. For details about how to configure the parameters, see Quickly Creating a Cluster.

    Table 1 MRS cluster parameters

    | Parameter | Description | Example Value |
    | --- | --- | --- |
    | Billing Mode | Billing mode of the cluster you want to create. MRS provides two billing modes: yearly/monthly and pay-per-use. Pay-per-use is a postpaid mode: you pay for what you use, and usage is calculated by the second but billed every hour. | Pay-per-use |
    | Region | Region to which the requested MRS resources belong. MRS clusters in different regions cannot communicate with each other over an intranet. For lower network latency and faster resource access, select the nearest region. | CN-Hong Kong |
    | Cluster Name | Name of the MRS cluster you want to create. | mrs_demo |
    | Cluster Type | Type of the cluster; a range of cluster types accommodates diverse big data demands. Select a Custom cluster to run the full range of analytics components supported by MRS. | Custom |
    | Version Type | Service type of the MRS cluster. | Normal |
    | Cluster Version | Version of the MRS cluster. The supported open-source components and their functions vary with the cluster version. You are advised to select the latest version. | MRS 3.1.5 |
    | Component | Cluster template containing the preset open-source components you will need for your business. | Hadoop Analysis Cluster |
    | AZ | Availability zone (AZ) associated with the cluster region. | AZ1 |
    | VPC | VPC where you want to create the cluster. You can click View VPC to view its name and ID. If no VPC is available, create one. | vpc-default |
    | Subnet | Subnet to which the cluster belongs. You can access the VPC management console to view the names and IDs of existing subnets in the VPC. If the VPC has no subnet, click Create Subnet to create one. | subnet-default |
    | Cluster Node | Cluster node details. | Default value |
    | Kerberos Authentication | Whether to enable Kerberos authentication. | Disabled |
    | Username | Username for logging in to the cluster management page and the ECS node. | admin/root |
    | Password | Password for logging in to the cluster management page and the ECS node. | - |
    | Confirm Password | Enter the password again. | - |
    | Enterprise Project | Enterprise project to which the cluster belongs. | default |
    | Secure Communications | Select the check box to agree to the use of access control rules. | Checked |

    Figure 1 Buying a Hadoop analysis cluster

  5. Click Buy Now. A page is displayed showing that the task has been submitted.
  6. Click Back to Cluster List. You can view the status of the newly created cluster on the Active Clusters page.

    Wait for the cluster creation to complete. The initial status of the cluster is Starting. After the cluster is created, the cluster status becomes Running.

Step 2: Installing the Cluster Client

You need to install a cluster client to connect to component services in the cluster, remotely access the client shell, and submit jobs.

The client can be installed on a node in or outside the cluster. This guide describes how to install the client on the Master1 node in the cluster.

  1. Click the MRS cluster name in the cluster list to go to the dashboard page.
  2. Click Access Manager next to MRS Manager. In the displayed dialog box, select EIP and configure the EIP information.

    For the first access, click Manage EIPs to purchase an EIP on the EIP console. Go back to the Access MRS Manager dialog box, refresh the EIP list, and select the EIP.

  3. Select the confirmation check box and click OK to log in to the FusionInsight Manager of the cluster.

    The username for logging in to FusionInsight Manager is admin, and the password is the one configured during cluster purchase.

  4. On the displayed Homepage page, click the icon next to the cluster name and then click Download Client to download the cluster client.

    Figure 2 Downloading a client

    In the Download Cluster Client dialog box, set the following parameters:

    • Set Select Client Type to Complete Client.
    • For Select Platform Type, select the architecture of the node where the client is to be installed, for example, x86_64.

      To check the architecture of a node in the cluster, click Hosts in the top navigation pane of FusionInsight Manager, then click the target node name to open its basic information page.

    • Retain the default path for Save to Path. The generated file will be saved in the /tmp/FusionInsight-Client directory on the active OMS node (usually the Master1 node) of the cluster.
    Figure 3 Downloading the cluster client

    Click OK and wait until the client software is generated.

  5. Go back to the MRS console and click the cluster name in the cluster list. On the Nodes tab, click the name of the node whose name contains master1. In the upper right corner of the ECS details page, click Remote Login to log in to the Master1 node.

    Figure 4 Checking the Master1 node

  6. Log in to the Master1 node as user root. The password is the one you set for the root user during cluster purchase.
  7. Switch to the directory where the client software package is stored and decompress the package.

    cd /tmp/FusionInsight-Client/

    tar -xvf FusionInsight_Cluster_1_Services_Client.tar

    tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

  8. Go to the directory where the installation package is stored and install the client.

    cd FusionInsight_Cluster_1_Services_ClientConfig

    Install the client to a specified directory. (If the directory exists, it must be empty.)

    For example, if the client is installed in the /opt/client directory, run the following command:

    ./install.sh /opt/client

    Wait until the client installation is complete.

    ...
    ... component client is installed successfully
    ...
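
    After the installation finishes, you can optionally run a quick sanity check. This is a minimal sketch, assuming the client was installed in /opt/client as above:

    cd /opt/client

    source bigdata_env

    # List the HDFS root directory. A successful listing confirms that the
    # client is configured correctly and can reach the cluster.
    hdfs dfs -ls /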

Step 3: Preparing Applications and Data

You can run the wordcount sample program preset in the cluster client on the created cluster, or develop a big data application and upload it to the cluster.

This topic uses the wordcount sample program on the MRS cluster client as an example. You need to prepare the data files required for running the wordcount sample program.
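
Conceptually, wordcount splits each line of input into words and counts how many times each word occurs. Once the data files created below exist, you can preview the expected result locally with a standard shell pipeline (a sketch for intuition only, not part of the MRS procedure):

    # Split on spaces into one word per line, then count duplicates.
    cat /opt/wordcount1.txt /opt/wordcount2.txt | tr -s ' ' '\n' | sort | uniq -c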

  1. Log in to the Master1 node as user root.
  2. Prepare data files.

    For example, the file names are wordcount1.txt and wordcount2.txt, and the content is as follows:

    vi /opt/wordcount1.txt

    hello word
    hello wordcount

    vi /opt/wordcount2.txt

    hello mapreduce
    hello hadoop

  3. Switch to the client installation directory, configure environment variables, and create an HDFS directory for storing sample data, for example, /user/example/input.

    cd /opt/client

    source bigdata_env

    hdfs dfs -mkdir -p /user/example/input

  4. Upload the sample data to HDFS.

    hdfs dfs -put /opt/wordcount1.txt /user/example/input

    hdfs dfs -put /opt/wordcount2.txt /user/example/input
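
    Before submitting the job, you can optionally confirm that the upload succeeded:

    # List the uploaded files and print one of them directly from HDFS.
    hdfs dfs -ls /user/example/input

    hdfs dfs -cat /user/example/input/wordcount1.txt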

Step 4: Submitting a Job and Viewing the Result

  1. Log in to the client node (Master1) as user root.
  2. Submit a wordcount job that reads the source data, analyzes it, and writes the execution result to HDFS.

    cd /opt/client

    source bigdata_env

    hadoop jar HDFS/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1-*.jar wordcount "/user/example/input/*" "/user/example/output/"

    ...
    	File Input Format Counters 
    		Bytes Read=56
    	File Output Format Counters 
    		Bytes Written=48
    • /user/example/output/ indicates the HDFS directory for storing the job output files. Set it to a directory that does not exist; the job creates it automatically. (To rerun the job, first delete the previous output; see the command after this list.)
    • The name of the hadoop-mapreduce-examples-3.3.1-*.jar file varies depending on the cluster client version. Use the actual name.

  3. Query job execution results.

    1. Run the following command to view the job output file:

      hdfs dfs -ls /user/example/output/

      ...
      ... /user/example/output/_SUCCESS
      ... /user/example/output/part-r-00000
    2. The output is saved in the HDFS file system. You can run a command to download the output to the local PC and view it.

      The following command is an example:

      hdfs dfs -get /user/example/output/part-r-00000 /opt

      cat /opt/part-r-00000

      The content of the part-r-00000 file is as follows:

      hadoop	1
      hello	4
      mapreduce	1
      word	1
      wordcount	1

  4. View job run logs.

    1. Log in to FusionInsight Manager of the target cluster as user admin and choose Cluster > Services > Yarn.
    2. Click the ResourceManager(xxx,Active) link next to ResourceManager Web UI.
    3. On the All Applications page, click the ID of the target job to view the job details.

      On the All Applications page, you can identify your job by its submission time and the user who submitted it.

      Figure 5 Checking job details
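
As noted above, the job output directory must not exist when the job is submitted; MapReduce refuses to overwrite an existing output directory. To rerun the wordcount job with the same output path, delete the previous output first:

    # Remove the previous job output so the output path can be reused.
    hdfs dfs -rm -r /user/example/output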

Related Information

Hadoop components include HDFS, YARN, and MapReduce. You can run jobs to analyze or view offline data. For details, see Using HDFS, Using MapReduce, or Using YARN.