Updated on 2024-09-23 GMT+08:00

Running a MapReduce Job

MRS allows you to submit and run your own programs and obtain the results. This section describes how to submit a MapReduce job in an MRS cluster.

MapReduce jobs are used to submit Hadoop JAR programs to quickly process a large amount of data in parallel. MapReduce is a distributed data processing model.

You can create a job online and submit it for running on the MRS console, or submit a job in CLI mode on the MRS cluster client.

Prerequisites

  • You have uploaded the program packages and data files required by jobs to OBS or HDFS.
  • If the job program needs to read and analyze data in the OBS file system, you need to configure storage-compute decoupling for the MRS cluster. For details, see Configuring Storage-Compute Decoupling for an MRS Cluster.

Submitting a Job on the Console

  1. Log in to the MRS console.
  2. On the Active Clusters page, select a running cluster and click its name to switch to the cluster details page.
  3. In the Basic Information area of the Dashboard page, click Synchronize on the right side of IAM User Sync to synchronize IAM users.

    Perform this step only when Kerberos authentication is enabled for the cluster.

    • After IAM user synchronization is complete, wait 5 minutes before submitting a job. For details about IAM user synchronization, see Synchronizing IAM Users to MRS.
    • When the policy of the user group an IAM user belongs to changes from MRS ReadOnlyAccess to MRS CommonOperations, MRS FullAccess, or MRS Administrator, or vice versa, it takes time for the cluster node's System Security Services Daemon (SSSD) cache to refresh. To prevent job submission failure, wait for five minutes after user synchronization is complete before submitting the job with the new policy.
    • If the IAM username contains spaces (for example, admin 01), jobs cannot be added.

  4. Click Job Management. On the displayed job list page, click Create.
  5. In Type, select MapReduce. Configure other job information.

    Figure 1 Adding a MapReduce job
    Table 1 Job configuration information

    • Name: Job name. It contains 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

      Example: mapreduce_job

    • Program Path: Path of the program package to be executed. You can enter the path or click HDFS or OBS to select a file.

      • The value contains a maximum of 1,023 characters. It cannot contain the special characters ;|&>,<'$ and cannot be left blank or consist only of spaces.
      • An OBS program path must start with obs://, for example, obs://wordcount/program/XXX.jar. An HDFS program path must start with hdfs://, for example, hdfs://hacluster/user/XXX.jar.
      • The MapReduce job execution program must end with .jar.

      Example: obs://wordcount/program/test.jar

    • Parameters: (Optional) Key parameters for program execution. Separate multiple parameters with spaces.

      Configuration format: Program class name Data input path Data output path

      • Program class name: specified by a function in your program. MRS only passes the parameters through.
      • Data input path: click HDFS or OBS to select a path, or enter a correct path manually.
      • Data output path: output path of the data processing result. Enter a directory that does not exist. The parameter contains a maximum of 150,000 characters. It cannot contain the special characters ;|&><'$, but can be left blank.

      CAUTION: When entering a parameter that contains sensitive information (for example, a login password), add an at sign (@) before the parameter name to encrypt the parameter value. This prevents the sensitive information from being persisted in plaintext. When you view job information on the MRS console, the sensitive information is displayed as *.

      Example: username=testuser @password=User password

    • Service Parameter: (Optional) Service parameters for the job.

      Modifying these parameters affects only the current job. To make permanent changes for the entire cluster, modify the cluster component parameters by referring to Modifying the Configuration Parameters of an MRS Cluster Component.

      Click on the right to add more parameters.

      If a job needs to access OBS using AK/SK, add the following service configuration parameters:

      • fs.obs.access.key: key ID for accessing OBS.
      • fs.obs.secret.key: key corresponding to the key ID for accessing OBS.

    • Command Reference: Command submitted to the background for execution when the job is submitted.

      Example: yarn jar hdfs://hacluster/user/test.jar
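As a quick check against the naming rule in Table 1, the constraint on Name can be expressed as a regular expression. A minimal sketch (the sample job names are illustrative assumptions, not values from the console):

```shell
# Sketch: validate a job name against the Table 1 constraints
# (1 to 64 characters; only letters, digits, hyphens, and underscores).
check_job_name() {
  if printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]{1,64}$'; then
    echo "valid job name"
  else
    echo "invalid job name"
  fi
}

check_job_name "mapreduce_job"   # allowed characters only
check_job_name "admin 01"        # space is not allowed
```

Note that the same character rule explains why an IAM username containing spaces (for example, admin 01) cannot be used when adding jobs.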

  6. Confirm job configuration information and click OK.
  7. After the job is submitted, you can view the job running status and execution result in the job list. After the job status changes to Completed, you can view the analysis result of related programs.

Submitting a Job Using the Cluster Client

  1. Install the MRS cluster client. For details, see Installing an MRS Cluster Client.

    An MRS cluster has a client preinstalled for job submission, which you can use directly. For MRS 3.x and later versions, the default client installation path is /opt/Bigdata/client on the Master node. For versions earlier than MRS 3.x, the default client installation path is /opt/client on the Master node.

  2. Log in to the node where the client is located as the MRS cluster client installation user.
  3. Initialize environment variables.

    cd /opt/Bigdata/client

    source bigdata_env

  4. Perform authentication if Kerberos authentication has been enabled for the current cluster.

    Skip this step for normal clusters.

    kinit MRS cluster service user

    The MRS cluster service user must be created in advance on Manager with the job submission permission. For details, see Creating an MRS Cluster User.

    Example:

    kinit testuser

  5. Copy the program in the OBS file system to the node where the cluster client is located.

    hadoop fs -Dfs.obs.access.key=AK for accessing OBS -Dfs.obs.secret.key=SK for accessing OBS -copyToLocal Source path of the application Destination path of the application

    Example:

    hadoop fs -Dfs.obs.access.key=XXXX -Dfs.obs.secret.key=XXXX -copyToLocal "obs://mrs-word/program/hadoop-mapreduce-examples-XXX.jar" "/home/omm/hadoop-mapreduce-examples-XXX.jar"

    • Commands carrying authentication passwords pose security risks. Disable historical command recording before running such commands to prevent information leakage.
    • To obtain the AK and SK, log in to the OBS console and choose My Credentials > Access Keys from the username drop-down list in the upper right corner of the page.
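The first note above recommends disabling historical command recording before running commands that carry AK/SK. In bash, one way to sketch this (the actual hadoop fs command is omitted and must be filled in with your own paths and keys):

```shell
# Sketch (bash): temporarily turn off command history while running
# commands that carry AK/SK, then turn it back on.
set +o history
# ...run the hadoop fs -Dfs.obs.access.key=... -copyToLocal ... command here...
set -o history
echo "history recording re-enabled"
```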

  6. Submit a wordcount job. If data needs to be read from or written to OBS, add the AK/SK parameters.

    hadoop jar Application wordcount Input file path Output file path

    Example:

    hadoop jar /home/omm/hadoop-mapreduce-examples-XXX.jar wordcount -Dfs.obs.access.key=XXXX -Dfs.obs.secret.key=XXXX "obs://mrs-word/input/*" "obs://mrs-word/output/"

    • Input file path is the path for storing job input files on OBS.
    • Output file path is the path for storing the job output file on OBS. Set it to a directory that does not exist.
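As a rough local illustration of what the wordcount example computes (not an MRS command; the sample text is an assumption), standard shell tools can reproduce the word counting:

```shell
# Local illustration of wordcount semantics: split words onto separate
# lines, sort them, and count duplicates. The real job does the same
# counting in parallel across the cluster and writes results to OBS.
printf 'apple banana apple\nbanana cherry\n' |
  tr -s ' ' '\n' | sort | uniq -c
```

The output lists each distinct word with its count (apple 2, banana 2, cherry 1), which is the same shape as the part files the MapReduce job writes to the output directory.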