
Using the Oozie Client

Scenario

This section describes how to use the Oozie client in O&M or service scenarios.

Prerequisites

  • The client has been installed in a directory, for example, /opt/client. The client directory in the following operations is only an example. Change it based on site requirements.
  • The MRS cluster administrator has created the service component users. In security mode, machine-machine users need to download the keytab file, and human-machine users must change their password upon the first login.

Using the Oozie Client

  1. Install the client. For details, see Installing a Client.
  2. Log in to the node where the client is installed as the client installation user.
  3. Run the following command to switch to the client installation directory (change it to the actual installation directory):

    cd /opt/client

  4. Run the following command to configure environment variables:

    source bigdata_env
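
    Optionally, verify that the Oozie client commands are now available (a simple sanity check; the output path depends on the actual client installation directory):

    which oozie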

  5. Check the cluster authentication mode.

    • If the cluster is in security mode, run the following command to authenticate the user. In the command, exampleUser indicates the name of the user who submits tasks.

      kinit exampleUser
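
      If the user is a machine-machine user, authentication can instead be performed with the downloaded keytab file, for example as follows (the keytab file path here is only an example; use the path where the keytab file was actually saved):

      kinit -kt /opt/client/user.keytab exampleUser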

    • If the cluster is in normal mode, go to 6.

  6. Perform the following operations to configure the Oozie example environment:

    1. Configure the Spark2x environment (skip this step if no Spark2x tasks are involved):

      hdfs dfs -put /opt/client/Spark2x/spark/jars/*.jar /user/oozie/share/lib/spark2x/

      When the JAR package in the HDFS directory /user/oozie/share changes, you need to restart the Oozie service.
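
      If the /user/oozie/share/lib/spark2x directory does not exist yet, or you want to confirm that the upload succeeded, the following commands may help (a sketch based on the path used above):

      hdfs dfs -mkdir -p /user/oozie/share/lib/spark2x/

      hdfs dfs -ls /user/oozie/share/lib/spark2x/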

    2. Upload the Oozie configuration files and JAR packages to HDFS:

      hdfs dfs -mkdir /user/exampleUser

      hdfs dfs -put -f /opt/client/Oozie/oozie-client-*/examples /user/exampleUser/

      • exampleUser indicates the name of the user who submits tasks.
      • If the user who submits the task remains unchanged and no files other than job.properties are modified, the Oozie/oozie-client-*/examples directory in the client installation directory can be reused after it has been uploaded to HDFS.
      • Run the following command to resolve the Jetty JAR file conflict between Spark and Yarn:

        hdfs dfs -rm -f /user/oozie/share/lib/spark/jetty-all-9.2.22.v20170606.jar

      • In normal mode, if Permission denied is displayed during the upload, run the following commands:

        su - omm

        source /opt/client/bigdata_env

        hdfs dfs -chmod -R 777 /user/oozie

        exit
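
After the preceding configuration is complete, the Oozie client can be used to submit the example workflows. The following is only a sketch: the Oozie server address and port as well as the map-reduce example path are assumptions that must be adapted to the actual cluster, and job.properties usually needs to be edited (for example, the HDFS and ResourceManager addresses) before submission:

    cd /opt/client/Oozie/oozie-client-*/examples/apps/map-reduce/

    oozie job -oozie https://oozie_server_hostname:21003/oozie -config job.properties -run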