Updated on 2023-04-28 GMT+08:00

Running a Loader Job by Using Commands

Scenario

Generally, users manage data import and export jobs manually on the Loader UI. To update and run Loader jobs by using shell scripts, you must configure the installed Loader client.

Loader is incompatible with clients of earlier versions. If you reinstall the cluster or the Loader service, download and install the client again before using it.

Prerequisites

  • The Loader client has been installed. If the client was installed by a non-root user and another user wants to use it, that user must be authorized by the user who installed the client or by a user with higher privileges (the Loader client installation directory must be granted permission 755). Pay attention to security risks after the authorization.
  • The user for accessing the Loader service has been created. If the user is a machine-machine user, the keytab file must be downloaded.

Procedure

  1. Configure the Loader shell client.

    1. Log in to the node where the client is located as the user who installs the client.
    2. Run the following command to disable logout upon timeout:

      TMOUT=0

      After the operations in this section are complete, run the TMOUT=Timeout interval command to restore the timeout interval in a timely manner. For example, TMOUT=600 indicates that a user is logged out if the user does not perform any operation within 600 seconds.

    3. Run the following command to go to the Loader client installation directory, for example, /opt/client/Loader:

      cd /opt/client/Loader

    4. Run the following command to configure environment variables:

      source /opt/client/bigdata_env

    5. If the cluster is in security mode, run the following command to authenticate the user. In normal mode, user authentication is not required.

      kinit Component service user

    6. Run the following command to modify the tool authorization configuration file login-info.xml, save the file, and exit. For the parameters in the configuration file, see Table 1.
      vi loader-tools-1.99.3/loader-tool/job-config/login-info.xml
      Table 1 Parameters of login-info.xml

      • hadoop.config.path: Storage directory of the core-site.xml, hdfs-site.xml, and krb5.conf configuration files of the MRS cluster. By default, these three files are stored in the Loader client installation directory/Loader/loader-tools-1.99.3/loader-tool/hadoop-config/ directory.
      • authentication.type: Authentication type of the Loader service. Set this parameter based on the MRS cluster authentication mode: kerberos indicates the security mode; simple indicates the normal mode.
      • user.keytab: Whether to use a keytab file for authentication. The options are true and false.
      • authentication.user: User for login when the normal mode or password authentication is used. In keytab login mode, this parameter does not need to be set.
      • authentication.password: Encrypted password of the user for accessing the Loader service if keytab file authentication is not used in the security mode.

        NOTE: Run the following command to encrypt the password as the user who installed the client. When the encryption tool runs for the first time, a random dynamic key is automatically generated and stored in .loader-tools.key. The encryption tool uses this dynamic key to encrypt passwords in subsequent runs. If .loader-tools.key is deleted, a new random key is generated and stored in .loader-tools.key the next time the encryption tool runs.

        sh Loader client installation directory/Loader/loader-tools-1.99.3/encrypt_tool password

      • authentication.principal: Machine-machine username for accessing the Loader service when keytab file authentication is used in the security mode.
      • authentication.keytab: Absolute path of the keytab file of the machine-machine user for accessing the Loader service when keytab file authentication is used in the security mode.
      • zookeeper.quorum: IP addresses and ports for accessing ZooKeeper, in the format IP1:port,IP2:port,IP3:port. The default port number is 2181.
      • sqoop.server.list: Floating IP address and port for accessing Loader, in the format floatip:port. The default port number is 21351.
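      The parameters above can be sketched as a complete login-info.xml for a security-mode cluster using keytab authentication. This is a hypothetical illustration only: the element layout shown here (Hadoop-style name/value properties) is an assumption, and all paths, principals, and IP addresses are placeholders. Verify the actual structure against the login-info.xml file shipped in loader-tools-1.99.3/loader-tool/job-config/.

      ```xml
      <!-- Hypothetical sketch: the name/value property layout is assumed;
           check the shipped login-info.xml for the actual element structure.
           All paths, principals, and addresses below are placeholders. -->
      <configuration>
        <property>
          <name>hadoop.config.path</name>
          <value>/opt/client/Loader/loader-tools-1.99.3/loader-tool/hadoop-config/</value>
        </property>
        <property>
          <name>authentication.type</name>
          <value>kerberos</value>
        </property>
        <property>
          <name>user.keytab</name>
          <value>true</value>
        </property>
        <property>
          <name>authentication.principal</name>
          <value>loader_user</value>
        </property>
        <property>
          <name>authentication.keytab</name>
          <value>/opt/client/user.keytab</value>
        </property>
        <property>
          <name>zookeeper.quorum</name>
          <value>192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181</value>
        </property>
        <property>
          <name>sqoop.server.list</name>
          <value>192.168.0.100:21351</value>
        </property>
      </configuration>
      ```

      In normal mode, authentication.type would instead be simple, user.keytab false, and authentication.user plus authentication.password would replace the principal and keytab entries.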

  2. Use the Loader shell client.

    1. Run the following command to go to the Loader shell client directory. For example, if the Loader client installation directory is /opt/client/Loader, run the following command:

      cd /opt/client/Loader/loader-tools-1.99.3/shell-client/

    2. Run the following command to use the Loader shell client to run a job:

      ./submit_job.sh -n <arg> -u <arg> -jobType <arg> -connectorType <arg> -frameworkType <arg>

      Table 2 Parameters of the Loader shell client tool

      • -n: (Mandatory) Job name.
      • -u: (Mandatory) If set to y, the job parameters are updated and the job is run; in this case, -jobType, -connectorType, and -frameworkType must also be set. If set to n, the job is run directly without updating its parameters.
      • -jobType: Job type. Mandatory when -u is set to y. import indicates a data import job; export indicates a data export job.
      • -connectorType: Connector type. Mandatory when -u is set to y. Parameters of external data sources can be modified as required.

        sftp indicates an SFTP connector.

        • In a data import job, you can modify the source file input path -inputPath, the source file encoding format -encodeType, and the suffix -suffixName added to the source file after it is imported.
        • In a data export job, you can modify the output path -outputPath or the name of the exported file.

        rdb indicates a relational database connector.

        • In a data import job, you can modify the database schema name -schemaName, table name -tableName, SQL statement -sql, names of columns to be imported -columns, and names of partition columns -partitionColumn.
        • In a data export job, you can modify the database schema name -schemaName, table name -tableName, and temporary table name -stageTableName.

      • -frameworkType: Data storage type on MRS. Mandatory when -u is set to y. Parameters of data storage types can be modified as required.

        hdfs indicates that HDFS is used to store data on Hadoop.

        • In a data import job, you can modify the number of started maps -extractors and the HDFS storage directory of the imported data -outputDirectory.
        • In a data export job, you can modify the number of started maps -extractors, the HDFS input path of the exported data -inputDirectory, and the file filter criteria of the data export job -fileFilter.

        hbase indicates that HBase is used to store data on MRS. In data import and export jobs, you can modify the number of started maps -extractors.

Task Examples

  • Run a job whose name is sftp-hdfs without updating job parameters:

    ./submit_job.sh -n sftp-hdfs -u n

  • Update the input path, encoding type, suffix, output path, and number of started maps of the data import job whose name is sftp-hdfs, and run the job:

    ./submit_job.sh -n sftp-hdfs -u y -jobType import -connectorType sftp -inputPath /opt/tempfile/1 -encodeType UTF-8 -suffixName '' -frameworkType hdfs -outputDirectory /user/user1/tttest -extractors 10

  • Update the database schema, table name, and output path of the data import job whose name is db-hdfs, and run the job:

    ./submit_job.sh -n db-hdfs -u y -jobType import -connectorType rdb -schemaName public -tableName sq_submission -sql '' -partitionColumn sqs_id -frameworkType hdfs -outputDirectory /user/user1/dbdbt