Updated on 2023-08-31 GMT+08:00

Debugging the HCatalog Sample Program

The Hive HCatalog application can run in a Linux environment where the Hive and Yarn clients are installed. After developing the program code, upload the JAR package to the prepared Linux runtime environment.

Procedure

  1. On the right of the IntelliJ IDEA home page, click Maven Projects. On the Maven Projects page, choose Project name > Lifecycle and run the clean and compile phases.

    Figure 1 Maven Projects page

  2. In the lower left corner of the IDEA page, click Terminal to access the terminal. Run the mvn clean install command to compile the package.

    If BUILD SUCCESS is displayed, the compilation is successful. The hcatalog-example-*.jar package is generated in the target directory of the sample project.

    The preceding JAR file names are for reference only. The actual names may vary.

  3. Upload the hcatalog-example-*.jar file generated in the target directory in 2 to a directory on the Linux node, for example /opt/hive_client (referred to as $HCAT_CLIENT), and ensure that the Hive and YARN clients have been installed. Run the following command to set the HCAT_CLIENT environment variable:

    export HCAT_CLIENT=/opt/hive_client 

  4. Run the following command to configure environment parameters (client installation path /opt/client is used as an example):

    export HADOOP_HOME=/opt/client/HDFS/hadoop 
    export HIVE_HOME=/opt/client/Hive/Beeline 
    export HCAT_HOME=$HIVE_HOME/../HCatalog 
    export LIB_JARS=$HCAT_HOME/lib/hive-hcatalog-core-xxx.jar,$HCAT_HOME/lib/hive-metastore-xxx.jar,$HCAT_HOME/lib/hive-standalone-metastore-xxx.jar,$HIVE_HOME/lib/hive-exec-xxx.jar,$HCAT_HOME/lib/libfb303-xxx.jar,$HCAT_HOME/lib/slf4j-api-xxx.jar,$HCAT_HOME/lib/jdo-api-xxx.jar,$HCAT_HOME/lib/antlr-runtime-xxx.jar,$HCAT_HOME/lib/datanucleus-api-jdo-xxx.jar,$HCAT_HOME/lib/datanucleus-core-xxx.jar,$HCAT_HOME/lib/datanucleus-rdbms-fi-xxx.jar,$HCAT_HOME/lib/log4j-api-xxx.jar,$HCAT_HOME/lib/log4j-core-xxx.jar,$HIVE_HOME/lib/commons-lang-xxx.jar
    export HADOOP_CLASSPATH=$HCAT_HOME/lib/hive-hcatalog-core-xxx.jar:$HCAT_HOME/lib/hive-metastore-xxx.jar:$HCAT_HOME/lib/hive-standalone-metastore-xxx.jar:$HIVE_HOME/lib/hive-exec-xxx.jar:$HCAT_HOME/lib/libfb303-xxx.jar:$HADOOP_HOME/etc/hadoop:$HCAT_HOME/conf:$HCAT_HOME/lib/slf4j-api-xxx.jar:$HCAT_HOME/lib/jdo-api-xxx.jar:$HCAT_HOME/lib/antlr-runtime-xxx.jar:$HCAT_HOME/lib/datanucleus-api-jdo-xxx.jar:$HCAT_HOME/lib/datanucleus-core-xxx.jar:$HCAT_HOME/lib/datanucleus-rdbms-fi-xxx.jar:$HCAT_HOME/lib/log4j-api-xxx.jar:$HCAT_HOME/lib/log4j-core-xxx.jar:$HIVE_HOME/lib/commons-lang-xxx.jar

    xxx indicates the version number of the corresponding JAR file. Change the version numbers of the JAR files specified in LIB_JARS and HADOOP_CLASSPATH based on the actual environment.

  5. Prepare for the running:

    1. In beeline on the Hive client, create the source table t1:

      create table t1(col1 int);

      Insert the following data into t1:

       
          +----------+--+
          | t1.col1  |
          +----------+--+
          | 1        |
          | 1        |
          | 1        |
          | 2        |
          | 2        |
          | 3        |
          +----------+--+
    2. Create destination table t2:

      create table t2(col1 int,col2 int);

    Tables created in this sample project use the default storage format of Hive. Currently, tables whose storage format is ORC are not supported.
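    The procedure above shows the expected contents of t1 but not the statement that loads them. One way to insert the six sample rows, assuming the same beeline session used to create the tables (illustrative only; any equivalent load method works), is:

    ```sql
    -- Illustrative only: loads the sample rows shown above into t1.
    insert into table t1 values (1),(1),(1),(2),(2),(3);
    ```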

  6. Use the Yarn client to submit the task (the last two arguments are the source table t1 and the destination table t2):

    yarn --config $HADOOP_HOME/etc/hadoop jar $HCAT_CLIENT/hcatalog-example-1.0-SNAPSHOT.jar com.huawei.bigdata.HCatalogExample -libjars $LIB_JARS t1 t2

  7. View the running result. The data in t2 is as follows:

    0: jdbc:hive2://192.168.1.18:2181,192.168.1.> select * from t2; 
     +----------+----------+--+ 
     | t2.col1  | t2.col2  | 
     +----------+----------+--+ 
     | 1        | 3        | 
     | 2        | 2        | 
     | 3        | 1        | 
     +----------+----------+--+
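    As the output shows, the sample job groups the rows of t1 by col1 and writes each distinct value together with its occurrence count into t2. As a sanity check, the same aggregation can be reproduced locally with coreutils on the sample data (this only illustrates the expected result; it is not part of the procedure):

    ```shell
    # Count occurrences of each value in the t1.col1 sample data,
    # then print "value count" pairs, matching the t2 rows above.
    printf '1\n1\n1\n2\n2\n3\n' | sort | uniq -c | awk '{print $2, $1}'
    # prints:
    # 1 3
    # 2 2
    # 3 1
    ```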