Debugging the HCatalog Sample Program
The Hive HCatalog sample program runs in a Linux environment where the Hive and YARN clients are installed. After developing the program code, compile it and upload the JAR package to the prepared Linux environment.
Procedure
- In the lower-left corner of the IDEA window, click Terminal to open the terminal, and run the mvn clean install command to compile the package.
If BUILD SUCCESS is displayed, the compilation is successful. The hcatalog-example-*.jar package is generated in the target directory of the sample project.
The preceding JAR file names are for reference only. The actual names may vary.
- Upload the hcatalog-example-*.jar file generated in the target directory in Step 1 to a directory on the Linux node, for example, /opt/hive_client (referred to as $HCAT_CLIENT below), and ensure that the Hive and YARN clients have been installed. Run the following command so that the HCAT_CLIENT environment variable takes effect:
export HCAT_CLIENT=/opt/hive_client
- Run the following command to configure environment parameters (client installation path /opt/client is used as an example):
export HADOOP_HOME=/opt/client/HDFS/hadoop
export HIVE_HOME=/opt/client/Hive/Beeline
export HCAT_HOME=$HIVE_HOME/../HCatalog
export LIB_JARS=$HCAT_HOME/lib/hive-hcatalog-core-xxx.jar,$HCAT_HOME/lib/hive-metastore-xxx.jar,$HCAT_HOME/lib/hive-standalone-metastore-xxx.jar,$HIVE_HOME/lib/hive-exec-xxx.jar,$HCAT_HOME/lib/libfb303-xxx.jar,$HCAT_HOME/lib/slf4j-api-xxx.jar,$HCAT_HOME/lib/jdo-api-xxx.jar,$HCAT_HOME/lib/antlr-runtime-xxx.jar,$HCAT_HOME/lib/datanucleus-api-jdo-xxx.jar,$HCAT_HOME/lib/datanucleus-core-xxx.jar,$HCAT_HOME/lib/datanucleus-rdbms-fi-xxx.jar,$HCAT_HOME/lib/log4j-api-xxx.jar,$HCAT_HOME/lib/log4j-core-xxx.jar,$HIVE_HOME/lib/commons-lang-xxx.jar
export HADOOP_CLASSPATH=$HCAT_HOME/lib/hive-hcatalog-core-xxx.jar:$HCAT_HOME/lib/hive-metastore-xxx.jar:$HCAT_HOME/lib/hive-standalone-metastore-xxx.jar:$HIVE_HOME/lib/hive-exec-xxx.jar:$HCAT_HOME/lib/libfb303-xxx.jar:$HADOOP_HOME/etc/hadoop:$HCAT_HOME/conf:$HCAT_HOME/lib/slf4j-api-xxx.jar:$HCAT_HOME/lib/jdo-api-xxx.jar:$HCAT_HOME/lib/antlr-runtime-xxx.jar:$HCAT_HOME/lib/datanucleus-api-jdo-xxx.jar:$HCAT_HOME/lib/datanucleus-core-xxx.jar:$HCAT_HOME/lib/datanucleus-rdbms-fi-xxx.jar:$HCAT_HOME/lib/log4j-api-xxx.jar:$HCAT_HOME/lib/log4j-core-xxx.jar:$HIVE_HOME/lib/commons-lang-xxx.jar
xxx: indicates the version number of the JAR package. Change the version numbers of the JAR files specified in LIB_JARS and HADOOP_CLASSPATH based on the actual environment.
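Replacing every xxx placeholder by hand is error-prone. The following is a minimal sketch (not part of the sample; build_jar_list is a hypothetical helper) that joins whatever jar versions actually exist on disk into a single list, so the same function can build LIB_JARS (comma-separated) and HADOOP_CLASSPATH (colon-separated):

```shell
#!/bin/sh
# build_jar_list SEPARATOR FILE... - join existing files with SEPARATOR.
# Call it with unquoted globs so the shell expands the version wildcards,
# e.g. build_jar_list , $HCAT_HOME/lib/hive-hcatalog-core-*.jar
build_jar_list() {
  sep="$1"; shift
  out=""
  for jar in "$@"; do
    [ -e "$jar" ] || continue          # skip globs that matched nothing
    if [ -z "$out" ]; then
      out="$jar"
    else
      out="$out$sep$jar"
    fi
  done
  printf '%s\n' "$out"
}
```

Under the layout assumed in the step above, usage would look like `export LIB_JARS=$(build_jar_list , $HCAT_HOME/lib/hive-hcatalog-core-*.jar $HCAT_HOME/lib/hive-metastore-*.jar ...)`; verify the resulting list against the documented jar set before submitting the job.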
- Prepare for running the job:
- Use the Hive client to create source table t1 in beeline:
Insert the following data into t1:
+----------+--+
| t1.col1  |
+----------+--+
| 1        |
| 1        |
| 1        |
| 2        |
| 2        |
| 3        |
+----------+--+
- Create destination table t2:
Tables created in this sample project use the default storage format of Hive. Currently, tables whose storage format is ORC are not supported.
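The DDL and insert statements for the preparation steps are not shown above. A minimal sketch consistent with those steps (table and column names come from the result listings; the INT column types are an assumption) could be:

```sql
-- Sketch only: the sample's actual DDL may differ; default Hive storage
-- format is used because ORC tables are not supported here.
CREATE TABLE t1 (col1 INT);
INSERT INTO TABLE t1 VALUES (1), (1), (1), (2), (2), (3);
CREATE TABLE t2 (col1 INT, col2 INT);
```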
- Use the YARN client to submit the task:
yarn --config $HADOOP_HOME/etc/hadoop jar $HCAT_CLIENT/hcatalog-example-1.0-SNAPSHOT.jar com.huawei.bigdata.HCatalogExample -libjars $LIB_JARS t1 t2
- View the running result. The data in t2 is as follows:
0: jdbc:hive2://192.168.1.18:2181,192.168.1.> select * from t2;
+----------+----------+--+
| t2.col1  | t2.col2  |
+----------+----------+--+
| 1        | 3        |
| 2        | 2        |
| 3        | 1        |
+----------+----------+--+
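The output above suggests the sample counts how many times each value appears in t1 and writes the value together with its count to t2. As an interpretation of the result (not the sample's actual code), the same answer can be expressed in HiveQL as:

```sql
-- Equivalent aggregation for cross-checking the job's output.
SELECT col1, COUNT(*) AS col2
FROM t1
GROUP BY col1;
```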