Running the JDBC Client and Viewing Results
- Run the mvn package command to generate a JAR file (for example, hive-examples-1.0.jar) and obtain it from the target directory of the project.
- Create a directory as the running directory in the running and commissioning environment, for example, /opt/hive_examples (Linux), and create the conf subdirectory in it.
Copy the hive-examples-1.0.jar file generated in Step 1 to /opt/hive_examples.
Copy the configuration files from the client to the conf directory. For a security cluster with Kerberos authentication enabled, copy the user.keytab and krb5.conf files obtained in Step 5 to the /opt/hive_examples/conf directory; for a cluster with Kerberos authentication disabled, these two files are not needed. Also copy the ${HIVE_HOME}/../config/hiveclient.properties file to the /opt/hive_examples/conf directory. A sketch of how a client might consume these files follows the commands below.
cd /opt/hive_examples/conf
cp /opt/client/Hive/config/hiveclient.properties .
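The example program reads these files at run time. As a rough, hypothetical sketch of how a client could load hiveclient.properties and perform the Kerberos login with the copied keytab (the property handling, principal name, and class name are illustrative assumptions, not taken from the actual example code):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class ClientSetupSketch {
    public static void main(String[] args) throws IOException {
        // Load the client connection settings copied into conf/.
        Properties clientProps = new Properties();
        try (FileInputStream in =
                 new FileInputStream("/opt/hive_examples/conf/hiveclient.properties")) {
            clientProps.load(in);
        }

        // Point the JVM at the copied krb5.conf before any Kerberos activity.
        System.setProperty("java.security.krb5.conf",
            "/opt/hive_examples/conf/krb5.conf");

        // Log in from the copied keytab; the principal below is a placeholder.
        Configuration hadoopConf = new Configuration();
        hadoopConf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(hadoopConf);
        UserGroupInformation.loginUserFromKeytab(
            "hiveuser@HADOOP.COM",                     // placeholder principal
            "/opt/hive_examples/conf/user.keytab");

        System.out.println("Kerberos login as: "
            + UserGroupInformation.getLoginUser().getUserName());
    }
}

For a cluster with Kerberos authentication disabled, the login steps are simply skipped and only the connection settings are loaded.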
- Prepare the JAR packages related to the sample program.
Create a directory (for example, /opt/hive_examples/lib) in the commissioning environment to store the dependency JAR packages. Copy all packages in ${HIVE_HOME}/lib/ to that directory, and then delete the derby-10.10.2.0.jar package. (The JAR package version numbers vary with the site.)
mkdir /opt/hive_examples/lib
cp ${HIVE_HOME}/lib/* /opt/hive_examples/lib
rm -f /opt/hive_examples/lib/derby-10.10.2.0.jar
- In Linux, run the following command to run the sample program:
chmod +x /opt/hive_examples -R
cd /opt/hive_examples
source /opt/client/bigdata_env
java -cp .:hive-examples-1.0.jar:/opt/hive_examples/conf:/opt/hive_examples/lib/*:/opt/client/HDFS/hadoop/lib/* com.huawei.bigdata.hive.example.ExampleMain
- In the CLI, view the HiveQL query results of the example code; a sketch of the corresponding JDBC calls follows the expected output below.
If the following information is displayed, the sample project has run successfully on Linux.
Create table success!
_c0
0
Delete table success!
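The output lines correspond to a create-table statement, a COUNT(*) query over the empty table (Hive names the computed column _c0 and returns 0), and a drop-table statement. A minimal, self-contained Hive JDBC sketch producing equivalent output might look as follows; the connection URL and table name are illustrative assumptions rather than excerpts from ExampleMain:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer2 JDBC driver (auto-loaded on newer drivers).
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Illustrative URL; the real example assembles it from hiveclient.properties.
        String url = "jdbc:hive2://<hiveserver_host>:10000/default";

        try (Connection conn = DriverManager.getConnection(url, "", "");
             Statement stmt = conn.createStatement()) {
            // CREATE TABLE produces the first success line.
            stmt.execute("CREATE TABLE IF NOT EXISTS employees_info(id INT, name STRING)");
            System.out.println("Create table success!");

            // COUNT(*) over the empty table yields column _c0 with value 0.
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM employees_info")) {
                System.out.println(rs.getMetaData().getColumnName(1)); // _c0
                while (rs.next()) {
                    System.out.println(rs.getLong(1));                 // 0
                }
            }

            // DROP TABLE produces the final success line.
            stmt.execute("DROP TABLE employees_info");
            System.out.println("Delete table success!");
        }
    }
}

The driver class org.apache.hive.jdbc.HiveDriver and the jdbc:hive2:// URL scheme are standard for HiveServer2; everything else in the sketch is a placeholder.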