Development Plan
Scenarios
You can write a custom JDBCServer client and use JDBC connections to create tables, load data into them, query them, and delete them.
Data Preparation
Upload the data file to HDFS.
- Ensure that the JDBCServer service has been started in multi-active instance HA mode and that at least one instance provides connections for clients. On the HDFS client of the Linux OS, create a text file named data with the following content:
Miranda,32
Karlie,23
Candice,27
- Create a directory in HDFS, for example, /home, and run the following commands to upload the data file to it:
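For example, assuming the data file is named data and is in the current local directory (the target directory /home is only an example):

hdfs dfs -mkdir /home
hdfs dfs -put data /home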
- Ensure that the user who starts the JDBCServer has read and write permissions on the file.
- Ensure that the hive-site.xml file exists in the classpath and that the parameters required for the client connection are set; a sketch of the file format follows this list. For details about the parameters required by the JDBCServer, see Spark JDBCServer APIs.
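As a minimal sketch, hive-site.xml uses the standard Hadoop XML configuration format. The property shown here (hive.metastore.uris with a placeholder host) only illustrates the format and is not necessarily one of the required parameters; take the actual parameter list from Spark JDBCServer APIs.

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Placeholder value; take the real host and port from your cluster. -->
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>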
Development Idea
- Create a child table in the default database.
- Load the data in /home/data into the child table.
- Query data in the child table.
- Delete the child table.
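Issued over a JDBC connection, these four steps look roughly like the following minimal Java sketch. It assumes the Hive JDBC driver is on the classpath; the URL, credentials, and class name are placeholders, and the real connection string for an HA JDBCServer is built from the cluster's hive-site.xml and spark-defaults.conf.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChildTableSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: in multi-active instance HA mode the URL normally
        // points at the ZooKeeper quorum; take the real value from your cluster.
        String url = "jdbc:hive2://<zk-quorum>/default";
        try (Connection conn = DriverManager.getConnection(url, "", "");
             Statement stmt = conn.createStatement()) {
            // 1. Create the child table in the default database; the field
            //    delimiter matches the comma-separated data file above.
            stmt.execute("CREATE TABLE CHILD (NAME STRING, AGE INT) "
                    + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");
            // 2. Load the data in /home/data into the child table.
            stmt.execute("LOAD DATA INPATH '/home/data' INTO TABLE CHILD");
            // 3. Query data in the child table.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM CHILD")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "," + rs.getInt(2));
                }
            }
            // 4. Delete the child table.
            stmt.execute("DROP TABLE CHILD");
        }
    }
}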
Packaging the Project
- Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Writing and Running the Spark Program in the Linux Environment.
- Upload the JAR file to any directory (for example, /opt/female/) on the server where the Spark client is located.
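For reference, packaging and uploading can also be done from the command line, assuming a standard Maven project layout and the example paths above (the host name is a placeholder):

mvn clean package
scp target/SparkThriftServerJavaExample-1.0.jar user@spark-client-host:/opt/female/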
Running Tasks
Go to the Spark client directory and run the java -cp command to run the code. The class name and file name must match those in your actual code; the following commands are only examples.
- Run the Java sample code:
java -cp $SPARK_HOME/jars/*:$SPARK_HOME/jars/hive/*:$SPARK_HOME/conf:/opt/female/SparkThriftServerJavaExample-1.0.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf
- Run the Scala sample code:
java -cp $SPARK_HOME/jars/*:$SPARK_HOME/jars/hive/*:$SPARK_HOME/conf:/opt/female/SparkThriftServerExample-1.0.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf
After the SSL feature of ZooKeeper is enabled for the cluster (check the ssl.enabled parameter of the ZooKeeper service), add the -Dzookeeper.client.secure=true and -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty parameters to the command:
java -Dzookeeper.client.secure=true -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -cp $SPARK_HOME/jars/*:$SPARK_HOME/jars/hive/*:$SPARK_HOME/conf:/opt/female/SparkThriftServerJavaExample-1.0.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf