Development Plan
Overview
You can develop custom JDBCServer clients that use JDBC connections to create data tables, load data into them, query them, and delete them.
Preparing Data
- Ensure that the JDBCServer service has been started in multi-active instance HA mode and that at least one instance is available for client connections. Create the /home/data file on each JDBCServer node with the following content:
Miranda,32
Karlie,23
Candice,27
- Ensure that the user who starts JDBCServer has read and write permissions on this file.
- Ensure that the hive-site.xml file exists in the classpath and that the parameters required for the client connection are set in it. For details about the parameters required by JDBCServer, see Spark JDBCServer APIs.
Development Guidelines
- Create the child table in the default database.
- Load data in /home/data to the child table.
- Query data in the child table.
- Delete the child table. (A minimal JDBC sketch of these four steps follows this list.)
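The following is a minimal sketch of these four steps over a Hive JDBC connection. The ZooKeeper quorum addresses and the zooKeeperNamespace value are placeholders; take the real connection parameters from the hive-site.xml described above, and in security mode complete the Kerberos login (see Preparations) before connecting.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChildTableSketch {
    public static void main(String[] args) throws Exception {
        // Service-discovery URL for multi-active instance HA mode: ZooKeeper
        // resolves an available JDBCServer instance. Hosts, port, and
        // namespace below are placeholders.
        String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;"
                + "serviceDiscoveryMode=zooKeeper;"
                + "zooKeeperNamespace=sparkthriftserver2x";

        try (Connection conn = DriverManager.getConnection(url, "", "");
             Statement stmt = conn.createStatement()) {
            // 1. Create the child table in the default database.
            stmt.execute("CREATE TABLE IF NOT EXISTS child (name STRING, age INT)"
                    + " ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");

            // 2. Load the prepared /home/data file into the table.
            stmt.execute("LOAD DATA LOCAL INPATH '/home/data' INTO TABLE child");

            // 3. Query the table.
            try (ResultSet rs = stmt.executeQuery("SELECT name, age FROM child")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "," + rs.getInt(2));
                }
            }

            // 4. Delete the table.
            stmt.execute("DROP TABLE child");
        }
    }
}

Run it with the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver) on the classpath; JDBC 4 auto-registers the driver, so no explicit Class.forName call is needed.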
Preparations
For clusters with the security mode enabled, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files for the security mode. Download the authentication credentials of the user principal on the FusionInsight Manager page. The user in the sample code is sparkuser; change it to the name of the prepared development user.
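As a sketch of this step, the standard Hadoop API can perform the keytab login before the JDBC connection is opened. The file paths are the example paths used in Packaging the Project below, and the shipped sample may use its own login helper:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLogin {
    public static void main(String[] args) throws Exception {
        // Point the JVM at the downloaded krb5.conf (path is an example).
        System.setProperty("java.security.krb5.conf", "/opt/female/krb5.conf");

        // Enable Kerberos authentication for Hadoop clients.
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in as the prepared development user with its keytab.
        UserGroupInformation.loginUserFromKeytab("sparkuser", "/opt/female/user.keytab");
        System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
    }
}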
Packaging the Project
- Upload the krb5.conf and user.keytab files to the server where the client is located.
- Use the Maven tool provided by IDEA to package the project and generate a JAR file (an equivalent command-line invocation is shown after this list). For details, see Commissioning a Spark Application in a Linux Environment.
Before compilation and packaging, change the paths of the user.keytab and krb5.conf files in the sample code to the actual paths on the client server. For example, /opt/female/user.keytab and /opt/female/krb5.conf.
- Upload the JAR file to any directory (for example, /opt/female/) on the server where the Spark client is located.
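If you package from the command line rather than from IDEA, the equivalent Maven invocation (assuming a standard Maven project layout) is:
mvn clean package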
Running the Task
Go to the Spark client directory and run the java -cp command to run the code. The commands below are examples; replace the class name and JAR file name with those in your actual code.
- Run the Java sample code:
java -cp $SPARK_HOME/jars/*:$SPARK_HOME/jars/hive/*:$SPARK_HOME/conf:/opt/female/SparkThriftServerJavaExample-1.0.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf
- Run the Scala sample code:
java -cp $SPARK_HOME/jars/*:$SPARK_HOME/jars/hive/*:$SPARK_HOME/conf:/opt/female/SparkThriftServerExample-1.0.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf
If the SSL feature of ZooKeeper is enabled for the cluster (check the ssl.enabled parameter of the ZooKeeper service), add the -Dzookeeper.client.secure=true and -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty parameters to the command:
java -Dzookeeper.client.secure=true -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -cp $SPARK_HOME/jars/*:$SPARK_HOME/jars/hive/*:$SPARK_HOME/conf:/opt/female/SparkThriftServerJavaExample-1.0.jar com.huawei.bigdata.spark.examples.ThriftServerQueriesTest $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/spark-defaults.conf