Application for Accessing Spark SQL Through JDBC
Updated on 2022-06-01 GMT+08:00

Scenario Description
Users can write a custom JDBCServer client and use JDBC connections to create tables, load data into them, query the data, and delete the tables.
Data Planning
- Ensure that the JDBCServer service is started in HA mode and that at least one instance provides services to external systems. Create the /home/data directory on HDFS, create a file containing the following content, and upload the file to the /home/data directory on HDFS:

  Miranda,32
  Karlie,23
  Candice,27
- Ensure that the user who starts JDBCServer has permission to read and write the file.
- Ensure that the hive-site.xml file exists in $SPARK_HOME/conf, and set related parameters based on the actual cluster conditions.
Example:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration>
  <property>
    <name>spark.thriftserver.ha.enabled</name>
    <value>true</value>
  </property>
</configuration>
- Change the value of principal in the ThriftServerQueriesTest class to the value of spark.beeline.principal in the $SPARK_HOME/conf/spark-default.conf configuration file of the cluster.
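The data-planning steps above can be sketched as follows. This is a minimal sketch: the file name data.txt is an assumption, and the hdfs commands (shown commented out) require a configured HDFS client for your cluster.

```shell
# Create the data file locally with the sample records from the data plan.
cat > data.txt <<'EOF'
Miranda,32
Karlie,23
Candice,27
EOF

# Upload it to the /home/data directory on HDFS (requires a configured client
# and a user with read/write permission on the directory):
# hdfs dfs -mkdir -p /home/data
# hdfs dfs -put data.txt /home/data/
```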
Development Guidelines
- Create the child table in the default database.
- Load data in /home/data to the child table.
- Query data in the child table.
- Delete the child table.
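The four steps in the guidelines can be sketched as a plain JDBC client. This is a hypothetical sketch, not the ThriftServerQueriesTest class itself: the class name, the placeholder JDBC URL, and the omission of Kerberos login are assumptions; adapt the connection details (host, port, principal) to your cluster, and keep the Hive JDBC driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThriftServerSketch {

    // The four SQL statements matching the development guidelines:
    // create the child table, load /home/data into it, query it, delete it.
    public static final String[] SQL_STEPS = {
        "CREATE TABLE IF NOT EXISTS child (name STRING, age INT) "
            + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','",
        "LOAD DATA INPATH '/home/data' INTO TABLE child",
        "SELECT name, age FROM child",
        "DROP TABLE child"
    };

    public static void main(String[] args) throws Exception {
        // Placeholder URL (an assumption): take the real ZooKeeper/HA
        // connection string and security settings from your cluster's
        // client configuration.
        String url = "jdbc:hive2://<server>:<port>/default";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            for (String sql : SQL_STEPS) {
                if (sql.startsWith("SELECT")) {
                    // Print each queried row as "name,age".
                    try (ResultSet rs = stmt.executeQuery(sql)) {
                        while (rs.next()) {
                            System.out.println(
                                rs.getString(1) + "," + rs.getInt(2));
                        }
                    }
                } else {
                    stmt.execute(sql);
                }
            }
        }
    }
}
```

The statements are kept in an array so the create/load/query/delete sequence mirrors the guideline steps one-to-one and can be extended without changing the execution loop.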