Scenario Description
Users can write custom JDBCServer clients and use JDBC connections to create data tables, load data into them, query them, and delete them.
Data Planning
- Ensure that the JDBCServer service is started in HA mode and that at least one instance provides services to external systems.
- Create the /home/data directory on HDFS, create a file containing the following content, and upload the file to the /home/data directory on HDFS:

  Miranda,32
  Karlie,23
  Candice,27
- Ensure that the user who starts JDBCServer has permissions to read and write the file.
- Ensure that the hive-site.xml file exists in $SPARK_HOME/conf, and set the related parameters based on the actual cluster conditions. Example:

  <?xml version="1.0" encoding="UTF-8" standalone="no"?>
  <configuration>
    <property>
      <name>spark.thriftserver.ha.enabled</name>
      <value>true</value>
    </property>
  </configuration>

- Change the value of principal in the ThriftServerQueriesTest class to the value of spark.beeline.principal in the $SPARK_HOME/conf/spark-default.conf configuration file of the cluster.
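With HA enabled, clients typically reach the ThriftServer through a ZooKeeper-based service-discovery URL rather than a fixed host, passing the Kerberos principal as a URL parameter. The following is a minimal sketch of assembling such a JDBC URL in Java; the ZooKeeper quorum, namespace, and principal values are hypothetical placeholders, not values from this page:

```java
// Sketch: build a ThriftServer JDBC URL for an HA deployment.
// The host names, ZooKeeper namespace, and principal below are
// illustrative placeholders -- substitute your cluster's values.
public class ThriftServerUrlBuilder {
    public static String buildHaUrl(String zkQuorum, String zkNamespace, String principal) {
        // ZooKeeper-based service discovery: the JDBC driver picks a live
        // ThriftServer instance registered under the given namespace.
        return "jdbc:hive2://" + zkQuorum
                + "/;serviceDiscoveryMode=zooKeeper"
                + ";zooKeeperNamespace=" + zkNamespace
                + ";principal=" + principal;
    }

    public static void main(String[] args) {
        String url = buildHaUrl(
                "zk1:2181,zk2:2181,zk3:2181",            // hypothetical ZooKeeper quorum
                "sparkthriftserver",                      // hypothetical namespace
                "spark/hadoop.example.com@EXAMPLE.COM");  // hypothetical principal
        System.out.println(url);
    }
}
```

The URL string would then be passed to java.sql.DriverManager#getConnection in the client.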
Development Guidelines
- Create the child table in the default database.
- Load data in /home/data to the child table.
- Query data in the child table.
- Delete the child table.
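The four steps above each map to one Spark SQL statement executed over the JDBC connection. The following is a minimal sketch of those statements; the table name child and the /home/data path come from this page, while the column names and types (NAME STRING, AGE INT) are assumed from the comma-delimited sample data:

```java
// Sketch: the four Spark SQL statements for the steps above, in order.
// Column names/types are assumed from the sample data; in a real client
// each string would be passed to java.sql.Statement#execute on an open
// JDBC connection to the ThriftServer.
public class ChildTableSql {
    public static final String[] STATEMENTS = {
        // 1. Create the child table in the default database.
        "CREATE TABLE IF NOT EXISTS CHILD (NAME STRING, AGE INT) "
            + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','",
        // 2. Load the data in /home/data into the child table.
        "LOAD DATA INPATH '/home/data' INTO TABLE CHILD",
        // 3. Query data in the child table.
        "SELECT * FROM CHILD",
        // 4. Delete the child table.
        "DROP TABLE CHILD"
    };

    public static void main(String[] args) {
        for (String sql : STATEMENTS) {
            System.out.println(sql);
        }
    }
}
```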