Scenario Description

Users can develop custom JDBCServer clients and use JDBC connections to create tables, load data into them, query them, and delete them.
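
A minimal client sketch in Java, assuming the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver) is on the classpath; the host, port, and database in the URL are hypothetical placeholders, not values from this document:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcServerClientSketch {
        public static void main(String[] args) throws Exception {
            // Explicit registration helps with older driver versions.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Hypothetical JDBCServer address; on a secured cluster, append
            // a principal option such as ";principal=<spark.beeline.principal value>".
            String url = "jdbc:hive2://jdbcserver-host:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }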

Data Planning

  1. Ensure that the JDBCServer service is started in HA mode and that at least one instance serves external requests. Create the /home/data directory on HDFS and upload to it a file containing the following content (a sketch of this upload follows the list):

    Miranda,32
    Karlie,23
    Candice,27

  2. Ensure that the user who starts JDBCServer has permissions to read and write the file.
  3. Ensure that the hive-site.xml file exists in $SPARK_HOME/conf, and set related parameters based on the actual cluster conditions.

    Example:
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <configuration>
        <property>
            <name>spark.thriftserver.ha.enabled</name>
            <value>true</value>
        </property>
    </configuration>

  4. Change the value of principal in the ThriftServerQueriesTest class to the value of spark.beeline.principal in the cluster's $SPARK_HOME/conf/spark-defaults.conf configuration file.
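
For step 1, the directory and file can be prepared with the Hadoop FileSystem API. The following Java sketch assumes the client's core-site.xml and hdfs-site.xml are on the classpath and uses the hypothetical file name data.txt:

    import java.io.PrintWriter;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class UploadSampleData {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml/hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            // data.txt is a hypothetical name; fs.create also creates
            // /home/data if it does not exist yet.
            try (FileSystem fs = FileSystem.get(conf);
                 PrintWriter out = new PrintWriter(
                         fs.create(new Path("/home/data/data.txt")))) {
                out.println("Miranda,32");
                out.println("Karlie,23");
                out.println("Candice,27");
            }
        }
    }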

Development Guidelines

  1. Create the child table in the default database.
  2. Load the data in /home/data into the child table.
  3. Query data in the child table.
  4. Delete the child table.
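
A minimal end-to-end sketch of these four steps over JDBC, reusing the hypothetical connection URL from the client sketch above; the table layout matches the comma-separated name,age records prepared in Data Planning:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ChildTableWorkflowSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBCServer address, as in the client sketch above.
            String url = "jdbc:hive2://jdbcserver-host:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "", "");
                 Statement stmt = conn.createStatement()) {
                // 1. Create the child table in the default database.
                stmt.execute("CREATE TABLE IF NOT EXISTS child (name STRING, age INT)"
                        + " ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");
                // 2. Load the prepared data from /home/data into the table.
                stmt.execute("LOAD DATA INPATH '/home/data' INTO TABLE child");
                // 3. Query the table and print each row.
                try (ResultSet rs = stmt.executeQuery("SELECT name, age FROM child")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + "," + rs.getInt(2));
                    }
                }
                // 4. Delete the child table.
                stmt.execute("DROP TABLE child");
            }
        }
    }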