
Scala Example Code

Function

A user-defined client uses the JDBC interface to submit data analysis tasks and return the results.

Example Code

  1. Define the SQL statements. Each statement must be a single statement and must not contain a semicolon (;). For example:

    import scala.collection.mutable.ArrayBuffer

    val sqlList = new ArrayBuffer[String]
    sqlList += "CREATE TABLE CHILD (NAME STRING, AGE INT) " +
      "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','"
    sqlList += "LOAD DATA LOCAL INPATH '/home/data' INTO TABLE CHILD"
    sqlList += "SELECT * FROM child"
    sqlList += "DROP TABLE child"

    The data file used in the sample project must be placed in the home directory of the host where the JDBCServer is located.
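    For reference, the CHILD table defined above expects one comma-separated NAME,AGE record per line, so a matching /home/data file could look like the following (the rows are illustrative only):

    Mike,20
    John,35
    Ann,19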

  2. Assemble the JDBC URL.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path

    val securityConfig = ";saslQop=auth-conf;auth=KERBEROS;principal=spark2x/hadoop.<system domain name>@<system domain name>;"

    // Read the ZooKeeper URL from the configuration file passed as the first argument.
    val config: Configuration = new Configuration()
    config.addResource(new Path(args(0)))
    val zkUrl = config.get("spark.deploy.zookeeper.url")

    var zkNamespace: String = config.get("spark.thriftserver.zookeeper.namespace")
    // Remove the leading '/' from the configured namespace.
    if (zkNamespace != null) zkNamespace = zkNamespace.substring(1)

    val sb = new StringBuilder("jdbc:hive2://"
      + zkUrl
      + ";serviceDiscoveryMode=zooKeeper;zooKeeperNamespace="
      + zkNamespace
      + securityConfig)
    val url = sb.toString()

    The default validity period of Kerberos authentication is one day. After it expires, you must authenticate again before connecting the client to the JDBCServer. To ensure that authentication is performed each time a connection is established, add the user principal and keytab information to the URL, for example, user.principal=sparkuser;user.keytab=/opt/client/user.keytab.
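    A minimal sketch of appending that information to the URL assembled above (the principal name and keytab path come from the example in the note; adjust them for your cluster):

    // The user.principal and user.keytab values here are placeholders.
    val kerberosAuth = ";user.principal=sparkuser;user.keytab=/opt/client/user.keytab"
    val urlWithKeytab = url + kerberosAuth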

  3. Load the Hive JDBC driver. Obtain the JDBC connection, execute the HQL statements, print the returned column names and results to the console, and close the JDBC connection.

    The zk.quorum in the connection string can be replaced by the value of spark.deploy.zookeeper.url in the configuration file.

    If the network is congested, configure a timeout for the connection between the client and the JDBCServer; otherwise the client may hang indefinitely waiting for a response from the server.

    Before calling DriverManager.getConnection to obtain the JDBC connection, call DriverManager.setLoginTimeout(n) to configure the timeout, where n is the number of seconds to wait for a response from the server. The value is of type Int and defaults to 0, which means the connection never times out.
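    For example, a 30-second login timeout (the value 30 is an arbitrary choice for illustration) can be set as follows:

    import java.sql.DriverManager

    // Wait at most 30 seconds for the server to respond; 0 means wait forever.
    DriverManager.setLoginTimeout(30)
    val connection = DriverManager.getConnection(url)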

    import java.sql.{Connection, DriverManager, PreparedStatement}

    def executeSql(url: String, sqls: Array[String]): Unit = {
      // Load the Hive JDBC driver.
      Class.forName("org.apache.hive.jdbc.HiveDriver").newInstance()

      var connection: Connection = null
      var statement: PreparedStatement = null
      try {
        connection = DriverManager.getConnection(url)
        for (sql <- sqls) {
          println(s"---- Begin executing sql: $sql ----")
          statement = connection.prepareStatement(sql)

          // Statements such as CREATE TABLE, LOAD DATA, and DROP TABLE do not
          // return a result set, so check before reading one.
          if (statement.execute()) {
            val result = statement.getResultSet

            // Print the column names.
            val resultMetaData = result.getMetaData
            val colNum = resultMetaData.getColumnCount
            for (i <- 1 to colNum) {
              print(resultMetaData.getColumnLabel(i) + "\t")
            }
            println()

            // Print each row of the result set.
            while (result.next()) {
              for (i <- 1 to colNum) {
                print(result.getString(i) + "\t")
              }
              println()
            }
          }
          println(s"---- Done executing sql: $sql ----")
        }
      } finally {
        if (null != statement) {
          statement.close()
        }

        if (null != connection) {
          connection.close()
        }
      }
    }
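
    Putting the pieces together, a hypothetical entry point could look like the following sketch (the object name and the use of args(0) as the configuration file path are assumptions made for illustration):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path

    import scala.collection.mutable.ArrayBuffer

    // Hypothetical driver object; the name is an assumption for this sketch.
    object ThriftServerExample {
      def main(args: Array[String]): Unit = {
        val sqlList = new ArrayBuffer[String]
        sqlList += "SELECT * FROM child"

        // Assemble the URL as shown in step 2; args(0) is the configuration file path.
        val config = new Configuration()
        config.addResource(new Path(args(0)))
        val zkUrl = config.get("spark.deploy.zookeeper.url")
        var zkNamespace = config.get("spark.thriftserver.zookeeper.namespace")
        if (zkNamespace != null) zkNamespace = zkNamespace.substring(1)
        val securityConfig = ";saslQop=auth-conf;auth=KERBEROS;principal=spark2x/hadoop.<system domain name>@<system domain name>;"
        val url = "jdbc:hive2://" + zkUrl +
          ";serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=" + zkNamespace + securityConfig

        executeSql(url, sqlList.toArray)
      }
    }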