Java Example Code
Development Description
- Prerequisites
A datasource connection has been created and bound to a queue on the DLI management console.
Hard-coded or plaintext passwords pose significant security risks. To ensure security, encrypt your passwords, store them in configuration files or environment variables, and decrypt them when needed.
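For example, a minimal sketch of this approach in Java, assuming the password is stored in an environment variable (the name RDS_PASSWORD is a hypothetical example, not a DLI requirement):
// Read the RDS password from an environment variable at runtime
// instead of hard-coding it in the source.
String password = System.getenv("RDS_PASSWORD");  // hypothetical variable name
if (password == null || password.isEmpty()) {
    throw new IllegalStateException("RDS_PASSWORD environment variable is not set");
}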
- Code implementation
- Import dependencies.
- Maven dependency involved
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.3.2</version>
</dependency>
- Import dependency packages.
import org.apache.spark.sql.SparkSession;
- Create a session.
SparkSession sparkSession = SparkSession.builder().appName("datasource-rds").getOrCreate();
- Connecting to data sources through SQL APIs
- Create a table to connect to an RDS data source and set connection parameters.
sparkSession.sql(
  "CREATE TABLE IF NOT EXISTS dli_to_rds USING JDBC OPTIONS (
  'url'='jdbc:mysql://to-rds-1174404209-cA37siB6.datasource.com:3306', // Set this parameter to the actual URL.
  'dbtable'='test.customer',
  'user'='root', // Set this parameter to the actual user.
  'password'='######', // Set this parameter to the actual password.
  'driver'='com.mysql.jdbc.Driver')")
For details about the parameters for creating a table, see Table 1.
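In line with the security note in Prerequisites, the statement can also be assembled from configuration values rather than literals. The sketch below is an illustration only: url, user, and password stand in for values loaded from your own configuration or environment.
// Build the CREATE TABLE statement from configuration values instead of
// hard-coded literals; url, user, and password are assumed to be loaded
// elsewhere (for example, from environment variables).
String createTableSql = String.format(
    "CREATE TABLE IF NOT EXISTS dli_to_rds USING JDBC OPTIONS ("
    + "'url'='%s', 'dbtable'='test.customer', 'user'='%s', "
    + "'password'='%s', 'driver'='com.mysql.jdbc.Driver')",
    url, user, password);
sparkSession.sql(createTableSql);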
- Insert data.
sparkSession.sql("insert into dli_to_rds values (1,'John',24)");
- Query data.
sparkSession.sql("select * from dli_to_rds").show();
- Submitting a Spark job
- Generate a JAR package based on the code and upload the package to DLI.
- In the Spark job editor, select the corresponding dependency module and execute the Spark job.
- After the Spark job is created, click Execute in the upper right corner of the console to submit it. The message "Spark job submitted successfully." confirms the submission; you can then view the job's status and logs on the Spark Jobs page.
- If the Spark version is 2.3.2 (to be taken offline soon) or 2.4.5, set Module to sys.datasource.rds when you submit the job.
- If the Spark version is 3.1.1, you do not need to select a module. Instead, configure the following Spark parameters (--conf):
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/*
- For details about how to submit a job on the console, see the description of the table "Parameters for selecting dependency resources" in Creating a Spark Job.
- For details about how to submit a job through an API, see the description of the modules parameter in Table 2 "Request parameters" in Creating a Batch Processing Job.
Complete Example Code
Connecting to data sources through SQL APIs
import org.apache.spark.sql.SparkSession;

public class java_rds {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("datasource-rds").getOrCreate();

        // Create a DLI table associated with the RDS data source
        sparkSession.sql("CREATE TABLE IF NOT EXISTS dli_to_rds USING JDBC OPTIONS ("
                + "'url'='jdbc:mysql://192.168.6.150:3306',"
                + "'dbtable'='test.customer',"
                + "'user'='root',"
                + "'password'='**',"
                + "'driver'='com.mysql.jdbc.Driver')");

        // ***************************** SQL model ***********************************
        // Insert data into the DLI table
        sparkSession.sql("insert into dli_to_rds values(3,'Liu',21),(4,'Joey',34)");

        // Read data from the DLI table and display it
        sparkSession.sql("select * from dli_to_rds").show();

        // Drop the table
        sparkSession.sql("drop table dli_to_rds");

        sparkSession.close();
    }
}