Java Example Code
Prerequisites
A datasource connection has been created on the DLI management console. For details, see Enhanced Datasource Connections.
CSS Non-Security Cluster
- Development description
- Code implementation
- Constructing dependency information and creating a Spark session
- Import dependencies.
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.2</version>
</dependency>
Import dependency packages.
import org.apache.spark.sql.SparkSession;
- Create a session.
SparkSession sparkSession = SparkSession.builder().appName("datasource-css").getOrCreate();
- Connecting to data sources through SQL APIs
- Create a table to connect to a CSS data source.
sparkSession.sql("create table css_table(id long, name string) using css options( 'es.nodes' = '192.168.9.213:9200', 'es.nodes.wan.only' = 'true','resource' ='/mytest')");
- Insert data.
sparkSession.sql("insert into css_table values(18, 'John'),(28, 'Bob')");
- Query data.
sparkSession.sql("select * from css_table").show();
- Delete the datasource connection table.
sparkSession.sql("drop table css_table");
- Submitting a Spark job
- Generate a JAR package based on the code file and upload the package to DLI.
For details about console operations, see Creating a Package. For details about API operations, see Uploading a Package Group.
- In the Spark job editor, select the corresponding dependency module and execute the Spark job.
For details about console operations, see Creating a Spark Job. For details about API operations, see Creating a Batch Processing Job.
- If the Spark version is 2.3.2 (to be taken offline soon) or 2.4.5, set Module to sys.datasource.css when you submit the job.
- If the Spark version is 3.1.1, you do not need to select a module. Instead, configure the following Spark parameters (--conf):
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/*
- For details about how to submit a job on the console, see the description of the table "Parameters for selecting dependency resources" in Creating a Spark Job.
- For details about how to submit a job through an API, see the description of the modules parameter in Table 2 "Request parameters" in Creating a Batch Processing Job.
- Complete example code
- Maven dependency
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.2</version>
</dependency>
- Connecting to data sources through SQL APIs
import org.apache.spark.sql.*;

public class java_css_unsecurity {

    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("datasource-css-unsecurity").getOrCreate();

        // Create a DLI data table for DLI-associated CSS
        sparkSession.sql("create table css_table(id long, name string) using css options( 'es.nodes' = '192.168.15.34:9200', 'es.nodes.wan.only' = 'true', 'resource' = '/mytest')");

        //*****************************SQL model***********************************
        // Insert data into the DLI data table
        sparkSession.sql("insert into css_table values(18, 'John'),(28, 'Bob')");

        // Read data from DLI data table
        sparkSession.sql("select * from css_table").show();

        // drop table
        sparkSession.sql("drop table css_table");

        sparkSession.close();
    }
}
CSS Security Cluster
- Preparations
Generate the keystore.jks and truststore.jks files and upload them to the OBS bucket. For details, see CSS Security Cluster Configuration.
- Description of development with HTTPS disabled
If HTTPS is disabled, the keystore.jks and truststore.jks files are not required. You only need to disable SSL access (set es.net.ssl to false) and provide the cluster username and password (es.net.http.auth.user and es.net.http.auth.pass).
- Constructing dependency information and creating a Spark session
- Import dependencies.
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.2</version>
</dependency>
Import dependency packages.
import org.apache.spark.sql.SparkSession;
- Create a session.
SparkSession sparkSession = SparkSession.builder().appName("datasource-css").getOrCreate();
- Connecting to data sources through SQL APIs
- Create a table to connect to a CSS data source.
sparkSession.sql("create table css_table(id long, name string) using css options( 'es.nodes' = '192.168.9.213:9200', 'es.nodes.wan.only' = 'true', 'resource' = '/mytest','es.net.ssl'='false','es.net.http.auth.user'='admin','es.net.http.auth.pass'='*******')");
- For details about the parameters for creating a CSS datasource connection table, see Table 1.
- In the preceding example, HTTPS access is disabled for the CSS security cluster. Therefore, you need to set es.net.ssl to false. es.net.http.auth.user and es.net.http.auth.pass are the username and password set during cluster creation, respectively.
- Insert data.
sparkSession.sql("insert into css_table values(18, 'John'),(28, 'Bob')");
- Query data.
sparkSession.sql("select * from css_table").show();
- Delete the datasource connection table.
sparkSession.sql("drop table css_table");
- Submitting a Spark job
- Generate a JAR package based on the code file and upload the package to DLI.
For details about console operations, see Creating a Package. For details about API operations, see Uploading a Package Group.
- In the Spark job editor, select the corresponding dependency module and execute the Spark job.
For details about console operations, see Creating a Spark Job. For details about API operations, see Creating a Batch Processing Job.
- When submitting a job, you need to specify a dependency module named sys.datasource.css.
- For details about how to submit a job on the console, see Parameters for selecting dependency resources in the Data Lake Insight User Guide.
- For details about how to submit a job through an API, see the modules parameter in Request parameters of Creating a Batch Processing Job in the Data Lake Insight API Reference.
- Complete example code
- Maven dependency
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.2</version>
</dependency>
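- Connecting to data sources through SQL APIs
The preceding snippets can be assembled into a complete example along the same lines as the non-security cluster example. The following sketch combines them; the class name java_css_security_httpsoff is illustrative only, and the node address, username, and password must be replaced with your own values.
import org.apache.spark.sql.SparkSession;

public class java_css_security_httpsoff {

    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("datasource-css").getOrCreate();

        // Create a DLI data table for the DLI-associated CSS security cluster (HTTPS disabled)
        sparkSession.sql("create table css_table(id long, name string) using css options( 'es.nodes' = '192.168.9.213:9200', 'es.nodes.wan.only' = 'true', 'resource' = '/mytest','es.net.ssl'='false','es.net.http.auth.user'='admin','es.net.http.auth.pass'='*******')");

        // Insert data into the DLI data table
        sparkSession.sql("insert into css_table values(18, 'John'),(28, 'Bob')");

        // Read data from the DLI data table
        sparkSession.sql("select * from css_table").show();

        // Drop the table
        sparkSession.sql("drop table css_table");

        sparkSession.close();
    }
}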
- Description of development with HTTPS enabled
- Constructing dependency information and creating a Spark session
- Import dependencies.
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.2</version>
</dependency>
Import dependency packages.
import org.apache.spark.SparkFiles;
import org.apache.spark.sql.SparkSession;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
- Create a session.
SparkSession sparkSession = SparkSession.builder().appName("datasource-css").getOrCreate();
- Copy the certificate.
sparkSession.sparkContext().addFile("obs://Bucket name/Address/transport-keystore.jks");
sparkSession.sparkContext().addFile("obs://Bucket name/Address/truststore.jks");

// Obtain the path of the current working directory.
String pathUser = System.getProperty("user.dir");
System.out.println("path_user is " + pathUser);

// Obtain the local paths of the files distributed by addFile.
String esTransportKeystoreFileName = SparkFiles.get("transport-keystore.jks");
String esTruststoreFileName = SparkFiles.get("truststore.jks");
System.out.println("esTransportKeystoreFileName is " + esTransportKeystoreFileName);
System.out.println("esTruststoreFileName is " + esTruststoreFileName);

// Build the local file paths.
String esTransportKeystoreLocalPath = pathUser + "/" + "transport-keystore.jks";
String esTruststoreLocalPath = pathUser + "/" + "truststore.jks";
System.out.println("esTransportKeystoreLocalPath is " + esTransportKeystoreLocalPath);
System.out.println("esTruststoreLocalPath is " + esTruststoreLocalPath);

try {
    // Copy the keystore file.
    copyFile(esTransportKeystoreFileName, esTransportKeystoreLocalPath);
    // Copy the truststore file.
    copyFile(esTruststoreFileName, esTruststoreLocalPath);
    // Wait briefly (2 seconds) for the copy to complete.
    Thread.sleep(2000);
    System.out.println("Files copied successfully:");
    System.out.println("es_transport-keystore.jks: " + esTransportKeystoreLocalPath);
    System.out.println("es_truststore.jks: " + esTruststoreLocalPath);
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}
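The copyFile helper invoked above is not part of the Spark API; it is the small utility defined in the complete example code later in this section:
private static void copyFile(String sourcePath, String destinationPath) throws IOException {
    // Copy the file fetched by addFile into the local working directory.
    byte[] fileContent = Files.readAllBytes(Paths.get(sourcePath));
    Files.write(Paths.get(destinationPath), fileContent);
}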
- Connecting to data sources through SQL APIs
- Create a table to connect to a CSS data source.
sparkSession.sql("create table css_table(id long, name string) using css options( 'es.nodes' = '192.168.13.189:9200', 'es.nodes.wan.only' = 'true', 'resource' = '/mytest','es.net.ssl'='true','es.net.ssl.keystore.location' = 'file://" + esTransportKeystoreLocalPath + "','es.net.ssl.keystore.pass' = '**', 'es.net.ssl.truststore.location'='file://" + esTruststoreLocalPath + "', 'es.net.ssl.truststore.pass'='***','es.net.http.auth.user'='admin','es.net.http.auth.pass'='**')");
For details about the parameters for creating a CSS datasource connection table, see Table 1.
- Insert data.
sparkSession.sql("insert into css_table values(18, 'John'),(28, 'Bob')");
- Query data.
sparkSession.sql("select * from css_table").show();
- Delete the datasource connection table.
sparkSession.sql("drop table css_table");
- Submitting a Spark job
- Generate a JAR package based on the code file and upload the package to DLI.
For details about console operations, see Creating a Package. For details about API operations, see Uploading a Package Group.
- In the Spark job editor, select the corresponding dependency module and execute the Spark job.
For details about console operations, see Creating a Spark Job. For details about API operations, see Creating a Batch Processing Job.
- When submitting a job, you need to specify a dependency module named sys.datasource.css.
- For details about how to submit a job on the console, see Parameters for selecting dependency resources in the Data Lake Insight User Guide.
- For details about how to submit a job through an API, see the modules parameter in Request parameters of Creating a Batch Processing Job in the Data Lake Insight API Reference.
- Complete example code
- Maven dependency
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.2</version>
</dependency>
- Connecting to data sources through SQL APIs
import org.apache.spark.SparkFiles;
import org.apache.spark.sql.SparkSession;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class java_css_security_httpson {

    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("datasource-css").getOrCreate();
        sparkSession.sparkContext().addFile("obs://Bucket name/Address/transport-keystore.jks");
        sparkSession.sparkContext().addFile("obs://Bucket name/Address/css/truststore.jks");

        // Obtain the path of the current working directory.
        String pathUser = System.getProperty("user.dir");
        System.out.println("path_user is " + pathUser);

        // Obtain the local paths of the files distributed by addFile.
        String esTransportKeystoreFileName = SparkFiles.get("transport-keystore.jks");
        String esTruststoreFileName = SparkFiles.get("truststore.jks");
        System.out.println("esTransportKeystoreFileName is " + esTransportKeystoreFileName);
        System.out.println("esTruststoreFileName is " + esTruststoreFileName);

        // Build the local file paths.
        String esTransportKeystoreLocalPath = pathUser + "/" + "transport-keystore.jks";
        String esTruststoreLocalPath = pathUser + "/" + "truststore.jks";
        System.out.println("esTransportKeystoreLocalPath is " + esTransportKeystoreLocalPath);
        System.out.println("esTruststoreLocalPath is " + esTruststoreLocalPath);

        try {
            // Copy the keystore file.
            copyFile(esTransportKeystoreFileName, esTransportKeystoreLocalPath);
            // Copy the truststore file.
            copyFile(esTruststoreFileName, esTruststoreLocalPath);
            // Wait briefly (2 seconds) for the copy to complete.
            Thread.sleep(2000);
            System.out.println("Files copied successfully:");
            System.out.println("es_transport-keystore.jks: " + esTransportKeystoreLocalPath);
            System.out.println("es_truststore.jks: " + esTruststoreLocalPath);
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }

        // Create a DLI data table for DLI-associated CSS
        sparkSession.sql("create table css_table(id long, name string) using css options( 'es.nodes' = '192.168.13.189:9200', 'es.nodes.wan.only' = 'true', 'resource' = '/mytest','es.net.ssl'='true','es.net.ssl.keystore.location' = 'file://" + esTransportKeystoreLocalPath + "','es.net.ssl.keystore.pass' = '**','es.net.ssl.truststore.location'='file://" + esTruststoreLocalPath + "','es.net.ssl.truststore.pass'='**','es.net.http.auth.user'='admin','es.net.http.auth.pass'='**')");

        //*****************************SQL model***********************************
        // Insert data into the DLI data table
        sparkSession.sql("insert into css_table values(34, 'Yuan'),(28, 'Kids')");

        // Read data from DLI data table
        sparkSession.sql("select * from css_table").show();

        // drop table
        sparkSession.sql("drop table css_table");

        sparkSession.close();
    }

    private static void copyFile(String sourcePath, String destinationPath) throws IOException {
        // Copy the file fetched by addFile into the local working directory.
        byte[] fileContent = Files.readAllBytes(Paths.get(sourcePath));
        Files.write(Paths.get(destinationPath), fileContent);
    }
}