
Detailed Development Description


Prerequisites

A datasource connection has been created on the DLI management console.

Construct dependency information and create a Spark session.

  1. Import dependencies
    Maven dependency involved:
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.3.2</version>
    </dependency>

    Import statements:
    import org.apache.spark.sql.{Row, SaveMode, SparkSession}
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
  2. Create a session
    val sparkSession = SparkSession.builder().getOrCreate()

Connecting to Datasources Through SQL APIs

  1. Create a table to connect to the CSS datasource
    sparkSession.sql("""create table css_table(id int, name string) using css options(
      'nodes' = 'to-css-1174404221-Y2bKVIqY.datasource.com:9200',
      'nodes.wan.only' = 'true',
      'resource' = '/mytest/css')""")
    Table 1 Parameters for creating a table

    es.nodes
      Specifies the CSS connection address. You need to create a datasource connection first. For details about how to create a datasource connection on the management console, see Basic Datasource Connections and Enhanced Datasource Connections in the Data Lake Insight User Guide.
      After a basic datasource connection is created, use the returned IP address.
      After an enhanced datasource connection is created, use the intranet address provided by CSS, in the format IP1:PORT1,IP2:PORT2. For details about how to obtain the address, see Figure 1.

    resource
      Specifies the resource location in CSS to connect to. You can use /index/type to specify the resource location (for easier understanding, the index can be seen as a database and the type as a table).
      NOTE:
      • In ES 6.X, a single index supports only one type, and the type name can be customized.
      • In ES 7.X, a single index uses _doc as the type name, and the name cannot be customized. To access ES 7.X, set this parameter to index.

    pushdown
      Indicates whether the pushdown function of CSS is enabled. The default value is true. For tables with a large amount of I/O, enabling pushdown reduces I/O when where filter conditions are specified.

    strict
      Indicates whether CSS pushdown is strict. The default value is false. In exact-match scenarios, strict pushdown reduces more I/O than ordinary pushdown.

    batch.size.entries
      Maximum number of entries that can be inserted in a single batch. The default value is 1000. If a single record is so large that the batch reaches the byte limit of a single batch before the entry limit is reached, the system stops buffering and submits the batch based on batch.size.bytes.

    batch.size.bytes
      Maximum amount of data in a single batch. The default value is 1 MB. If a single record is so small that the batch reaches the entry limit of a single batch before the byte limit is reached, the system stops buffering and submits the batch based on batch.size.entries.

    es.nodes.wan.only
      Indicates whether to access the Elasticsearch node using only the domain name. The default value is false. If a basic datasource connection address is used as es.nodes, set this parameter to true. If the original intranet IP address provided by CSS is used as es.nodes, you do not need to set this parameter, or you can set it to false.

    es.mapping.id
      Specifies a field whose value is used as the document ID in the Elasticsearch node.
      NOTE:
      • The document ID in the same /index/type is unique. If the field used as the document ID has duplicate values, the document with the duplicate ID is overwritten when the data is inserted into ES.
      • This feature can be used as a fault tolerance solution: if a DLI job fails after some data has already been inserted into ES, rerunning the job would normally write that data again. With es.mapping.id set, the redundant documents are simply overwritten when the job is executed again.

    batch.size.entries and batch.size.bytes limit the number of data records and the data volume respectively.

    Figure 1 CSS cluster information
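    To illustrate the options above, here is a minimal sketch of a create-table statement that sets the tuning options from Table 1 explicitly. The table name css_table_tuned is made up for this example, the endpoint is the placeholder address used earlier, and the exact accepted value formats (for example '1mb') depend on the connector version:
    // A hedged sketch, not the only valid form; all option keys come from Table 1.
    sparkSession.sql("""create table css_table_tuned(id int, name string) using css options(
      'nodes' = 'to-css-1174404221-Y2bKVIqY.datasource.com:9200',
      'resource' = '/mytest/css',
      'pushdown' = 'true',
      'strict' = 'false',
      'batch.size.entries' = '1000',
      'batch.size.bytes' = '1mb',
      'es.mapping.id' = 'id')""")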
  2. Insert data
    sparkSession.sql("insert into css_table values(13, 'John'),(22, 'Bob')")
  3. Query data
    val dataFrame = sparkSession.sql("select * from css_table")
    dataFrame.show()

    The query returns an empty result before the insert and the two inserted rows afterward.

  4. Delete the datasource connection table
    sparkSession.sql("drop table css_table")

Connecting to Datasources Through DataFrame APIs

  1. Configure the datasource connection
    val resource = "/mytest/css"
    val nodes = "to-css-1174405013-Ht7O1tYf.datasource.com:9200"
  2. Create a schema and prepare the data
    val schema = StructType(Seq(StructField("id", IntegerType, false), StructField("name", StringType, false)))
    val rdd = sparkSession.sparkContext.parallelize(Seq(Row(12, "John"), Row(21, "Bob")))
  3. Import data to CSS
    val dataFrame_1 = sparkSession.createDataFrame(rdd, schema)
    dataFrame_1.write
      .format("css")
      .option("resource", resource)
      .option("nodes", nodes)
      .mode(SaveMode.Append)
      .save()

    The value of SaveMode can be one of the following:

    • ErrorIfExists: If the data already exists, an exception is thrown.
    • Overwrite: If the data already exists, the original data is overwritten.
    • Append: If the data already exists, the new data is appended.
    • Ignore: If the data already exists, no operation is performed. This is similar to the SQL statement CREATE TABLE IF NOT EXISTS.
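    Combining a write mode with the es.mapping.id option from Table 1 makes retried writes idempotent. A minimal sketch, reusing the resource and nodes values defined above; dataFrame_2 is just an illustrative name, and the exact overwrite semantics depend on the connector version:
    // Hedged sketch: use the "id" column as the Elasticsearch document ID so that
    // re-running a failed job overwrites previously written documents instead of duplicating them.
    val dataFrame_2 = sparkSession.createDataFrame(rdd, schema)
    dataFrame_2.write
      .format("css")
      .option("resource", resource)
      .option("nodes", nodes)
      .option("es.mapping.id", "id") // documents with the same id are overwritten on retry
      .mode(SaveMode.Append)
      .save()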
  4. Read data from CSS
    val dataFrameR = sparkSession.read.format("css").option("resource", resource).option("nodes", nodes).load()
    dataFrameR.show()

    The result is empty before the insert and contains the two inserted rows afterward.
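    If pushdown is enabled (the default, see Table 1), filter conditions on the read can be evaluated on the CSS side. A minimal sketch, reusing the resource and nodes values defined above; whether a given filter is actually pushed down depends on the connector version:
    // Hedged sketch: with 'pushdown' enabled, the filter below can be executed by CSS,
    // reducing the amount of data transferred back to Spark.
    val dataFrameF = sparkSession.read.format("css")
      .option("resource", resource)
      .option("nodes", nodes)
      .option("pushdown", "true")
      .load()
      .filter("id = 12")
    dataFrameF.show()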

Submitting a Spark Job

  1. Generate a JAR package based on the code and upload the package to DLI. For details about console operations, see the Data Lake Insight User Guide. For API references, see Uploading a Resource Package in the Data Lake Insight API Reference.
  2. In the Spark job editor, select the corresponding dependency and execute the Spark job. For details about console operations, see the Data Lake Insight User Guide. For API references, see Creating a Session (Recommended) and Creating a Batch Processing Job in the Data Lake Insight API Reference.
    • When submitting the job, specify the dependency module named sys.datasource.css.
    • For details about how to submit a job on the console, see the Dependency Resources parameter description in Table 6 of the Data Lake Insight User Guide.
    • For details about how to submit a job through an API, see the modules parameter in Table 2 (request parameters) of Creating a Session and Creating a Batch Processing Job in the Data Lake Insight API Reference.
