Updated on 2023-06-14 GMT+08:00

DLI Spark

Function

The DLI Spark node is used to execute a predefined Spark job.

Parameters

Table 1, Table 2, and Table 3 describe the parameters of the DLI Spark node.

Table 1 Parameters of DLI Spark nodes

Parameter

Mandatory

Description

Node Name

Yes

Name of the node. The value must consist of 1 to 128 characters and contain only letters, digits, and the following special characters: _-/<>

DLI Queue

Yes

Select a queue from the drop-down list box.

Custom Image

No

Select a custom image and the corresponding version. This parameter is available only when the DLI queue is a containerized queue.

Custom images are a DLI feature. Starting from the basic Spark or Flink images provided by DLI, you can use a Dockerfile to pack the required dependencies (files, JAR packages, or software) into a custom image and release it to SWR. Then select the generated image when running the job.

Custom images can change the container runtime environments of Spark and Flink jobs. You can embed private capabilities into custom images to enhance the functions and performance of jobs.

Job Name

Yes

Name of the DLI Spark job. The name must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.

Job Running Resources

No

Select the running resource specifications of the job.

  • 8-core, 32 GB memory
  • 16-core, 64 GB memory
  • 32-core, 128 GB memory

Major Job Class

Yes

Name of the main class of the Spark job. When the application is a JAR package, the main class name cannot be empty.
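For reference, a minimal sketch of such a main class in Scala is shown below (the package, object, and bucket names are hypothetical). The value to enter in this field would be the fully qualified class name, com.example.dli.WordCount:

  package com.example.dli

  import org.apache.spark.sql.SparkSession

  // Minimal Spark application. Its fully qualified name
  // (com.example.dli.WordCount) is what you enter as the major job class.
  object WordCount {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("WordCount").getOrCreate()
      // Major-class entry parameters configured on the node arrive as args.
      val input = if (args.nonEmpty) args(0) else "obs://your-bucket/input/"
      spark.sparkContext.textFile(input)
        .flatMap(_.split("\\s+"))
        .map(word => (word, 1))
        .reduceByKey(_ + _)
        .take(10)
        .foreach(println)
      spark.stop()
    }
  }

After packaging this class into a JAR file, upload it to OBS and reference it in Spark Program Resource Package below.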

Spark Program Resource Package

Yes

JAR file on which the Spark job depends. You can enter the JAR package name or the corresponding OBS path in the following format: obs://Bucket name/Folder name/Package name. Before selecting a resource package, upload the JAR package and its dependency packages to the OBS bucket and create resources on the Manage Resource page. For details, see Creating a Resource.

Resource Type

Yes

Select OBS path or DLI program package.

  • OBS path: The resource package file is not uploaded to the DLI resource management system before the job is executed. The OBS path where the file is located is part of the message body for starting the job. This type is recommended.
  • DLI program package: The resource package file is uploaded to the DLI resource management system before the job is executed.

Group

No

This parameter is available only when Resource Type is set to DLI program package. You can select Use existing, Create new, or Do not use.

Group Name

No

This parameter is available only when Resource Type is set to DLI program package. The value depends on the Group setting:

  • Use existing: Select an existing group.
  • Create new: Enter a user-defined group name.
  • Do not use: Do not select or enter a group name.

Major-Class Entry Parameters

No

User-defined parameters of the main class. Press Enter to separate multiple parameters.

These parameters can be replaced by global variables. For example, if you create a global variable batch_num on the Global Configuration > Global Variables page, you can use {{batch_num}} to replace a parameter with this variable after the job is submitted.
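For illustration, entry parameters might look like the following (values are hypothetical), with each line delivered to the main class as one element of args, in order:

  obs://your-bucket/input/
  {{batch_num}}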

Spark Job Running Parameters

No

Enter a parameter in the key=value format. Press Enter to separate multiple key-value pairs. For details about the parameters, see Spark Configuration.

These parameters can be replaced by global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can use "spark.sql.catalog"={{custom_class}} to replace a parameter with this variable after the job is submitted.

NOTE:

The JVM garbage collection algorithm cannot be customized for Spark jobs.
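For illustration, a few commonly used Spark parameters in this format (values are examples only; see Spark Configuration for the supported keys):

  spark.executor.memory=4G
  spark.executor.cores=2
  spark.sql.shuffle.partitions=200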

Module Name

No

Dependency modules provided by DLI for executing datasource connection jobs. To access different services, you need to select different modules.

  • CloudTable/MRS HBase: sys.datasource.hbase
  • DDS: sys.datasource.mongo
  • CloudTable/MRS OpenTSDB: sys.datasource.opentsdb
  • DWS: sys.datasource.dws
  • RDS MySQL: sys.datasource.rds
  • RDS PostgreSQL: sys.datasource.rds
  • DCS: sys.datasource.redis
  • CSS: sys.datasource.css

DLI internal modules include:

  • sys.res.dli-v2
  • sys.res.dli
  • sys.datasource.dli-inner-table

Metadata Access

Yes

Whether to access metadata through Spark jobs. For details, see section "Using the Spark Job to Access DLI Metadata" in Data Lake Insight Developer Guide.

Table 2 Advanced parameters

Parameter

Mandatory

Description

Node Status Polling Interval (s)

Yes

How often the system checks whether the node task is complete. The value ranges from 1 to 60 seconds.

Max. Node Execution Duration

Yes

Maximum duration for executing the node. If Retry upon Failure is set to Yes for the node, the node can be re-executed repeatedly upon an execution failure, within this maximum duration.

Retry upon Failure

Yes

Whether to re-execute the node if it fails to be executed.

  • Yes: The node will be re-executed if it fails. The following parameters must also be configured:
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node will not be re-executed. This is the default setting.
NOTE:

If Max. Node Execution Duration is configured for the node, the node will not be re-executed after the execution times out. Instead, the node is set to the failed state.

Failure Policy

Yes

Policy to apply after the node fails to be executed:

  • End the current job execution plan: stops running the current job. The job instance status is Failed.
  • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
  • Suspend current job execution plan: suspends running the current job. The job instance status is Waiting.
  • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.

Dry run

No

If you select this option, the node will not be executed, and a success message will be returned.

Table 3 Lineage

Parameter

Description

Input

Add

Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.

  • DWS
    • Connection Name: Select a DWS data connection in the displayed dialog box.
    • Database: Select a DWS database in the displayed dialog box.
    • Schema: Select a DWS schema in the displayed dialog box.
    • Table Name: Select a DWS table in the displayed dialog box.
  • OBS
    • Path: Select an OBS path in the displayed dialog box.
  • CSS
    • Cluster Name: Select a CSS cluster in the displayed dialog box.
    • Index: Enter a CSS index name.
  • HIVE
    • Connection Name: Select a HIVE data connection in the displayed dialog box.
    • Database: Select a HIVE database in the displayed dialog box.
    • Table Name: Select a HIVE table in the displayed dialog box.
  • CUSTOM
    • Name: Enter a name of the CUSTOM type.
    • Attribute: Enter an attribute of the CUSTOM type. You can add more than one attribute.
  • DLI
    • Connection Name: Select a DLI data connection in the displayed dialog box.
    • Database: Select a DLI database in the displayed dialog box.
    • Table Name: Select a DLI table in the displayed dialog box.

OK

Click OK to save the parameter settings.

Cancel

Click Cancel to cancel the parameter settings.

Modify

Click to modify the parameter settings. After the modification, save the settings.

Delete

Click to delete the parameter settings.

View Details

Click to view details about the table created based on the input lineage.

Output

Add

Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.

  • DWS
    • Connection Name: Select a DWS data connection in the displayed dialog box.
    • Database: Select a DWS database in the displayed dialog box.
    • Schema: Select a DWS schema in the displayed dialog box.
    • Table Name: Select a DWS table in the displayed dialog box.
  • OBS
    • Path: Select an OBS path in the displayed dialog box.
  • CSS
    • Cluster Name: Select a CSS cluster in the displayed dialog box.
    • Index: Enter a CSS index name.
  • HIVE
    • Connection Name: Select a HIVE data connection in the displayed dialog box.
    • Database: Select a HIVE database in the displayed dialog box.
    • Table Name: Select a HIVE table in the displayed dialog box.
  • CUSTOM
    • Name: Enter a name of the CUSTOM type.
    • Attribute: Enter an attribute of the CUSTOM type. You can add more than one attribute.
  • DLI
    • Connection Name: Select a DLI data connection in the displayed dialog box.
    • Database: Select a DLI database in the displayed dialog box.
    • Table Name: Select a DLI table in the displayed dialog box.

OK

Click OK to save the parameter settings.

Cancel

Click Cancel to cancel the parameter settings.

Modify

Click to modify the parameter settings. After the modification, save the settings.

Delete

Click to delete the parameter settings.

View Details

Click to view details about the table created based on the output lineage.