Updated on 2022-02-22 GMT+08:00

DLI Spark

Functions

The DLI Spark node is used to execute a predefined Spark job.

Parameters

Table 1 and Table 2 describe the parameters of the DLI Spark node.

Table 1 Parameters of DLI Spark nodes

Node Name (mandatory)
  Name of the node. It must consist of 1 to 128 characters and contain only letters, digits, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

DLI Queue (mandatory)
  Select a queue from the drop-down list box.

Job Name (mandatory)
  Name of the DLI Spark job. It must consist of 1 to 64 characters and contain only letters, digits, and underscores (_). The default value is the same as the node name.

Job Running Resources (optional)
  Resource specifications for running the job:
  • 8-core, 32 GB memory
  • 16-core, 64 GB memory
  • 32-core, 128 GB memory

Major Job Class (mandatory)
  Main class of the DLI Spark job, that is, the main class of the JAR package. A minimal example of such a class is sketched after this table.

Spark program resource package (mandatory)
  JAR package of the user-defined Spark application. Before selecting a resource package, upload the JAR package and its dependency packages to the OBS bucket and create resources on the resource management page. For details, see Creating a Resource.

Major-Class Entry Parameters (optional)
  Entry parameters of the program. Press Enter to separate multiple parameters. They are passed, in order, to the main class as its args array (see the sketch after this table).

Spark Job Running Parameters (optional)
  Runtime parameters of the job, entered as key-value pairs (for example, the standard Spark property spark.executor.memory). Press Enter to separate multiple pairs. For details about the supported parameters, see Spark Configuration.

Module Name (optional)
  Dependency modules provided by DLI for executing datasource connection jobs. Select the module that matches the service to be accessed:
  • CloudTable/MRS HBase: sys.datasource.hbase
  • CloudTable/MRS OpenTSDB: sys.datasource.opentsdb
  • RDS MySQL: sys.datasource.rds
  • RDS PostGre: sys.datasource.rds
  • DWS: sys.datasource.dws
  • CSS: sys.datasource.css
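
For orientation, the following is a minimal sketch of a user-defined Spark application that could back this node, written in Scala against the standard Spark API. The package name com.example.dli, the object name WordCount, and the OBS input path are illustrative assumptions, not values from this document; only the SparkSession calls are standard Spark.

  package com.example.dli

  import org.apache.spark.sql.SparkSession

  // Minimal sketch of a DLI Spark job main class. The package, object,
  // and OBS path are hypothetical; only the Spark APIs are standard.
  object WordCount {
    def main(args: Array[String]): Unit = {
      // Major-Class Entry Parameters configured on the node arrive here, in order.
      val inputPath = if (args.nonEmpty) args(0) else "obs://my-bucket/input/"

      val spark = SparkSession.builder()
        .appName("dli-spark-node-demo")
        .getOrCreate()
      import spark.implicits._

      // Count word occurrences in the input text and print the result.
      spark.read.textFile(inputPath)
        .flatMap(_.split("\\s+"))
        .filter(_.nonEmpty)
        .groupBy("value")
        .count()
        .show()

      spark.stop()
    }
  }

With this JAR uploaded to OBS and registered as a resource, Major Job Class would be set to com.example.dli.WordCount, and a single entry parameter such as obs://my-bucket/input/ would be delivered to the job through args(0).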
Table 2 Advanced parameters

Node Status Polling Interval (s) (mandatory)
  Specifies how often the system checks whether the node task is complete. The value ranges from 1 to 60 seconds.

Max. Node Execution Duration (mandatory)
  Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will not be retried and is set to the failed state.

Retry upon Failure (mandatory)
  Indicates whether to re-execute a node task if its execution fails. Possible values:
  • Yes: The node task will be re-executed, and the following parameters must be configured:
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node task will not be re-executed. This is the default setting.
  NOTE: If Timeout Interval is configured for the node, the node will not be executed again after the execution times out. Instead, it is set to the failed state.

Failure Policy (mandatory)
  Operation to be performed if the node task fails to be executed. Possible values:
  • End the current job execution plan
  • Go to the next job
  • Suspend the current job execution plan
  • Suspend execution plans of the current and subsequent nodes