Updated on 2024-04-29 GMT+08:00

DLI Spark

Functions

The DLI Spark node is used to execute a predefined Spark job.

For details about how to use the DLI Spark node, see Developing a DLI Spark Job.

Parameters

Table 1, Table 2, and Table 3 describe the parameters of the DLI Spark node.

Table 1 Parameters of DLI Spark nodes

Parameter

Mandatory

Description

Node Name

Yes

Name of a node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

DLI Queue

Yes

Select a queue from the drop-down list box.

NOTE:
  • During job creation, a sub-user can only select a queue that has been allocated to the user.
  • The Spark component of the default DLI queue is not the latest version, so an error may be reported indicating that a table creation statement cannot be executed. In this case, you are advised to create a queue to run your jobs. To enable the execution of table creation statements in the default queue, contact DLI customer service or technical support.
  • The DLI queue named default is for trial use only. It may be shared by multiple users at the same time, so you may fail to obtain the resources for related operations. If an execution takes a long time or fails, you are advised to retry during off-peak hours or run the job on a self-built queue.

Spark Version

No

Select the version of the Spark component. If there is no specific requirement on the version, use the default version 2.3.2.

Job Type

No

Type of the Spark image used by the job. The following options are available: Basic, AI-enhanced, and Image.

If you select Image, you need to set the image name and version. This parameter is available only when the DLI queue is a containerized queue.

A custom image is a DLI feature. You can start from the basic Spark or Flink images provided by DLI, use a Dockerfile to pack the required dependencies (files, JAR packages, or software) into a custom image, and publish the image to SWR. Then select the generated image when running the job.

Custom images can change the container runtime environments of Spark and Flink jobs. You can embed private capabilities into custom images to enhance the functions and performance of jobs.

Job Name

Yes

Name of the DLI Spark job. The name must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.

Job Running Resources

No

Select the running resource specifications of the job.

  • 8-core, 32 GB memory
  • 16-core, 64 GB memory
  • 32-core, 128 GB memory

Major Job Class

Yes

Name of the major class of the Spark job. When the application is a JAR package, the main class name cannot be empty.
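
The value is the fully qualified name of the class whose main method starts the job. The following is a minimal sketch of such a class; the package, object, and path names (com.example.dli.WordCount, obs://my-bucket/...) are hypothetical examples, not values defined by DLI.

  package com.example.dli

  import org.apache.spark.sql.SparkSession

  // Hypothetical main class. Its fully qualified name, com.example.dli.WordCount,
  // is what you would enter as the Major Job Class.
  object WordCount {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("dli-spark-demo")
        .getOrCreate()

      // Major-Class Entry Parameters (see below) arrive here as args, in the order listed.
      val inputPath = if (args.nonEmpty) args(0) else "obs://my-bucket/input/" // hypothetical path

      // Simple word count over the input text files.
      val counts = spark.read.textFile(inputPath)
        .selectExpr("explode(split(value, ' ')) AS word")
        .groupBy("word")
        .count()

      counts.show(20)
      spark.stop()
    }
  }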

Spark program resource package

Yes

JAR file on which the Spark job depends. You can enter the JAR package name or the corresponding OBS path. The format is as follows: obs://Bucket name/Folder name/Package name. Before selecting a resource package, upload the JAR package and its dependency packages to the OBS bucket and create resources on the Manage Resource page. For details, see Creating a Resource.

Resource Type

Yes

Select OBS path or DLI program package.

  • OBS path: The resource package file is not uploaded to the DLI resource management system before the job is executed. The OBS path where the file is located is part of the message body for starting the job. This type is recommended.
  • DLI package: The resource package file will be uploaded to the DLI resource management system before the job is executed.

Group

No

This parameter is mandatory when Resource Type is set to DLI program package. You can select Use existing, Create new, or Do not use.

Group Name

No

This parameter is mandatory when Resource Type is set to DLI program package. The value depends on the Group setting:

  • Use existing: Select an existing group.
  • Create new: Enter a user-defined group name.
  • Do not use: Do not select or enter a group name.

Major-Class Entry Parameters

No

User-defined parameters. Separate multiple parameters by pressing Enter (one parameter per line).

These parameters can be replaced by global variables. For example, if you create a global variable batch_num on the Global Configuration > Global Variables page, you can use {{batch_num}} to replace a parameter with this variable after the job is submitted.
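
As a hedged illustration (the values are hypothetical), if the entry parameters are entered as two lines, 2024-04-01 and {{batch_num}}, the main class receives them in that order through its args array:

  object EntryArgsDemo {
    def main(args: Array[String]): Unit = {
      val runDate  = args(0) // first entry parameter, e.g. "2024-04-01"
      val batchNum = args(1) // second entry parameter, the value substituted for {{batch_num}} at submission time
      println(s"runDate=$runDate, batchNum=$batchNum")
    }
  }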

Spark Job Running Parameters

No

Enter parameters in key=value format and press Enter to separate multiple key-value pairs. For details about the parameters, see Spark Configuration. A sample configuration is shown after the note below.

These parameters can be replaced by global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can use "spark.sql.catalog"={{custom_class}} to replace a parameter with this variable after the job is submitted.

NOTE:

The JVM garbage collection algorithm cannot be customized for Spark jobs.
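
For example (the values are illustrative only), commonly used Spark configuration properties can be entered one per line:

  spark.executor.memory=4g
  spark.executor.cores=2
  spark.driver.memory=2g
  spark.sql.shuffle.partitions=200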

Module Name

No

Dependency modules provided by DLI for executing datasource connection jobs. To access different services, select the corresponding modules (a usage sketch follows the module lists below).

  • CloudTable/MRS HBase: sys.datasource.hbase
  • DDS: sys.datasource.mongo
  • CloudTable/MRS OpenTSDB: sys.datasource.opentsdb
  • DWS: sys.datasource.dws
  • RDS MySQL: sys.datasource.rds
  • RDS PostgreSQL: sys.datasource.rds
  • DCS: sys.datasource.redis
  • CSS: sys.datasource.css

DLI internal modules include:

  • sys.res.dli-v2
  • sys.res.dli
  • sys.datasource.dli-inner-table
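
For example, after the sys.datasource.rds module is selected for an RDS for MySQL source, the job can read a table through Spark's standard JDBC data source. This is only a hedged sketch: the connection address, database, table, and credentials are hypothetical, and the actual datasource configuration depends on your DLI and network setup.

  import org.apache.spark.sql.SparkSession

  object RdsReadDemo {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("rds-read-demo").getOrCreate()

      // Hypothetical RDS for MySQL connection details.
      val df = spark.read
        .format("jdbc")
        .option("url", "jdbc:mysql://192.168.0.100:3306/demo_db")
        .option("dbtable", "demo_table")
        .option("user", "demo_user")
        .option("password", "********")
        .load()

      df.show()
      spark.stop()
    }
  }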

Metadata Access

Yes

Whether to access metadata through Spark jobs. For details, see Using the Spark Job to Access DLI Metadata.
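
When metadata access is enabled, the job can typically reference DLI databases and tables directly in Spark SQL. The following is only a hedged sketch with hypothetical database and table names; the exact configuration required is described in Using the Spark Job to Access DLI Metadata.

  import org.apache.spark.sql.SparkSession

  object MetadataDemo {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("metadata-demo").getOrCreate()
      // Hypothetical DLI database and table.
      spark.sql("SELECT * FROM demo_db.demo_table LIMIT 10").show()
      spark.stop()
    }
  }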

Table 2 Advanced parameters

Parameter

Mandatory

Description

Node Status Polling Interval (s)

Yes

How often the system checks whether the node execution is complete. The value ranges from 1 to 60 seconds.

Max. Node Execution Duration

Yes

Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will be executed again.

Retry upon Failure

Yes

Whether to re-execute a node if it fails to be executed. Possible values:

  • Yes: The node will be re-executed, and the following parameters must be configured:
    • Retry upon Timeout
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node will not be re-executed. This is the default setting.
    NOTE:

    If both retry and a timeout duration are configured for a node, the node can be retried when its execution times out.

    If a node is not re-executed after it fails upon timeout, you can go to the Default Configuration page to modify this policy.

    Retry upon Timeout is displayed only when Retry upon Failure is set to Yes.

Policy for Handling Subsequent Nodes If the Current Node Fails

Yes

Operation that will be performed if the node fails to be executed. Possible values:

  • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.
  • End the current job execution plan: stops running the current job. The job instance status is Failed.
  • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
  • Suspend the current job execution plan: If the current job instance is abnormal, the subsequent nodes of this node and the subsequent job instances that depend on the current job are set to the waiting state.

Enable Dry Run

No

If you select this option, the node will not be executed, and a success message will be returned.

Table 3 Lineage

Parameter

Description

Input

Add

Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.

OK

Click OK to save the parameter settings.

Cancel

Click Cancel to cancel the parameter settings.

Modify

Click to modify the parameter settings. After the modification, save the settings.

Delete

Click to delete the parameter settings.

View Details

Click to view details about the table created based on the input lineage.

Output

Add

Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.

OK

Click OK to save the parameter settings.

Cancel

Click Cancel to cancel the parameter settings.

Modify

Click to modify the parameter settings. After the modification, save the settings.

Delete

Click to delete the parameter settings.

View Details

Click to view details about the table created based on the output lineage.