DLI Spark
Functions
The DLI Spark node is used to execute a predefined Spark job.
For details about how to use the DLI Spark node, see Developing a DLI Spark Job.
Parameters
Table 1, Table 2, and Table 3 describe the parameters of the DLI Spark node.
Table 1 Parameters of DLI Spark nodes

Parameter | Mandatory | Description
---|---|---
Node Name | Yes | Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).
DLI Queue | Yes | Select a queue from the drop-down list box.
Spark Version | No | Select the version of the Spark component. If there is no specific requirement on the version, use the default version 2.3.2.
Job Type | No | Type of the Spark image used by the job. The options are Basic, AI-enhanced, and Image. If you select Image, you must set the image name and version. This parameter is available only when the DLI queue is a containerized queue. A custom image is a DLI feature: starting from the Spark or Flink basic images provided by DLI, you can package the required dependencies (files, JAR packages, or software) into an image using a Dockerfile, release the custom image to SWR, and then select the generated image when running the job. Custom images can change the container runtime environments of Spark and Flink jobs, and you can embed private capabilities into them to enhance job functions and performance.
Job Name | Yes | Name of the DLI Spark job. The name must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.
Job Running Resources | No | Select the running resource specifications of the job.
Major Job Class | Yes | Name of the major class of the Spark job. When the application type is .jar, the main class name cannot be empty. (An illustrative sketch follows this table.)
Spark program resource package | Yes | JAR file on which the Spark job depends. You can enter the JAR package name or the corresponding OBS path in the format obs://Bucket name/Folder name/Package name. Before selecting a resource package, upload the JAR package and its dependency packages to the OBS bucket and create resources on the Manage Resource page. For details, see Creating a Resource.
Resource Type | Yes | Select OBS path or DLI program package.
Group | No | This parameter is mandatory when Resource Type is set to DLI program package. You can select Use existing, Create new, or Do not use.
Group Name | No | This parameter is mandatory when Resource Type is set to DLI program package.
Major-Class Entry Parameters | No | User-defined parameters. Press Enter to separate multiple parameters. These parameters can be replaced by global variables. For example, if you create a global variable batch_num on the Global Configuration > Global Variables page, you can use {{batch_num}} to replace a parameter with this variable after the job is submitted.
Spark Job Running Parameters | No | Enter parameters in the key=value format. Press Enter to separate multiple key-value pairs. For details about the parameters, see Spark Configuration. These parameters can be replaced by global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can use "spark.sql.catalog"={{custom_class}} to replace a parameter with this variable after the job is submitted. NOTE: The JVM garbage collection algorithm cannot be customized for Spark jobs.
Module Name | No | Dependency modules provided by DLI for executing datasource connection jobs. To access different services, select different modules.
Metadata Access | Yes | Whether to access metadata through Spark jobs. For details, see Using the Spark Job to Access DLI Metadata.
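For orientation, the sketch below shows what a minimal main class for a DLI Spark JAR job might look like. All names are assumptions for illustration only: com.example.WordCount stands in for whatever you enter as Major Job Class, the JAR built from it is the Spark program resource package you upload to OBS, and values entered under Major-Class Entry Parameters reach the class as the args array. Spark Job Running Parameters (for example, spark.executor.memory=4G) are applied as Spark configuration at submission time rather than in the code itself.

```scala
package com.example

import org.apache.spark.sql.SparkSession

// Hypothetical example: "com.example.WordCount" is the value that would go into
// the Major Job Class field, and the JAR built from this file is the Spark
// program resource package uploaded to OBS.
object WordCount {
  def main(args: Array[String]): Unit = {
    // Major-Class Entry Parameters configured on the node arrive here as args.
    // The OBS path below is a placeholder, not a real bucket.
    val inputPath = if (args.nonEmpty) args(0) else "obs://example-bucket/input/"

    val spark = SparkSession.builder()
      .appName("WordCount")
      .getOrCreate()

    import spark.implicits._

    // Count word occurrences in the input text files.
    val counts = spark.read
      .textFile(inputPath)
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .groupBy("value")
      .count()

    counts.show(20, truncate = false)
    spark.stop()
  }
}
```

After packaging a class like this into a JAR and uploading it to OBS (see Creating a Resource), you would set Major Job Class to the fully qualified class name (here, com.example.WordCount) and pass the input path as a Major-Class Entry Parameter.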
Table 2 Advanced parameters

Parameter | Mandatory | Description
---|---|---
Node Status Polling Interval (s) | Yes | How often the system checks whether the node execution is complete. The value ranges from 1 to 60 seconds.
Max. Node Execution Duration | Yes | Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will be executed again.
Retry upon Failure | Yes | Whether to re-execute a node if it fails to be executed.
Policy for Handling Subsequent Nodes If the Current Node Fails | Yes | Operation that will be performed if the node fails to be executed.
Enable Dry Run | No | If you select this option, the node will not be executed, and a success message will be returned.
Task Groups | No | Select a task group. If you select a task group, you can control the maximum number of concurrent nodes in the task group in a fine-grained manner in scenarios where a job contains multiple nodes, a data patching task is ongoing, or a job is rerunning.
Table 3 Lineage parameters

Parameter | Description
---|---
Input | 
Add | Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.
OK | Click OK to save the parameter settings.
Cancel | Click Cancel to cancel the parameter settings.
Modify | Click to modify the parameter settings. After the modification, save the settings.
Delete | Click to delete the parameter settings.
View Details | Click to view details about the table created based on the input lineage.
Output | 
Add | Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.
OK | Click OK to save the parameter settings.
Cancel | Click Cancel to cancel the parameter settings.
Modify | Click to modify the parameter settings. After the modification, save the settings.
Delete | Click to delete the parameter settings.
View Details | Click to view details about the table created based on the output lineage.