Updated on 2025-04-28 GMT+08:00

Notebook

Functions

The Notebook node is used to execute a Notebook job predefined in DLI.

Constraints

This function depends on OBS.

Parameters

Table 1 and Table 2 describe the parameters of the Notebook node.

Table 1 Parameters of the Notebook node

Node Name

Mandatory: Yes

Name of a node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

Spark Job Name

Mandatory: Yes

Name of the DLI Spark job. The name must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.

Data Lake Insight Queue

Mandatory: Yes

Select a queue from the drop-down list box.

NOTE:
  • During job creation, a sub-user can only select a queue that has been allocated to the user.
  • The default DLI queue runs an older version of the Spark component, so an error may be reported indicating that a table creation statement cannot be executed. In this case, you are advised to create a queue to run your tasks. To enable the execution of table creation statements in the default queue, contact DLI customer service or technical support.
  • The default queue (default) of DLI is intended for trial use only. It may be occupied by multiple users at a time, so you may fail to obtain resources for related operations. If an execution takes a long time or fails, you are advised to retry during off-peak hours or run the job on a self-built queue.

Job Type

Mandatory: No

You can set this parameter as needed after selecting a DLI queue.

Type of the Spark image used by the job. The following options are available:

  • Basic
  • Image

    If you select Image, select an image. Its version is automatically displayed. You can create images by following the instructions in Image Management.

Spark Version

Mandatory: Yes

This parameter is mandatory when a DLI queue is selected.

Select a Spark version.

  • 3.3.1
  • 3.1.1

    When scheduling notebook files in DataArts Factory, you can run the files only on DLI Spark 3.3.1.

Job Running Resources

Mandatory: No

Select the running resource specifications of the job.

  • 8-core, 32 GB memory
  • 16-core, 64 GB memory
  • 32-core, 128 GB memory

Input Directory

Mandatory: Yes

Select a path in the OBS bucket for running the notebook file. The absolute path of the input directory can contain a maximum of 1,024 characters.

Input Notebook File

Mandatory: Yes

Select the notebook file in the OBS input directory. The file is in .ipynb format. The absolute path can contain a maximum of 2,048 characters.

Notebook File Output Directory

Mandatory: Yes

Select a path in the OBS bucket for storing the running result of the notebook file. The absolute path of the output directory can contain a maximum of 1,024 characters.

Output Notebook File Name

Mandatory: Yes

Enter the name of the output notebook file. The name can contain a maximum of 256 characters. The file is in .ipynb format.
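For reference, a complete set of the four path parameters might look as follows (the bucket, directory, and file names here are hypothetical):

  • Input Directory: obs://my-bucket/notebooks/input/
  • Input Notebook File: obs://my-bucket/notebooks/input/daily_report.ipynb
  • Notebook File Output Directory: obs://my-bucket/notebooks/output/
  • Output Notebook File Name: daily_report_result.ipynb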

Input Notebook Job Parameters

Mandatory: No

Configure the parameters for running the notebook job.
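The input notebook, output notebook, and job parameters together follow the common parameterized-notebook execution pattern. As a conceptual illustration of that pattern only (not DLI's internal implementation), the open-source papermill library runs the same model; the file names and parameter names below are hypothetical:

    import papermill as pm

    # Run the input notebook and write the executed result to the output
    # notebook. The parameters are injected into the notebook before it runs.
    pm.execute_notebook(
        "daily_report.ipynb",           # input notebook file (.ipynb)
        "daily_report_result.ipynb",    # output notebook file (.ipynb)
        parameters={"run_date": "2025-04-28", "threshold": 0.8},  # hypothetical
    )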

Spark Parameters

Mandatory: No

Enter parameters in the format key=value and press Enter to separate multiple key-value pairs (see the example after the note below). For details about the supported parameters, see Spark Configuration.

These parameters can be replaced by global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can enter spark.sql.catalog={{custom_class}} so that the variable's value is substituted for the parameter when the job is submitted.

NOTE:

The JVM garbage collection algorithm cannot be customized for Spark jobs.
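For example, the following entries set a standard Spark property and reference the global variable described above (spark.executor.memory is shown only as an illustrative property):

  spark.executor.memory=4g
  spark.sql.catalog={{custom_class}}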

Table 2 Advanced parameters

Node Status Polling Interval (s)

Mandatory: Yes

How often the system checks whether the node execution is complete. The value ranges from 1 to 60 seconds.

Max. Node Execution Duration

Mandatory: Yes

Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will be executed again.

Retry upon Failure

Mandatory: Yes

Whether to re-execute a node if it fails to be executed. Possible values:

  • Yes: The node will be re-executed, and the following parameters must be configured:
    • Retry upon Timeout
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node will not be re-executed. This is the default setting.
    NOTE:

    If both retry and a timeout duration are configured for a job node, the system allows the node to be retried when its execution times out.

    If a node is not re-executed when it fails upon timeout, you can go to the Default Configuration page to modify this policy.

    Retry upon Timeout is displayed only when Retry upon Failure is set to Yes.
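For example (an illustrative configuration, not a recommendation): if Max. Node Execution Duration is 30 minutes, Retry upon Failure and Retry upon Timeout are set to Yes, Maximum Retries is 2, and Retry Interval (seconds) is 120, a node that fails or times out is re-executed up to twice, with a 120-second wait before each retry.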

Policy for Handling Subsequent Nodes If the Current Node Fails

Mandatory: Yes

Operation that will be performed if the node fails to be executed. Possible values:

  • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.
  • End the current job execution plan: stops running the current job. The job instance status is Failed.
  • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
  • Suspend the current job execution plan: If the current job instance is in an abnormal state, the subsequent nodes of this node and the subsequent job instances that depend on the current job enter the waiting state.

Enable Dry Run

Mandatory: No

If you select this option, the node will not be executed, and a success message will be returned.

Task Groups

Mandatory: No

Select a task group. A task group lets you fine-tune the maximum number of concurrent nodes within it, which is useful when a job contains multiple nodes, a data patching task is ongoing, or a job is rerunning.