Updated on 2024-08-30 GMT+08:00

DLI SQL

Functions

The DLI SQL node is used to submit SQL statements to DLI for data source analysis and exploration.

Working Principles

This node enables you to execute DLI SQL statements during periodic or real-time job scheduling. You can use parameter variables to perform incremental imports and process partitions in your data warehouse.
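
For example, the following is a minimal sketch of such an incremental, partition-based load. The table names (sales_src and sales_dw), the column names, and the job parameter ${yesterday} are illustrative assumptions only; in a real job, the parameter value would be supplied by the scheduler, for example through an EL expression that resolves to the previous day's date (the exact EL syntax depends on the functions available in DataArts Factory).

    -- Overwrite only yesterday's partition of the warehouse table.
    -- ${yesterday} is a hypothetical job parameter resolved at run time.
    INSERT OVERWRITE TABLE sales_dw PARTITION (dt = '${yesterday}')
    SELECT order_id, amount
    FROM sales_src
    WHERE dt = '${yesterday}';

Because only the partition referenced by the parameter is rewritten, such a statement can be scheduled periodically without reprocessing historical data.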

Parameters

Table 1, Table 2, and Table 3 describe the parameters of the DLI SQL node.

Table 1 Parameters of DLI SQL nodes

Parameter

Mandatory

Description

SQL Statement or Script

Yes

You can select SQL statement or SQL script.

  • SQL Statement

    Click the text box under SQL statement and enter the SQL statement to be executed.

  • SQL Script

    Select the script to be executed. If the script has not been created yet, create and develop it by referring to Creating a Script and Developing an SQL Script.

    NOTE:

    If you select the SQL statement mode, the DataArts Factory module cannot parse the parameters contained in the SQL statement.

DLI Data Directory

No

Select the DLI data directory.

  • The default DLI data directory dli
  • A metadata catalog that has been created in LakeFormation and associated with DLI

Database Name

Yes

If you select SQL script:

Database that is configured in the SQL script. The value can be changed.

If you select SQL statement:

  • If you select the default DLI data directory dli, select a DLI database and tables.
  • If you select a metadata catalog that has been created in LakeFormation and associated with DLI, select a LakeFormation database and tables.

DLI Environmental Variable

No

  • The environment variable must start with hoodie., dli.sql., dli.ext., dli.jobs., spark.sql., or spark.scheduler.pool.
  • If the key of the environment variable is dli.sql.shuffle.partitions or dli.sql.autoBroadcastJoinThreshold, the environment variable cannot contain the greater than (>) or less than (<) sign.
  • If the key of the environment variable is dli.sql.autoBroadcastJoinThreshold, the value of the key must be an integer. If the key of the environment variable is dli.sql.shuffle.partitions, the value of the key must be a positive integer.
  • If a parameter with the same name is configured in both a job and a script, the parameter value configured in the job will overwrite that configured in the script.
    NOTE:

    User-defined parameters that apply to the job. Currently, the following configuration items are supported (an example of how they can be entered is given after Table 1):

    • dli.sql.autoBroadcastJoinThreshold: specifies the data volume threshold for using BroadcastJoin. If the size of a table involved in the join does not exceed this threshold, the table is automatically broadcast.
    • dli.sql.shuffle.partitions: specifies the number of partitions during shuffling.
    • dli.sql.cbo.enabled: specifies whether to enable the CBO optimization policy.
    • dli.sql.cbo.joinReorder.enabled: specifies whether join reordering is allowed when CBO optimization is enabled.
    • dli.sql.multiLevelDir.enabled: specifies whether to query the content in subdirectories if there are subdirectories in the specified directory of an OBS table or in the partition directory of an OBS partition table. By default, the content in subdirectories is not queried.
    • dli.sql.dynamicPartitionOverwrite.enabled: specifies that only partitions used during data query are overwritten and other partitions are not deleted.

Queue Name

Yes

Name of the DLI queue configured in the SQL script. The value can be changed.

You can create a resource queue using either of the following methods:
  • Click the icon next to this parameter to go to the Queue Management page of DLI and create a resource queue.
  • Go to the DLI console to create a resource queue.
NOTE:
  • During job creation, a sub-user can only select a queue that has been allocated to the user.
  • The Spark component of the default DLI queue is not of the latest version, so an error may be reported indicating that a table creation statement cannot be executed. In this case, you are advised to create a queue to run your tasks. To enable the execution of table creation statements in the default queue, contact the customer service or technical support of the DLI service.
  • The default DLI queue (default) is intended for trial use only and may be occupied by multiple users at the same time, so the resources required for your operations may be unavailable. If the execution takes a long time or fails, you are advised to retry during off-peak hours or run the job on a queue you have created.

Script Parameter

No

If the associated SQL script uses a parameter, the parameter name is displayed. Set the parameter value in the text box next to the parameter name. The parameter value can be an EL expression.

If the parameters of the associated SQL script are changed, click the refresh button to update the parameter list.

Node Name

Yes

Name of the node. The value can be changed. The rules are as follows:

The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

By default, the node name is the same as that of the selected script. If you want the node name to be different from the script name, disable this function by referring to Disabling Auto Node Name Change.

Record Dirty Data

Yes

Specify whether to record dirty data.

  • If you enable this option, dirty data will be recorded.
  • If you do not enable this option, dirty data will not be recorded.
    NOTE:

    Dirty data refers to bad records which cannot be loaded to DLI due to incompatible data types, empty data, or incompatible data formats.

    If you choose to record dirty data, bad records are imported to the OBS path for storing dirty data instead of the target table.

    • If no OBS path for storing DLI dirty data has been configured in the workspace, the dirty data generated during DLI SQL execution is written to the dlf-log-{projectId} bucket by default.
    • To set the path for storing DLI dirty data, go to the Workspaces page and edit the workspace. For details, see Configuring an OBS Bucket.
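
As an illustration of the DLI environment variables described in Table 1, the following sketch shows how the supported configuration items might be entered as key-value pairs (written here as key = value). The specific values, 200 shuffle partitions and a broadcast threshold of 26214400 bytes (25 MB), are assumptions for demonstration rather than recommendations; tune them for your queue and data volume.

    dli.sql.shuffle.partitions = 200
    dli.sql.autoBroadcastJoinThreshold = 26214400
    dli.sql.dynamicPartitionOverwrite.enabled = true

If the same keys are also configured in the associated script, the values set on the job node take precedence, as noted in Table 1.
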
Table 2 Advanced parameters

Parameter

Mandatory

Description

Node Status Polling Interval (s)

Yes

How often the system checks whether the node execution is complete. The value ranges from 1 to 60 seconds.

Max. Node Execution Duration

Yes

Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will be executed again.

Retry upon Failure

Yes

Whether to re-execute a node if it fails to be executed. Possible values:

  • Yes: The node will be re-executed, and the following parameters must be configured:
    • Retry upon Timeout
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node will not be re-executed. This is the default setting.
    NOTE:

    If retry is configured for a job node and the timeout duration is configured, the system allows you to retry a node when the node execution times out.

    If a node is not re-executed when it fails upon timeout, you can go to the Default Configuration page to modify this policy.

    Retry upon Timeout is displayed only when Retry upon Failure is set to Yes.

Policy for Handling Subsequent Nodes If the Current Node Fails

Yes

Operation that will be performed if the node fails to be executed. Possible values:

  • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.
  • End the current job execution plan: stops running the current job. The job instance status is Failed.
  • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
  • Suspend the current job execution plan: If the current job instance is in an abnormal state, the subsequent nodes of this node and the subsequent job instances that depend on the current job enter the waiting state.

Enable Dry Run

No

If you select this option, the node will not be executed, and a success message will be returned.

Task Groups

No

Select a task group. If you select a task group, you can control the maximum number of concurrent nodes in the task group in a fine-grained manner in scenarios where a job contains multiple nodes, a data patching task is ongoing, or a job is rerunning.

Table 3 Lineage

Parameter

Description

Input

Add

Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.

OK

Click OK to save the parameter settings.

Cancel

Click Cancel to cancel the parameter settings.

Modify

Click to modify the parameter settings. After the modification, save the settings.

Delete

Click to delete the parameter settings.

View Details

Click to view details about the table created based on the input lineage.

Output

Add

Click Add. In the Type drop-down list, select the type to be created. The value can be DWS, OBS, CSS, HIVE, DLI, or CUSTOM.

OK

Click OK to save the parameter settings.

Cancel

Click Cancel to cancel the parameter settings.

Modify

Click to modify the parameter settings. After the modification, save the settings.

Delete

Click to delete the parameter settings.

View Details

Click to view details about the table created based on the output lineage.