Updated on 2025-08-06 GMT+08:00

Submitting a SQL Job (Recommended)

Function

This API is used to submit jobs to a queue using SQL statements.

Supported job types include DDL, DCL, IMPORT, QUERY, and INSERT. The IMPORT function is the same as that described in Importing Data (Deprecated); only the implementation differs.

Additionally, you can use other APIs to query and manage jobs.

This API is synchronous if job_type in the response message is DCL.

URI

  • URI format

    POST /v1.0/{project_id}/jobs/submit-job

  • Parameter description
    Table 1 URI parameter

    Parameter

    Mandatory

    Type

    Description

    project_id

    Yes

    String

    Definition

    Project ID, which is used for resource isolation. For how to obtain a project ID, see Obtaining a Project ID.

    Example: 48cc2c48765f481480c7db940d6409d1

    Constraints

    None

    Range

    The value can contain up to 64 characters. Only letters and digits are allowed.

    Default Value

    None
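As an illustration only, the following Python sketch shows how the URI might be called with the requests library. The endpoint host, the token placeholder, and the X-Auth-Token header are assumptions made for this sketch and are not defined in this section; only the method and path come from the URI format above.

    import requests

    # Minimal sketch, not an official client. The endpoint host and the
    # X-Auth-Token header are assumptions; substitute the values used in
    # your environment.
    project_id = "48cc2c48765f481480c7db940d6409d1"   # sample project ID from Table 1
    endpoint = "https://dli.example.com"              # assumed endpoint host
    url = f"{endpoint}/v1.0/{project_id}/jobs/submit-job"

    payload = {
        "sql": "desc table1",
        "currentdb": "db1",
        "queue_name": "default",
    }

    response = requests.post(
        url,
        json=payload,
        headers={"X-Auth-Token": "<token>"},          # assumed authentication header
        timeout=30,
    )
    print(response.status_code, response.json())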

Request Parameters

Table 2 Request parameters

Parameter

Mandatory

Type

Description

sql

Yes

String

Definition

SQL statement to be executed

Constraints

None

Range

None

Default Value

None

currentdb

No

String

Definition

Database where the SQL statement is executed. This parameter does not need to be configured during database creation.

Constraints

None

Range

The value cannot contain only digits or start with an underscore (_). Only digits, letters, and underscores (_) are allowed.

Default Value

None

queue_name

No

String

Definition

Name of the queue to which the job is to be submitted

Constraints

None

Range

The name cannot contain only digits or start with an underscore (_). Only digits, letters, and underscores (_) are allowed.

Default Value

None

conf

No

Array of strings

Definition

Configuration items for the job, specified as key-value pairs.

Constraints

None

Range

For details about the supported configuration items, see Table 3.

Default Value

None

tags

No

Array of objects

Definition

Job tags. For details, see Table 4.

Constraints

None

Range

None

Default Value

None

engine_type

No

String

Definition

Type of the engine that executes jobs.

Constraints

None

Range

The options are spark and hetuEngine. The default value is spark.

  • spark: Spark engine
  • hetuEngine: HetuEngine engine

For details about the engine types and descriptions, see DLI Overview.

Default Value

spark
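The following is a rough sketch (in Python) of how the Table 2 parameters can be combined into a request body. All values are illustrative placeholders, and the conf string mirrors the key = value format used in the example request later in this section.

    # Illustrative request body built from the fields in Table 2.
    # The values are placeholders, not recommendations.
    body = {
        "sql": "SELECT * FROM table1 LIMIT 10",   # mandatory: SQL statement to execute
        "currentdb": "db1",                       # optional: database the statement runs in
        "queue_name": "default",                  # optional: queue the job is submitted to
        "engine_type": "spark",                   # optional: "spark" (default) or "hetuEngine"
        "conf": [                                 # optional: key-value strings, see Table 3
            "dli.sql.shuffle.partitions = 200"
        ],
        "tags": [                                 # optional: job tags, see Table 4
            {"key": "workspace", "value": "space1"}
        ],
    }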

Table 3 Configuration parameters description

Parameter

Description

spark.sql.files.maxRecordsPerFile

Definition

Maximum number of records to be written into a single file. If the value is zero or negative, there is no limit.

Constraints

None

Range

None

Default Value

0

spark.sql.autoBroadcastJoinThreshold

Definition

Maximum size, in bytes, of a table that can be broadcast to all worker nodes when a join is executed. Setting this parameter to -1 disables broadcasting.

Constraints

Statistics are currently supported only for Hive Metastore tables on which the ANALYZE TABLE COMPUTE STATISTICS noscan command has been run, and for file-based data source tables whose statistics are computed directly from the data files.

Range

None

Default Value

209715200

spark.sql.shuffle.partitions

Definition

Default number of partitions used when shuffling data for joins or aggregations.

Constraints

None

Range

None

Default Value

200

spark.sql.dynamicPartitionOverwrite.enabled

Definition

Whether DLI deletes all partitions that meet the conditions before overwriting the partitions

Constraints

None

Range

  • When set to false, DLI will delete all partitions that meet the conditions before overwriting them.

    For example, if this parameter is set to false and you use INSERT OVERWRITE to write data for the 2021-02 partition to a partitioned table that already contains a 2021-01 partition, the existing 2021-01 partition is also deleted.

  • When set to true, DLI will not delete partitions in advance, but will overwrite partitions with data written during runtime.

Default Value

false

spark.sql.files.maxPartitionBytes

Definition

Maximum number of bytes to be packed into a single partition when a file is read.

Constraints

None

Range

None

Default Value

134217728

spark.sql.badRecordsPath

Definition

Path of bad records

Constraints

None

Range

None

Default Value

None

spark.sql.legacy.correlated.scalar.query.enabled

Definition

Controls the behavior of correlated subqueries.

Constraints

None

Range

  • If set to true:
    • When there is no duplicate data in a subquery, executing a correlated subquery does not require deduplication from the subquery's result.
    • If there is duplicate data in a subquery, executing a correlated subquery will result in an error. To resolve this, the subquery's result must be deduplicated using functions such as max() or min().
  • If set to false:

    Regardless of whether there is duplicate data in a subquery, executing a correlated subquery requires deduplicating the subquery's result using functions such as max() or min(). Otherwise, an error will occur.

Default Value

false

dli.jobs.sql.resubmit.enable

Definition

Controls whether Spark SQL jobs are automatically resubmitted when the driver fails or the queue is restarted.

Constraints

If set to true, data consistency issues may occur for non-idempotent operations such as INSERT (for example, insert into, load data, update): if the driver fails and the job is retried, data that was already written before the failure may be written again.

Range

  • false: Disables job retry. No commands are resubmitted, and the job is marked as failed once the driver fails.
  • true: Enables job retry. All types of jobs are resubmitted in the event of a driver failure.

Default Value

null

spark.sql.optimizer.dynamicPartitionPruning.enabled

Definition

This parameter is used to control whether to enable dynamic partition pruning. Dynamic partition pruning can help reduce the amount of data that needs to be scanned and improve query performance when executing SQL queries.

Constraints

None

Range

  • When set to true, dynamic partition pruning is enabled. During a query, SQL automatically detects and skips partitions that do not meet the WHERE clause conditions. This is useful for tables with a large number of partitions.
  • If SQL queries contain a large number of nested left join operations and the table has a large number of dynamic partitions, parsing the data may consume a large amount of memory, leaving the driver node short of memory and causing frequent Full GCs.

    To avoid such issues, you can disable dynamic partition pruning by setting this parameter to false.

    However, disabling this optimization may reduce query performance. Once disabled, Spark does not automatically prune the partitions that do not meet the requirements.

Default Value

true
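Assuming the conf entries use the same key = value string format as the example request in this section, a conf list that sets a few of the Table 3 options might look like the following sketch; the specific values are examples only, not recommendations.

    # Illustrative conf entries; the keys come from Table 3, the values are examples only.
    conf = [
        "spark.sql.shuffle.partitions = 64",                   # shuffle parallelism for joins/aggregations
        "spark.sql.dynamicPartitionOverwrite.enabled = true",  # overwrite only the partitions written at runtime
        "spark.sql.files.maxPartitionBytes = 268435456",       # pack at most 256 MiB into one read partition
        "spark.sql.autoBroadcastJoinThreshold = -1",           # disable broadcast joins
    ]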

Table 4 tags parameters

Parameter

Mandatory

Type

Description

key

Yes

String

Definition

Tag key

Constraints

None

Range

A tag key can contain up to 128 characters, cannot start or end with a space, and cannot start with _sys_. Only letters, digits, spaces, and the following special characters are allowed: _.:+-@

Default Value

None

value

Yes

String

Definition

Tag value

Constraints

None

Range

A tag value can contain up to 255 characters. Only letters, digits, spaces, and the following special characters are allowed: _.:+-@

Default Value

None
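The tag constraints in Table 4 can be checked on the client before submission. The helper below is a hypothetical sketch of such a check; the function name and the regular expression are illustrative and not part of the API.

    import re

    # Characters permitted in tag keys and values per Table 4:
    # letters, digits, spaces, and _ . : + - @
    _ALLOWED = re.compile(r"^[A-Za-z0-9 _.:+\-@]+$")

    def is_valid_tag(key: str, value: str) -> bool:
        """Hypothetical client-side check mirroring the Table 4 constraints."""
        if not (0 < len(key) <= 128) or len(value) > 255:
            return False
        if key != key.strip() or key.startswith("_sys_"):   # no leading/trailing spaces, no _sys_ prefix
            return False
        if not _ALLOWED.match(key):
            return False
        return value == "" or bool(_ALLOWED.match(value))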

Response Parameters

Table 5 Response parameters

Parameter

Mandatory

Type

Description

is_success

Yes

Boolean

Definition

Whether the request is successfully executed. true indicates that the request is successfully executed.

Range

None

message

Yes

String

Definition

System prompt. If the execution succeeds, this parameter may be left blank.

Range

None

job_id

Yes

String

Definition

ID of a job returned after a job is generated and submitted using SQL statements. The job ID can be used to query the job status and results.

Range

None

job_type

Yes

String

Definition

Job type

Range

DDL, DCL, IMPORT, EXPORT, QUERY, and INSERT.

  • DDL: jobs that create, modify, and delete metadata files
  • DCL: jobs that grant and revoke permissions
  • IMPORT: jobs that import external data into the database
  • EXPORT: jobs that export data to an external database
  • QUERY: jobs that run query statements
  • INSERT: jobs that add new data to tables

schema

No

Array of Map

Definition

When the statement type is DDL, the column names and types of the DDL result are returned.

Range

None

rows

No

Array of objects

Definition

When the statement type is DDL and dli.sql.sqlasync.enabled is set to false, the execution results are returned directly. However, only a maximum of 1,000 rows can be returned.

If there are more than 1,000 rows, obtain the results asynchronously: when submitting the job, set dli.sql.sqlasync.enabled to true, and then obtain the results from the job bucket configured for DLI. The path of the results in the job bucket can be obtained from result_path in the return value of the ShowSqlJobStatus API. The full result data is automatically exported to the job bucket.

Range

None

job_mode

No

String

Definition

Job execution mode. The options are as follows:

  • async: asynchronous
  • sync: synchronous

Range

None

Example Request

Submit a SQL job that runs in database db1 on the default queue, and add the tags workspace=space1 and jobName=name1 to the job.

{
    "currentdb": "db1",
    "sql": "desc table1",
    "queue_name": "default",
    "conf": [
        "dli.sql.shuffle.partitions = 200"
    ],
    "tags": [
        {
            "key": "workspace",
            "value": "space1"
        },
        {
            "key": "jobName",
            "value": "name1"
        }
    ]
}

Example Response

{
  "is_success": true,
  "message": "",
  "job_id": "8ecb0777-9c70-4529-9935-29ea0946039c",
  "job_type": "DDL",
  "job_mode":"sync",
  "schema": [
    {
      "col_name": "string"
    },
    {
      "data_type": "string"
    },
    {
      "comment": "string"
    }
  ],
  "rows": [
    [
      "c1",
      "int",
      null
    ],
    [
      "c2",
      "string",
      null
    ]
  ]
}
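As a rough sketch of how a client might read the fields described in Table 5, the following Python snippet parses the example response above; the variable names are illustrative.

    # Parsed JSON of the example response above (values copied from it).
    resp_body = {
        "is_success": True,
        "message": "",
        "job_id": "8ecb0777-9c70-4529-9935-29ea0946039c",
        "job_type": "DDL",
        "job_mode": "sync",
        "schema": [{"col_name": "string"}, {"data_type": "string"}, {"comment": "string"}],
        "rows": [["c1", "int", None], ["c2", "string", None]],
    }

    if resp_body["is_success"]:
        print("job_id:", resp_body["job_id"])   # keep this ID to query the job status and results later
        if resp_body["job_type"] == "DDL" and resp_body.get("rows"):
            # For synchronous DDL jobs, the schema gives the column names and up to
            # 1,000 result rows are returned inline (see the rows parameter in Table 5).
            columns = [next(iter(col)) for col in resp_body["schema"]]
            for row in resp_body["rows"]:
                print(dict(zip(columns, row)))
    else:
        print("Submission failed:", resp_body["message"])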

Status Codes

Table 6 describes the status codes.

Table 6 Status codes

Status Code

Description

200

Submitted successfully.

400

Request error.

500

Internal service error.

Error Codes

If an error occurs when this API is invoked, the system does not return a result similar to the preceding example but returns an error code and error message instead. For details, see Error Codes.