Creating a SQL Job
Function
This API is used to create a Flink streaming SQL job.
Debugging
You can debug this API in API Explorer.
URI
- URI format
- Parameter description
Table 1 URI parameter

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID, which is used for resource isolation. For details about how to obtain its value, see Obtaining a Project ID. |
Request
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Name of the job. Length range: 0 to 57 characters. |
| desc | No | String | Job description. Length range: 0 to 512 characters. |
| template_id | No | Integer | Template ID. If both template_id and sql_body are specified, sql_body is used. If template_id is specified but sql_body is not, the SQL statement of the specified template is used as sql_body. |
| queue_name | No | String | Name of a queue. Length range: 1 to 128 characters. |
| sql_body | No | String | Stream SQL statement, which includes at least the following three parts: source, query, and sink. Maximum length: 1024 x 1024 characters. |
| run_mode | No | String | Job running mode. The options include shared_cluster and exclusive_cluster. The default value is shared_cluster. |
| cu_number | No | Integer | Number of CUs selected for a job. The default value is 2. |
| parallel_number | No | Integer | Number of parallel jobs set by a user. The default value is 1. |
| checkpoint_enabled | No | Boolean | Whether to enable the automatic job snapshot function. |
| checkpoint_mode | No | Integer | Snapshot mode. There are two options: 1 (exactly_once, data is processed only once) and 2 (at_least_once, data is processed at least once). The default value is 1. |
| checkpoint_interval | No | Integer | Snapshot interval. The unit is second. The default value is 10. |
| obs_bucket | No | String | OBS path where users are authorized to save the snapshot (valid only when checkpoint_enabled is set to true) or the job log (valid only when log_enabled is set to true). |
| log_enabled | No | Boolean | Whether to enable the function of uploading job logs to users' OBS buckets. The default value is false. |
| smn_topic | No | String | SMN topic. If a job fails, the system will send a message to users subscribed to the SMN topic. |
| restart_when_exception | No | Boolean | Whether to enable the function of automatically restarting a job upon job exceptions. The default value is false. |
| idle_state_retention | No | Integer | Retention time of the idle state. The unit is hour. The default value is 1. |
| job_type | No | String | Job type. This parameter can be set to flink_sql_job, flink_sql_edge_job, or flink_opensource_sql_job. |
| edge_group_ids | No | Array of Strings | List of edge computing group IDs. Use commas (,) to separate multiple IDs. |
| dirty_data_strategy | No | String | Dirty data policy of a job. The default value is 0. |
| udf_jar_url | No | String | Name of the resource package that has been uploaded to the DLI resource management system. The UDF Jar file of the SQL job is specified by this parameter. |
| manager_cu_number | No | Integer | Number of CUs in the JobManager selected for a job. The default value is 1. |
| tm_cus | No | Integer | Number of CUs for each TaskManager. The default value is 1. |
| tm_slot_num | No | Integer | Number of slots in each TaskManager. The default value is (parallel_number*tm_cus)/(cu_number-manager_cu_number). |
| resume_checkpoint | No | Boolean | Whether the abnormal restart is recovered from the checkpoint. |
| resume_max_num | No | Integer | Maximum number of retries upon exceptions. The unit is times/hour. Value range: -1 or a value greater than 0. The default value is -1, indicating that the number of retries is unlimited. |
| tags | No | Array of Strings | Label of a Flink SQL job. For details, see Table 3. |
| runtime_config | No | String | Customizes optimization parameters when a Flink job is running. |
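The request body described above can be assembled programmatically. The following Python sketch builds a minimal job-creation payload and applies the documented defaults, including the default tm_slot_num formula (parallel_number*tm_cus)/(cu_number-manager_cu_number). The function name build_sql_job_payload is a hypothetical helper for illustration, not part of any DLI SDK.

```python
import json


def build_sql_job_payload(name, sql_body, queue_name=None,
                          cu_number=2, parallel_number=1,
                          manager_cu_number=1, tm_cus=1):
    """Assemble a job-creation request body using the documented defaults.

    Hypothetical helper for illustration; it is not part of any DLI SDK.
    """
    if len(name) > 57:
        raise ValueError("name must be at most 57 characters")
    payload = {
        "name": name,
        "sql_body": sql_body,           # must contain source, query, and sink
        "run_mode": "shared_cluster",   # documented default
        "cu_number": cu_number,         # default 2
        "parallel_number": parallel_number,
        "manager_cu_number": manager_cu_number,
        "tm_cus": tm_cus,
        # Default number of slots per TaskManager, per the table above.
        "tm_slot_num": (parallel_number * tm_cus)
                       // (cu_number - manager_cu_number),
    }
    if queue_name is not None:
        payload["queue_name"] = queue_name
    return payload


body = build_sql_job_payload("myjob", "select * from source_table",
                             queue_name="testQueue")
print(json.dumps(body, indent=2))
```

With the defaults above, tm_slot_num evaluates to (1 x 1) / (2 - 1) = 1, matching the formula in the table.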
Response
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| is_success | No | Boolean | Whether the request is successfully executed. The value true indicates success. |
| message | No | String | Message content. |
| job | No | Object | Information about the job status. For details, see Table 5. |
Example
- Example request

```json
{
  "name": "myjob",
  "desc": "This is a job used for counting characters.",
  "template_id": 100000,
  "queue_name": "testQueue",
  "sql_body": "select * from source_table",
  "run_mode": "exclusive_cluster",
  "cu_number": 2,
  "parallel_number": 1,
  "checkpoint_enabled": false,
  "checkpoint_mode": "exactly_once",
  "checkpoint_interval": 0,
  "obs_bucket": "my_obs_bucket",
  "log_enabled": false,
  "restart_when_exception": false,
  "idle_state_retention": 3600,
  "job_type": "flink_sql_job",
  "edge_group_ids": [
    "62de1e1c-066e-48a8-a79d-f461a31b2ee1",
    "2eb00f85-99f2-4144-bcb7-d39ff47f9002"
  ],
  "dirty_data_strategy": "0",
  "udf_jar_url": "group/test.jar"
}
```

- Example response

```json
{
  "is_success": "true",
  "message": "A DLI job is created successfully.",
  "job": {
    "job_id": 148,
    "status_name": "job_init",
    "status_desc": ""
  }
}
```
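A caller typically checks is_success and records the returned job_id for later queries. A minimal Python sketch, assuming the response body has already been received as a JSON string (note that the example above encodes is_success as the string "true" rather than a boolean):

```python
import json

response_text = """
{ "is_success": "true", "message": "A DLI job is created successfully.",
  "job": { "job_id": 148, "status_name": "job_init", "status_desc": "" } }
"""

resp = json.loads(response_text)

# The example encodes is_success as the string "true"; accept both forms.
ok = resp.get("is_success") in (True, "true")
if ok:
    job_id = resp["job"]["job_id"]       # e.g. 148
    status = resp["job"]["status_name"]  # e.g. "job_init"
    print(f"Created job {job_id}, status {status}")
else:
    print("Job creation failed:", resp.get("message"))
```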
Status Codes
Table 6 describes status codes.
Error Codes
If an error occurs when this API is invoked, the system does not return a result similar to the preceding example, but instead returns an error code and error information. For details, see Error Code.
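A caller can therefore branch on the shape of the body. The sketch below distinguishes the two cases; the error field names error_code and error_msg are an assumption for illustration, so consult the Error Code reference for the authoritative schema.

```python
import json


def summarize_result(body_text):
    """Return a one-line summary of a job-creation response body.

    The error field names below (error_code, error_msg) are assumptions
    for illustration; see the Error Code reference for the exact schema.
    """
    body = json.loads(body_text)
    if body.get("is_success") in (True, "true"):
        return f"job_id={body['job']['job_id']}"
    msg = body.get('error_msg', body.get('message'))
    return f"error {body.get('error_code', '?')}: {msg}"


print(summarize_result('{"is_success": "true", "message": "ok", '
                       '"job": {"job_id": 148, "status_name": "job_init"}}'))
```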