Querying Job Details
Function
This API is used to query details of a job.
Debugging
You can debug this API in API Explorer.
URI
- URI format
- Parameter description
Table 1 URI parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID, which is used for resource isolation. For details about how to obtain its value, see Obtaining a Project ID. |
| job_id | Yes | String | Job ID. |
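As a quick illustration, the two URI parameters above are substituted into the request path. This is a sketch only: the URI pattern below is an assumption (the actual URI format is not shown in this section), the endpoint is a placeholder, and the IDs are taken from the example response later in this article.

```python
# Illustrative sketch only: the URI pattern below is an ASSUMPTION,
# not the documented URI format. Check the service's API reference
# for the real path. The IDs come from the example response.

def build_job_detail_url(endpoint: str, project_id: str, job_id: int) -> str:
    """Build a GET URL for this API under the assumed URI pattern."""
    return f"{endpoint}/v1.0/{project_id}/streaming/jobs/{job_id}"

url = build_job_detail_url(
    "https://dli.example.com",           # placeholder endpoint
    "330e068af1334c9782f4226acc00a2e2",  # project_id from the example response
    104,                                 # job_id from the example response
)
print(url)
```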
Request
None
Response
Table 2 Response parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| is_success | No | Boolean | Whether the request is successfully executed. The value true indicates that the request is successfully executed. |
| message | No | String | System message. If the request succeeds, this parameter may be left blank. |
| job_detail | No | Object | Job details. For details, see Table 3. |
Table 3 job_detail parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| job_id | No | Long | Job ID. |
| name | No | String | Name of the job. Length range: 0 to 57 characters. |
| desc | No | String | Job description. Length range: 0 to 512 characters. |
| job_type | No | String | Job type, for example, flink_jar_job or flink_opensource_sql_job. |
| status | No | String | Job status, for example, job_init. |
| status_desc | No | String | Description of job status. |
| create_time | No | Long | Time when a job is created. |
| start_time | No | Long | Time when a job is started. |
| user_id | No | String | ID of the user who creates the job. |
| queue_name | No | String | Name of a queue. Length range: 1 to 128 characters. |
| project_id | No | String | ID of the project to which a job belongs. |
| sql_body | No | String | Stream SQL statement. |
| run_mode | No | String | Job running mode, for example, exclusive_cluster. |
| job_config | No | Object | Job configurations. For details, see Table 4. |
| main_class | No | String | Main class of a JAR package, for example, org.apache.spark.examples.streaming.JavaQueueStream. |
| entrypoint_args | No | String | Running parameter of a JAR package job. Multiple parameters are separated by spaces. |
| execution_graph | No | String | Job execution plan. |
| update_time | No | Long | Time when a job is updated. |
| feature | No | String | Job feature: the type of the Flink image used by a job. The options are basic (DLI basic Flink image) and custom (user-defined Flink image). |
| flink_version | No | String | Flink version. This parameter is valid only when feature is set to basic. You can use this parameter with the feature parameter to specify the version of the DLI basic Flink image used for job running. |
| image | No | String | Custom image. The format is Organization name/Image name:Image version. This parameter is valid only when feature is set to custom. You can use this parameter with the feature parameter to specify a user-defined Flink image for job running. For details about how to use custom images, see the Data Lake Insight User Guide. |
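The feature, flink_version, and image descriptions above imply a mutual-exclusion constraint, sketched below as an illustrative client-side check. The helper is not part of the API, and the version string in the usage line is a placeholder.

```python
# Illustrative helper (not part of the API): enforces the constraints
# stated above -- flink_version is valid only when feature is "basic",
# and image is valid only when feature is "custom".

def validate_feature_fields(feature=None, flink_version=None, image=None):
    """Raise ValueError if the feature-related fields conflict."""
    if flink_version is not None and feature != "basic":
        raise ValueError("flink_version is valid only when feature is 'basic'")
    if image is not None and feature != "custom":
        raise ValueError("image is valid only when feature is 'custom'")
    return True

# Placeholder version string, for illustration only.
print(validate_feature_fields(feature="basic", flink_version="1.x"))
```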
Table 4 job_config parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| checkpoint_enabled | No | Boolean | Whether to enable the automatic job snapshot function. The default value is false. |
| checkpoint_mode | No | String | Snapshot mode. There are two options: exactly_once and at_least_once. The default value is exactly_once. |
| checkpoint_interval | No | Integer | Snapshot interval. The unit is second. The default value is 10. |
| log_enabled | No | Boolean | Whether to enable the log storage function. The default value is false. |
| obs_bucket | No | String | Name of an OBS bucket. |
| smn_topic | No | String | SMN topic name. If a job fails, the system will send a message to users subscribed to the SMN topic. |
| root_id | No | Integer | Parent job ID. |
| edge_group_ids | No | Array of Strings | List of edge computing group IDs. Use commas (,) to separate multiple IDs. |
| manager_cu_number | No | Integer | Number of CUs of the management unit. The default value is 1. |
| cu_number | No | Integer | Number of CUs selected for a job. This parameter is valid only when show_detail is set to true. The default value is 2. |
| parallel_number | No | Integer | Number of concurrent jobs set by a user. This parameter is valid only when show_detail is set to true. The default value is 1. |
| restart_when_exception | No | Boolean | Whether to enable automatic restart upon exceptions. |
| idle_state_retention | No | Integer | Retention time of idle state before it expires. |
| udf_jar_url | No | String | Name of the package that has been uploaded to the DLI resource management system. This parameter specifies the UDF JAR file of a SQL job. |
| dirty_data_strategy | No | String | Dirty data policy of a job. |
| entrypoint | No | String | Name of the package that has been uploaded to the DLI resource management system. This parameter is used to customize the JAR file where the job main class is located. |
| dependency_jars | No | Array of Strings | Name of the package that has been uploaded to the DLI resource management system. This parameter is used to customize other dependency packages. |
| dependency_files | No | Array of Strings | Name of the resource package that has been uploaded to the DLI resource management system. This parameter is used to customize dependency files. |
| executor_number | No | Integer | Number of compute nodes in a job. |
| executor_cu_number | No | Integer | Number of CUs in a compute node. |
| resume_checkpoint | No | Boolean | Whether to restore data from the latest checkpoint when the system automatically restarts upon an exception. The default value is false. |
| runtime_config | No | String | Custom optimization parameters used when a Flink job is running. |
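Several job_config fields above have documented defaults that apply when the service returns null for a field. A minimal client-side sketch of overlaying a returned job_config onto those defaults (the helper is illustrative; the default values restate the table):

```python
# Documented defaults from the job_config table above.
JOB_CONFIG_DEFAULTS = {
    "checkpoint_enabled": False,
    "checkpoint_mode": "exactly_once",
    "checkpoint_interval": 10,   # seconds
    "log_enabled": False,
    "manager_cu_number": 1,
    "cu_number": 2,
    "parallel_number": 1,
    "resume_checkpoint": False,
}

def effective_job_config(job_config: dict) -> dict:
    """Overlay a returned job_config onto the documented defaults,
    ignoring fields the service returned as null (None)."""
    merged = dict(JOB_CONFIG_DEFAULTS)
    merged.update({k: v for k, v in job_config.items() if v is not None})
    return merged

cfg = effective_job_config({"checkpoint_interval": 30, "obs_bucket": None})
print(cfg["checkpoint_interval"], cfg["checkpoint_mode"])
```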
Example
- Example request
None
- Example response
Example 1: Flink JAR job

```json
{
  "is_success": "true",
  "message": "Querying of the job details succeeds.",
  "job_detail": {
    "job_id": 104,
    "user_id": "011c99a26ae84a1bb963a75e7637d3fd",
    "queue_name": "flinktest",
    "project_id": "330e068af1334c9782f4226acc00a2e2",
    "name": "jptest",
    "desc": "",
    "sql_body": "",
    "run_mode": "exclusive_cluster",
    "job_type": "flink_jar_job",
    "job_config": {
      "checkpoint_enabled": false,
      "checkpoint_interval": 10,
      "checkpoint_mode": "exactly_once",
      "log_enabled": false,
      "obs_bucket": null,
      "root_id": -1,
      "edge_group_ids": null,
      "graph_editor_enabled": false,
      "graph_editor_data": "",
      "manager_cu_number": 1,
      "executor_number": null,
      "executor_cu_number": null,
      "cu_number": 2,
      "parallel_number": 1,
      "smn_topic": null,
      "restart_when_exception": false,
      "idle_state_retention": 3600,
      "config_url": null,
      "udf_jar_url": null,
      "dirty_data_strategy": null,
      "entrypoint": "FemaleInfoCollection.jar",
      "dependency_jars": [
        "FemaleInfoCollection.jar",
        "ObsBatchTest.jar"
      ],
      "dependency_files": [
        "FemaleInfoCollection.jar",
        "ReadFromResource"
      ]
    },
    "main_class": null,
    "entrypoint_args": null,
    "execution_graph": null,
    "status": "job_init",
    "status_desc": "",
    "create_time": 1578466221525,
    "update_time": 1578467395713,
    "start_time": null
  }
}
```

Example 2: Flink OpenSource SQL job

```json
{
  "is_success": "true",
  "message": "The job information query succeeds.",
  "job_detail": {
    "job_type": "flink_opensource_sql_job",
    "job_config": {
      "auto_scaling_enable": true,
      "auto_scaling_interval": 1,
      "max_parallelism": 12,
      "real_cu_number": 2
    }
  }
}
```
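A minimal sketch of handling a response like the example above. Note that the example returns is_success as the string "true" although the response table documents a Boolean, so both forms are tolerated here; the abbreviated body below keeps only a few fields from the example.

```python
import json

# Abbreviated response body, reduced from the example above.
response_body = '''
{
  "is_success": "true",
  "message": "Querying of the job details succeeds.",
  "job_detail": {"job_id": 104, "job_type": "flink_jar_job", "status": "job_init"}
}
'''

payload = json.loads(response_body)
# Accept both Boolean true and the string "true" seen in the example.
succeeded = payload.get("is_success") in (True, "true")
detail = payload.get("job_detail") or {}
print(succeeded, detail.get("job_id"), detail.get("status"))
```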
Status Codes
Table 5 describes the status code.
Error Codes
If an error occurs when this API is invoked, the system does not return a result similar to the preceding example. Instead, it returns an error code and error message. For details, see Error Code.