Viewing Job Details
Function
This API is used to view job details.
URI
GET /v1/{project_id}/jobs/{job_name}
- Parameter description

Parameter | Mandatory | Type | Description
---|---|---|---
project_id | Yes | String | Project ID. For details about how to obtain a project ID, see Project ID and Account ID.
job_name | Yes | String | Job name.
version | No | Integer | Job version number. If this parameter is specified, the job of the specified version is queried. If it is not specified, the job of the latest version is queried.
Request Parameters
Parameter | Mandatory | Type | Description
---|---|---|---
workspace | No | String | Workspace ID.
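A minimal sketch of calling this API with Python requests. The endpoint host and token handling are placeholders and assumptions, not values from this page (apart from the sample project ID and job name taken from the example request below):

```python
import requests

ENDPOINT = "https://dayu.example.com"  # assumed API endpoint; substitute your region's host
PROJECT_ID = "b384b9e9ab9b4ee8994c8633aabc9505"  # sample project ID from the example request
JOB_NAME = "myJob"

headers = {
    "X-Auth-Token": "<your-token>",  # assumed token-based authentication
    "workspace": "<workspace-id>",   # optional workspace ID
}

# Optional query parameter: request a specific job version.
params = {"version": 2}

resp = requests.get(
    f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/{JOB_NAME}",
    headers=headers,
    params=params,
)
resp.raise_for_status()
job = resp.json()
print(job["name"], job["processType"])
```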
Response Parameters
Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Job name. The name contains a maximum of 128 characters, including only letters, numbers, hyphens (-), underscores (_), and periods (.). The job name must be unique.
nodes | Yes | List<Node> | Node definition. For details, see Table 3.
schedule | Yes | Schedule data structure | Scheduling configuration. For details, see Table 4.
params | No | List<Param> | Job parameter definition. For details, see Table 5.
directory | No | String | Path of the job in the directory tree. If the directory of the path does not exist during job creation, a directory is automatically created in the root directory /, for example, /dir/a/.
processType | Yes | String | Job type. The options are REAL_TIME (real-time processing) and BATCH (batch processing).
id | No | Long | Job ID. This parameter is required for querying jobs.
createTime | No | Long | Job creation time.
downstreamJobs | No | String | Downstream jobs that depend on the current job.
lastUpdateUser | No | String | User who last updated the job.
logPath | No | String | OBS path for storing job run logs.
basicConfig | No | BasicConfig data structure | Basic job information. For details, see Table 28.
description | No | String | Job description.
cleanoverdueDays | No | Integer | Maximum timeout duration of the job.
cleanWaitingJob | No | String | Whether to clear waiting jobs.
emptyRunningJob | No | String | Whether to enable dry run.
version | No | String | Job version.
Table 3 Node data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Node name. The name contains a maximum of 128 characters, including only letters, numbers, hyphens (-), underscores (_), and periods (.). Names of the nodes in a job must be unique.
type | Yes | String | Node type, for example, HiveSQL.
location | Yes | Location data structure | Location of the node on the job canvas. For details, see Table 6.
preNodeName | No | List<String> | Names of the previous nodes on which the current node depends.
conditions | No | List<Condition> | Node execution condition. Whether the node is executed depends on the calculation result of the EL expression saved in the expression field of the condition. For details, see Table 7.
properties | Yes | List<Property> | Node properties. For details, see Table 13. Each type of node has its own property definition.
pollingInterval | No | Int | Interval at which node running results are checked. Unit: second. Value range: 1 to 60. Default value: 10.
execTimeOutRetry | No | String | Whether to retry the node upon timeout. The default value is false.
maxExecutionTime | No | Int | Maximum execution time of the node. If the node is not executed within the maximum execution time, it is set to the failed state. Unit: minute. Value range: 5 to 1440. Default value: 60.
retryTimes | No | Int | Number of node retries. The value ranges from 0 to 5; 0 indicates no retry. Default value: 0.
retryInterval | No | Int | Interval at which a retry is performed upon a failure, in seconds. The value ranges from 5 to 120. Default value: 120.
failPolicy | No | String | Node failure policy, for example, FAIL_CHILD.
eventTrigger | No | Event data structure | Event trigger for the real-time job node. For details, see Table 10.
cronTrigger | No | Cron data structure | Cron trigger for the real-time job node. For details, see Table 8.
Table 4 Schedule data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
type | Yes | String | Scheduling type, for example, CRON or EVENT.
cron | No | Data structure | When type is set to CRON, configure the scheduling frequency and start time. For details, see Table 9.
event | No | Data structure | When type is set to EVENT, configure information such as the event source. For details, see Table 10.
Table 5 Param data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Parameter name. It contains only letters, digits, hyphens (-), and underscores (_), and can contain no more than 64 characters.
value | Yes | String | Value of the parameter. It cannot exceed 1,024 characters.
type | No | String | Parameter type.
Table 6 Location data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
x | Yes | Int | Position of the node on the horizontal axis of the job canvas.
y | Yes | Int | Position of the node on the vertical axis of the job canvas.
Table 7 Condition data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
preNodeName | Yes | String | Name of the previous node on which the current node depends.
expression | Yes | String | EL expression. If the calculation result of the EL expression is true, this node is executed.
Table 9 Cron data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
startTime | Yes | String | Scheduling start time in the format yyyy-MM-dd'T'HH:mm:ssZ (ISO 8601). For example, 2018-10-22T23:59:59+08 indicates that the job starts to be scheduled at 23:59:59 on October 22, 2018.
endTime | No | String | Scheduling end time in the format yyyy-MM-dd'T'HH:mm:ssZ (ISO 8601). For example, 2018-10-22T23:59:59+08 indicates that the job stops being scheduled at 23:59:59 on October 22, 2018. If the end time is not set, the job is continuously executed based on the scheduling period.
expression | Yes | String | Cron expression in the format <second> <minute> <hour> <day> <month> <week>. For details about the values of each field, see Table 11.
expressionTimeZone | No | String | Time zone corresponding to the cron expression, for example, GMT+8. Default value: time zone where DataArts Studio is located.
period | Yes | String | Job execution interval, consisting of a number and a time unit, for example, 1 hours, 1 days, 1 weeks, or 1 months. The value must match the value of expression.
dependPrePeriod | No | Boolean | Whether the job depends on the execution result of its dependent job in the previous scheduling period. Default value: false.
dependJob | No | DependJob data structure | Dependent job configuration. For details, see Table 12 DependJob data structure description.
concurrent | No | Integer | Number of concurrent executions allowed.
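A sketch of a CRON schedule as a Python dict. The pairing of the expression and period values (an hourly job) is illustrative, not taken from the original page:

```python
# Illustrative CRON schedule: run hourly starting from the given time.
schedule = {
    "type": "CRON",
    "cron": {
        "startTime": "2018-10-22T23:59:59+08",
        "expression": "0 0 * * * ?",  # <second> <minute> <hour> <day> <month> <week>
        "expressionTimeZone": "GMT+8",
        "period": "1 hours",          # must match the expression
        "dependPrePeriod": False,
    },
}
```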
Table 8 Cron trigger data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
startTime | Yes | String | Scheduling start time in the format yyyy-MM-dd'T'HH:mm:ssZ (ISO 8601). For example, 2018-10-22T23:59:59+08 indicates that the job starts to be scheduled at 23:59:59 on October 22, 2018.
endTime | No | String | Scheduling end time in the format yyyy-MM-dd'T'HH:mm:ssZ (ISO 8601). For example, 2018-10-22T23:59:59+08 indicates that the job stops being scheduled at 23:59:59 on October 22, 2018. If the end time is not set, the job is continuously executed based on the scheduling period.
expression | Yes | String | Cron expression in the format <second> <minute> <hour> <day> <month> <week>. For details about the values of each field, see Table 11.
expressionTimeZone | No | String | Time zone corresponding to the cron expression, for example, GMT+8. Default value: time zone where DataArts Studio is located.
dependPrePeriod | No | Boolean | Whether the job depends on the execution result of its dependent job in the previous scheduling period. Default value: false.
dependJobs | No | DependJob data structure | Dependent job configuration. For details, see Table 12 DependJob data structure description.
intervalType | No | String | Scheduling interval type.
Table 10 Event data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
eventType | Yes | String | Event type. KAFKA: select the corresponding connection name and topic; the job is triggered when a new Kafka message is received. DIS: monitors newly reported data events from the DIS stream (currently, only such events can be monitored); each time a data record is reported, the job runs once. OBS: listens to the selected OBS path; scheduling is triggered when new files appear in the path. The path name can be referenced using the variable Job.trigger.obsNewFiles, and DIS notifications must have been configured for the OBS path.
channel | Yes | String | DIS stream name.
failPolicy | No | String | Job failure policy. Default value: SUSPEND.
concurrent | No | Int | Number of concurrently scheduled jobs. Value range: 1 to 128. Default value: 1.
readPolicy | No | String | Access policy. Default value: LAST.
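A sketch of an event-triggered schedule as a Python dict, assuming a DIS-triggered job; the stream name is a placeholder:

```python
# Illustrative EVENT schedule: run the job whenever new DIS records arrive.
schedule = {
    "type": "EVENT",
    "event": {
        "eventType": "DIS",
        "channel": "my-dis-stream",  # placeholder DIS stream name
        "failPolicy": "SUSPEND",     # default failure policy
        "concurrent": 1,             # 1-128 concurrently scheduled jobs
        "readPolicy": "LAST",        # default access policy
    },
}
```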
Table 11 Cron expression field description

Field | Value Range | Allowed Special Characters | Description
---|---|---|---
Second | 0-59 | , - * / | In the current version, only 0 is allowed.
Minute | 0-59 | , - * / | None
Hour | 0-23 | , - * / | None
Day | 1-31 | , - * ? / L W C | None
Month | 1-12 | , - * / | In the current version, only * is allowed.
Week | 1-7 | , - * ? / L C # | Starts from Sunday.
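As a worked example (constructed here, not taken from the original page), the fields of an expression that fires daily at 02:00:

```python
# Annotated cron expression: fires daily at 02:00:00.
#   0  0  2  *  *  ?
#   |  |  |  |  |  +-- week:   ? (no specific day of the week)
#   |  |  |  |  +----- month:  * (every month; only * is allowed)
#   |  |  |  +-------- day:    * (every day of the month)
#   |  |  +----------- hour:   2
#   |  +-------------- minute: 0
#   +----------------- second: 0 (only 0 is allowed)
expression = "0 0 2 * * ?"
```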
Table 12 DependJob data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
jobs | Yes | List<String> | List of dependent jobs. Only existing jobs can be depended on.
dependPeriod | No | String | Dependency period. Default value: SAME_PERIOD.
dependFailPolicy | No | String | Policy to be used when a dependent job fails. Default value: FAIL.
sameWorkSpaceJobs | No | DependWorkspaceJob data structure | List of dependent jobs in the current workspace. For details, see Table 31.
otherWorkSpaceJobs | No | DependWorkspaceJob data structure | List of dependent jobs in other workspaces. For details, see Table 31.
Table 13 Property data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Property name.
value | Yes | String | Property value.
Hive SQL node property description

Parameter | Mandatory | Type | Description
---|---|---|---
scriptName | Yes | String | Script name.
database | No | String | Name of the database in MRS Hive. The default value is default.
connectionName | No | String | Name of the connection.
scriptArgs | No | String | Script parameters in key=value format. Multiple parameters are separated by newline characters (\n), for example, key1=value1\nkey2=value2.
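Because scriptArgs is a single newline-delimited string rather than a map, a small helper (an illustrative convenience, not part of the API) can build it from a Python dict:

```python
# Build the newline-delimited scriptArgs string from a dict.
def build_script_args(args: dict) -> str:
    return "\n".join(f"{key}={value}" for key, value in args.items())

print(build_script_args({"key1": "value1", "key2": "value2"}))
# Output:
# key1=value1
# key2=value2
```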
Spark SQL node property description

Parameter | Mandatory | Type | Description
---|---|---|---
scriptName | Yes | String | Script name.
database | No | String | Name of the database in MRS Spark SQL. The default value is default.
connectionName | No | String | Name of the connection.
scriptArgs | No | String | Script parameters in key=value format. Multiple parameters are separated by newline characters (\n), for example, key1=value1\nkey2=value2.
DWS SQL node property description

Parameter | Mandatory | Type | Description
---|---|---|---
scriptName | Yes | String | Script name.
database | No | String | Name of the database in DWS. The default value is postgres.
connectionName | No | String | Name of the connection.
scriptArgs | No | String | Script parameters in key=value format. Multiple parameters are separated by newline characters (\n), for example, key1=value1\nkey2=value2.
DLI SQL node property description

Parameter | Mandatory | Type | Description
---|---|---|---
scriptName | Yes | String | Script name.
database | No | String | Name of the database in DLI.
connectionName | No | String | Name of the connection.
scriptArgs | No | String | Script parameters in key=value format. Multiple parameters are separated by newline characters (\n), for example, key1=value1\nkey2=value2.
Shell node property description

Parameter | Mandatory | Type | Description
---|---|---|---
scriptName | Yes | String | Script name.
connectionName | Yes | String | Name of the connection.
arguments | No | String | Shell script parameters.
CDM Job node property description

Parameter | Mandatory | Type | Description
---|---|---|---
clusterName | Yes | String | Cluster name. You can obtain the cluster name from the CDM cluster list on the DataArts Migration page of the DataArts Studio console.
jobName | Yes | String | Job name. To obtain the job name, access the DataArts Studio console, choose DataArts Migration, click a cluster name on the Cluster Management page, and click Job Management on the displayed page.
DIS dump node property description

Parameter | Mandatory | Type | Description
---|---|---|---
streamName | Yes | String | DIS stream name.
destinationType | Yes | String | Dump target.
duplicatePolicy | Yes | String | Policy for handling duplicate names.
configuration | Yes | Data structure | Dump configuration. For details, see the descriptions of the obs_destination_descriptor and cloudtable_destination_descriptor parameters.
CloudTable node property description

Parameter | Mandatory | Type | Description
---|---|---|---
namespace | No | String | Namespace. Default value: default.
action | Yes | String | Action type.
table | No | String | Table name.
columnFamily | No | String | Column family.
OBS Manager node property description

Parameter | Mandatory | Type | Description
---|---|---|---
action | Yes | String | Action type.
path | Yes | String | OBS path.
REST Client node property description

Parameter | Mandatory | Type | Description
---|---|---|---
url | Yes | String | URL of the cloud service.
method | Yes | String | HTTP method.
headers | No | String | HTTP message headers in the format <header name>=<value>. Multiple headers are separated by newline characters.
body | No | String | Message body.
SMN node property description

Parameter | Mandatory | Type | Description
---|---|---|---
topic | Yes | String | SMN topic URN. You can obtain the URN from the topic list.
subject | Yes | String | Message title, which is used as the subject of an email sent to a subscriber.
messageType | Yes | String | Message type.
message | Yes | String | Message to be sent.
MRS Spark node property description

Parameter | Mandatory | Type | Description
---|---|---|---
clusterName | Yes | String | MRS cluster name. You can obtain the cluster name from the list of active clusters.
jobName | Yes | String | MRS job name. The job name is user-defined.
resourcePath | Yes | String | OBS resource path of the custom Spark JAR package.
parameters | Yes | String | Custom parameters of the Spark JAR package.
input | No | String | Input data path of the MRS Spark job. The path can be an HDFS or OBS path.
output | No | String | Output data path of the MRS Spark job. The path can be an HDFS or OBS path.
programParameter | No | String | Program parameters. Multiple key-value pairs are allowed and separated by vertical bars (\|).
MRS MapReduce node property description

Parameter | Mandatory | Type | Description
---|---|---|---
clusterName | Yes | String | MRS cluster name. You can obtain the cluster name from the list of active clusters.
jobName | Yes | String | MRS job name. The job name is user-defined.
resourcePath | Yes | String | Resource path.
parameters | Yes | String | Job parameters.
input | Yes | String | Input data path of the MapReduce job. The path can be an HDFS or OBS path.
output | Yes | String | Output data path of the MapReduce job. The path can be an HDFS or OBS path.
DLI Spark node property description

Parameter | Mandatory | Type | Description
---|---|---|---
clusterName | Yes | String | DLI queue name. You can obtain the queue name from the queue management list.
jobName | Yes | String | DLI job name. You can obtain the job name from the job management list.
resourceType | No | String | Type of the running resource of the DLI job. This parameter is optional. OBS: OBS path; DLIResources: DLI package.
jobClass | No | String | Main class name. When the application type is .jar, the main class name cannot be empty.
resourcePath | Yes | String | JAR package resource path.
jarArgs | No | String | Main-class entry parameters.
sparkConfig | No | String | Running parameters of the Spark job.
Table 28 BasicConfig data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
owner | No | String | Job owner. The length cannot exceed 128 characters.
isIgnoreWaiting | No | Integer | Whether to exclude the waiting time from the instance timeout duration. The value can be 0 or 1: 0 indicates that the waiting time is not excluded, and 1 indicates that it is excluded.
priority | No | Int | Job priority. The value ranges from 0 to 2. The default value is 0. 0 indicates top priority, 1 indicates medium priority, and 2 indicates low priority.
executeUser | No | String | Job execution user. The value must be an existing username.
instanceTimeout | No | Int | Instance timeout interval, in minutes. The value ranges from 5 to 1440. The default value is 60.
customFields | No | Map<String,String> | User-defined fields. The length cannot exceed 2,048 characters.
agency | No | String | Agency name.
tags | No | List<String> | Job tags.
MRS Flink node property description

Parameter | Mandatory | Type | Description
---|---|---|---
clusterName | Yes | String | MRS cluster name. You can obtain the cluster name from the list of active clusters.
jobName | Yes | String | MRS job name. The job name is user-defined.
flinkJobType | Yes | String | Flink job type, which can be Flink SQL or Flink JAR.
flinkJobProcessType | Yes | String | Flink job processing mode, which can be batch or stream.
scriptName | No | String | SQL script associated with the Flink SQL job.
resourcePath | No | String | OBS resource path of the custom Flink JAR package.
input | No | String | Input data path of the MRS Flink job. The path can be an HDFS or OBS path.
output | No | String | Output data path of the MRS Flink job. The path can be an HDFS or OBS path.
programParameter | No | String | Program parameters. Multiple key-value pairs are allowed and separated by vertical bars (\|).
HetuEngine node property description

Parameter | Mandatory | Type | Description
---|---|---|---
clusterName | Yes | String | MRS cluster name. You can obtain the cluster name from the list of active clusters.
jobName | Yes | String | MRS job name. The job name is user-defined.
statementOrScript | Yes | String | Whether to use an SQL statement on the node or associate an SQL script with the node.
scriptName | No | String | SQL script to be associated with the node.
statement | No | String | Custom content of the SQL statement.
Data Warehouse | Yes | String | Data connection required by HetuEngine.
Schema | Yes | String | Name of the schema to be accessed through HetuEngine.
Database | Yes | String | Name of the database to be accessed through HetuEngine.
Queue | No | String | Name of the resource queue required by HetuEngine.
Table 31 DependWorkspaceJob data structure description

Parameter | Mandatory | Type | Description
---|---|---|---
workSpace | Yes | String | Workspace name.
workSpaceId | No | String | Workspace ID.
jobName | Yes | String | Job name.
dependOnLastPeriod | No | Boolean | Whether to depend on the latest instance of the upstream job.
dependType | No | String | Dependency rule.
dependOnLastNPeriodValue | No | Integer | Number of previous periods on which the job depends.
Example Request
```
GET /v1/b384b9e9ab9b4ee8994c8633aabc9505/jobs/myJob
```
Example Response
If dependencies=true is included in the query parameters when this API is called, the first layer of downstream jobs that depend on the current job is also returned.

```
GET /v1/b384b9e9ab9b4ee8994c8633aabc9505/jobs/xxx_1?dependencies=true
```

Here, xxx_1 indicates the job name. The downstream jobs that depend on the current job are returned in the downstreamJobs field, as shown in the following example response.
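A minimal sketch of fetching the job details together with its downstream dependencies, reusing the hypothetical ENDPOINT, PROJECT_ID, and headers from the earlier request example:

```python
# Query job details including first-layer downstream jobs (illustrative).
resp = requests.get(
    f"{ENDPOINT}/v1/{PROJECT_ID}/jobs/xxx_1",
    headers=headers,
    params={"dependencies": "true"},
)
resp.raise_for_status()
for downstream in resp.json().get("downstreamJobs", []):
    print(downstream["jobName"], downstream["workSpace"])
```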
- Success response
{ "basicConfig": { "agency": "", "customFields": {}, "executeUser": "", "instanceTimeout": 0, "owner": "test_user", "priority": 0, "tags": {} }, "cleanoverdueDays": "60", "cleanWaitingJob": "cleanup", "createTime": "1695642745000", "downstreamJobs": [ { "jobId": "163240", "jobName": "xxx_2", "workSpace": "workspace_name" } ], "description": "", "emptyRunningJob": "0", "id": 608627, "lastUpdateUser": "test_user", "logPath": "obs://dlf-test-log", "name": "myJob", "nodes": [ { "failPolicy": "FAIL_CHILD", "location": { "x": "-45.5", "y": "-134.5" }, "maxExecutionTime": 360, "name": "MRS_Hive_SQL", "pollingInterval": 20, "preNodeName": [], "properties": [ { "name": "scriptName", "value": "test_hive_sql" }, { "name": "connectionName", "value": "mrs_hive_test" }, { "name": "database", "value": "default" }, { "name": "scriptArgs", "value": "test_var=111" } ], "retryInterval": 120, "retryTimes": 0, "type": "HiveSQL" } ], "processType": "BATCH", "schedule": { "type": "CRON" } }
- Failure response
{ "error_code":"DLF.0100", "error_msg":"The job does not exists." }