Querying the exe Object List of Jobs (Deprecated)
Function
This API is used to query the exe object list of all jobs. It is incompatible with Sahara.
URI
- Format

  GET /v1.1/{project_id}/job-exes

- Parameter description

  Table 1 URI parameters

  | Parameter | Mandatory | Description |
  | --- | --- | --- |
  | project_id | Yes | The project ID. For details about how to obtain the project ID, see Obtaining a Project ID. |
Request Parameters
| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| cluster_id | Yes | String | Cluster ID |
| id | No | String | Job execution object ID |
| page_size | No | Integer | Maximum number of jobs displayed on a page. Value range: 1 to 100 |
| current_page | No | Integer | Current page number |
| job_name | No | String | Job name |
| state | No | Integer | Job status code |
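For illustration, a minimal Python sketch of one way to send this request is shown below. The endpoint host, project ID, token, and cluster ID are hypothetical placeholders, and token authentication via an X-Auth-Token header is an assumption. Note that the table above names the cluster parameter cluster_id, while the example request later in this section uses clusterId; the sketch follows the table.

```python
# Minimal sketch of calling this API, assuming token authentication.
# The endpoint host, project_id, token, and cluster_id values are
# hypothetical placeholders, not values taken from this document.
import requests

endpoint = "https://mrs.example.com"
project_id = "0123456789abcdef0123456789abcdef"   # see Obtaining a Project ID
token = "<token>"                                  # obtained separately

url = f"{endpoint}/v1.1/{project_id}/job-exes"
params = {
    "cluster_id": "20ca8601-72a2-4570-b788-2a20fec81a95",  # mandatory
    "page_size": 10,           # optional, 1 to 100
    "current_page": 1,         # optional
    "job_name": "myfirstjob",  # optional
    "state": 3,                # optional job status code
}

response = requests.get(url, headers={"X-Auth-Token": token}, params=params)
response.raise_for_status()
body = response.json()
print(body["totalRecord"])
```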
Response Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| totalRecord | Integer | Total number of jobs in the job list |
| job_executions | Array | Job list parameters. For details, see Table 4. |
Table 4 job_executions parameters

| Parameter | Type | Description |
| --- | --- | --- |
| id | String | Job ID |
| create_at | Integer | Creation time, which is a 13-digit timestamp. |
| update_at | Integer | Update time, which is a 13-digit timestamp. |
| tenant_id | String | Project ID. For details about how to obtain the project ID, see Obtaining a Project ID. |
| job_id | String | YARN job ID |
| job_name | String | Job name |
| start_time | Integer | Start time of job execution, which is a 13-digit timestamp. |
| end_time | Integer | End time of job execution, which is a 13-digit timestamp. |
| cluster_id | String | Cluster ID of a job |
| group_id | String | Group ID of a job |
| jar_path | String | Path of the .jar file or .sql file for program execution |
| input | String | Address for inputting data |
| output | String | Address for outputting data |
| job_log | String | Address for storing job logs |
| job_type | Integer | Job type code |
| file_action | String | Data import and export |
| arguments | String | Key parameters for program execution, specified by the user's program. MRS only loads and passes them to the program. This parameter can be empty. |
| hql | String | HiveQL statement |
| job_state | Integer | Job status code |
| job_final_status | Integer | Final job status code |
| hive_script_path | String | Address of the Hive script |
| create_by | String | ID of the user who created the job |
| finished_step | Integer | Number of completed steps |
| job_main_id | String | Main ID of a job |
| job_step_id | String | Step ID of a job |
| postpone_at | Integer | Delay time, which is a 13-digit timestamp. |
| step_name | String | Step name of a job |
| step_num | Integer | Number of steps |
| task_num | Integer | Number of tasks |
| update_by | String | ID of the user who updated the job |
| spend_time | Integer | Duration of job execution (unit: s) |
| step_seq | Integer | Step sequence of a job |
| progress | String | Job execution progress |
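Because create_at, update_at, start_time, end_time, and postpone_at are 13-digit (millisecond) timestamps, a client typically divides them by 1000 before converting to a date. The sketch below is an illustration only; it assumes body is the parsed JSON response (for example, response.json() from the request sketch above) and that time fields may be null, as in the example response that follows.

```python
# Minimal sketch of reading the response body; `body` is assumed to be
# the parsed JSON dict returned by this API.
from datetime import datetime, timezone

def to_datetime(ms):
    # 13-digit timestamps are milliseconds since the epoch; fields may be null.
    if ms is None:
        return None
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def summarize(body):
    print("total jobs:", body["totalRecord"])
    for job in body["job_executions"]:
        print(job["job_name"],
              "state:", job["job_state"],
              "started:", to_datetime(job["start_time"]),
              "ended:", to_datetime(job["end_time"]))
```

Applied to the example response below, this prints a single line for myfirstjob with a start time of 2017-01-17 (UTC) and no end time.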
Example
- Example request
GET /v1.1/{project_id}/job-exes?page_size=10&current_page=1&state=3&job_name=myfirstjob&clusterId=20ca8601-72a2-4570-b788-2a20fec81a95
- Example response
{ "totalRecord": 14, "job_executions": [ { "id": "669476bd-89d2-45aa-8f1a-872d16de377e", "create_at": 1484641003707, "update_at": 1484641003707, "tenant_id": "3f99e3319a8943ceb15c584f3325d064", "job_id": "", "job_name": "myfirstjob", "start_time": 1484641003707, "end_time": null, "cluster_id": "2b460e01-3351-4170-b0a7-57b9dd5ffef3", "group_id": "669476bd-89d2-45aa-8f1a-872d16de377e", "jar_path": "s3a://jp-test1/program/hadoop-mapreduce-examples-2.4.1.jar", "input": "s3a://jp-test1/input/", "output": "s3a://jp-test1/output/", "job_log": "s3a://jp-test1/joblogs/", "job_type": 1, "file_action": "", "arguments": "wordcount", "hql": "", "job_state": 2, "job_final_status": 1, "hive_script_path": null, "create_by": "3f99e3319a8943ceb15c584f3325d064", "finished_step": 0, "job_main_id": "", "job_step_id": "", "postpone_at": 1484641003174, "step_name": "", "step_num": 0, "task_num": 0, "update_by": "3f99e3319a8943ceb15c584f3325d064", "spend_time": null, "step_seq": 222, "progress": "first progress" } ] }