Updated on 2024-11-29 GMT+08:00

Querying a List of Jobs

Function

This API is used to query the job list in a specified MRS cluster.

URI

  • Format

    GET /v2/{project_id}/clusters/{cluster_id}/job-executions

  • Parameter description
    Table 1 URI parameters

    Parameter

    Mandatory

    Type

    Description

    project_id

    Yes

    String

    Explanation

    Project ID. For details about how to obtain the project ID, see Obtaining a Project ID.

    Constraints

    N/A

    Value range

    The value must consist of 1 to 64 characters. Only letters and digits are allowed.

    Default value

    N/A

    cluster_id

    Yes

    String

    Explanation

    Cluster ID. If this parameter is specified, the latest metadata of the cluster that has been patched will be obtained. For details about how to obtain the cluster ID, see Obtaining a Cluster ID.

    Constraints

    N/A

    Value range

    The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).

    Default value

    N/A

    Table 2 Query parameters

    Parameter

    Mandatory

    Type

    Description

    job_name

    No

    String

    Explanation

    Job name.

    Constraints

    N/A

    Value range

    The value can contain 1 to 128 characters, including only letters, digits, underscores (_), and hyphens (-).

    Default value

    N/A

    job_id

    No

    String

    Explanation

    Job ID.

    Constraints

    N/A

    Value range

    The value can contain 1 to 64 characters, including only letters, digits, and hyphens (-).

    Default value

    N/A

    user

    No

    String

    Explanation

    Username.

    Constraints

    N/A

    Value range

    The value can contain 1 to 32 characters, including only letters, digits, hyphens (-), underscores (_), and periods (.), and cannot start with a digit.

    Default value

    N/A

    job_type

    No

    String

    Explanation

    Job type.

    Constraints

    N/A

    Value range

    • MapReduce
    • SparkSubmit: Select this value when you call an API to query SparkPython jobs.
    • HiveScript
    • HiveSql
    • DistCp: imports and exports data.
    • SparkScript
    • SparkSql
    • Flink

    Default value

    N/A

    job_state

    No

    String

    Explanation

    Job execution status.

    Constraints

    N/A

    Value range

    • FAILED: indicates that the job fails to be executed.
    • KILLED: indicates that the job is terminated.
    • NEW: indicates that the job is created.
    • NEW_SAVING: indicates that the job has been created and is being saved.
    • SUBMITTED: indicates that the job is submitted.
    • ACCEPTED: indicates that the job is accepted.
    • RUNNING: indicates that the job is running.
    • FINISHED: indicates that the job is completed.

    Default value

    N/A

    job_result

    No

    String

    Explanation

    Job execution result.

    Constraints

    N/A

    Value range

    • FAILED: indicates that the job fails to be executed.
    • KILLED: indicates that the job is manually terminated during execution.
    • UNDEFINED: indicates that the job is being executed.
    • SUCCEEDED: indicates that the job has been successfully executed.

    Default value

    N/A

    queue

    No

    String

    Explanation

    Resource queue type of a job.

    Constraints

    N/A

    Value range

    The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).

    Default value

    N/A

    limit

    No

    String

    Explanation

    Number of records displayed on each page in the returned result.

    Constraints

    N/A

    Value range

    N/A

    Default value

    10

    offset

    No

    String

    Explanation

    Offset from which the job list starts to be queried.

    Constraints

    N/A

    Value range

    N/A

    Default value

    1

    sort_by

    No

    String

    Explanation

    Sorting method of the returned result.

    Constraints

    N/A

    Value range

    • asc: indicates that the returned results are sorted in ascending order.
    • desc: indicates that the returned results are sorted in descending order.

    Default value

    desc

    submitted_time_begin

    No

    Long

    Explanation

    UTC timestamp after which a job is submitted, in milliseconds, for example, 1562032041362.

    Constraints

    N/A

    Value range

    N/A

    Default value

    N/A

    submitted_time_end

    No

    Long

    Explanation

    UTC timestamp before which a job is submitted, in milliseconds, for example, 1562032041362.

    Constraints

    N/A

    Value range

    N/A

    Default value

    N/A
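As a quick illustration, the URI format and the Table 2 query parameters can be assembled into a request URL. The endpoint host, project ID, and cluster ID below are placeholders, not values from this document; authentication headers are omitted:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own region endpoint, project ID,
# and cluster ID (see "Obtaining a Project ID" / "Obtaining a Cluster ID").
ENDPOINT = "https://mrs.example-region.example.com"
project_id = "0123456789abcdef0123456789abcdef"
cluster_id = "2b460e01-3351-4170-b0a7-57b9dd5ffef3"

# Optional filters from Table 2. submitted_time_begin is a UTC timestamp
# in milliseconds; limit/offset control pagination (defaults: 10 and 1).
params = {
    "job_type": "SparkSubmit",
    "job_state": "RUNNING",
    "limit": 20,
    "offset": 1,
    "sort_by": "desc",
    "submitted_time_begin": 1562032041362,
}

url = (f"{ENDPOINT}/v2/{project_id}/clusters/{cluster_id}"
       f"/job-executions?{urlencode(params)}")
print(url)
```

Every filter is optional; omitting `params` entirely returns the first page of all jobs in the cluster.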

Request Parameters

None

Response Parameters

Status code: 202

Table 3 Response body parameters

Parameter

Type

Description

total_record

Integer

Explanation

Total number of records.

Value range

N/A

job_list

Array of JobQueryBean objects

Explanation

The job list. For details about the parameters, see Table 4.

Table 4 JobQueryBean

Parameter

Type

Description

job_id

String

Explanation

Job ID.

Value range

N/A

user

String

Explanation

Name of the user who submits the job.

Value range

N/A

job_name

String

Explanation

Job name.

Value range

N/A

job_result

String

Explanation

Final result of a job.

Value range

  • FAILED: indicates that the job fails to be executed.
  • KILLED: indicates that the job is manually terminated during execution.
  • UNDEFINED: indicates that the job is being executed.
  • SUCCEEDED: indicates that the job has been successfully executed.

job_state

String

Explanation

Job execution status.

Value range

  • FAILED: indicates that the job fails to be executed.
  • KILLED: indicates that the job is terminated.
  • NEW: indicates that the job is created.
  • NEW_SAVING: indicates that the job has been created and is being saved.
  • SUBMITTED: indicates that the job is submitted.
  • ACCEPTED: indicates that the job is accepted.
  • RUNNING: indicates that the job is running.
  • FINISHED: indicates that the job is completed.

job_progress

Float

Explanation

Job execution progress.

Value range

N/A

job_type

String

Explanation

Job type.

Value range

  • MapReduce
  • SparkSubmit: Select this value when you call an API to query SparkPython jobs.
  • HiveScript
  • HiveSql
  • DistCp: imports and exports data.
  • SparkScript
  • SparkSql
  • Flink

started_time

Long

Explanation

Time when a job starts to execute. Unit: milliseconds

Value range

N/A

submitted_time

Long

Explanation

Time when a job is submitted. Unit: milliseconds

Value range

N/A

finished_time

Long

Explanation

Time when a job was completed. Unit: milliseconds

Value range

N/A

elapsed_time

Long

Explanation

Running duration of a job. Unit: milliseconds

Value range

N/A

arguments

String

Explanation

Running parameter.

Value range

N/A

properties

String

Explanation

Configuration parameter, which is used to configure -d parameters.

Value range

N/A

launcher_id

String

Explanation

Actual job ID.

Value range

N/A

app_id

String

Explanation

Actual job ID.

Value range

N/A

tracking_url

String

Explanation

URL for accessing job logs. Currently, only SparkSubmit jobs support this parameter. The URL reaches the Yarn web UI through the EIP bound to the cluster. If the EIP is unbound from the cluster on the VPC console, MRS service data may not be updated in time and the access fails. In that case, bind the EIP to the cluster again to rectify the fault.

Value range

N/A

queue

String

Explanation

Resource queue type of a job.

Value range

N/A

Status code: 500

Table 5 Response body parameters

Parameter

Type

Description

error_code

String

Explanation

Error code.

Value range

N/A

error_msg

String

Explanation

Error message.

Value range

N/A

Example Response

Status code: 202

The job list is queried successfully.

{
  "total_record" : 2,
  "job_list" : [ {
    "job_id" : "981374c1-85da-44ee-be32-edfb4fba776c",
    "user" : "xxxx",
    "job_name" : "SparkSubmitTest",
    "job_result" : "UNDEFINED",
    "job_state" : "ACCEPTED",
    "job_progress" : 0,
    "job_type" : "SparkSubmit",
    "started_time" : 0,
    "submitted_time" : 1564714763119,
    "finished_time" : 0,
    "elapsed_time" : 0,
    "queue" : "default",
    "arguments" : "[--class, --driver-memory, --executor-cores, --master, yarn-cluster, s3a://obs-test/hadoop-mapreduce-examples-3.1.1.jar, dddd]",
    "launcher_id" : "application_1564622673393_0613",
    "properties" : { }
  }, {
    "job_id" : "c54c8aa0-c277-4f83-8acc-521d85cfa32b",
    "user" : "xxxx",
    "job_name" : "SparkSubmitTest2",
    "job_result" : "UNDEFINED",
    "job_state" : "ACCEPTED",
    "job_progress" : 0,
    "job_type" : "SparkSubmit",
    "started_time" : 0,
    "submitted_time" : 1564714020099,
    "finished_time" : 0,
    "elapsed_time" : 0,
    "queue" : "default",
    "arguments" : "[--conf, yujjsjhe, --driver-memory, yueujdjjd, --master,\nyarn-cluster,\ns3a://obs-test/hadoop-mapreduce-examples-3.1.1.jar]",
    "launcher_id" : "application_1564622673393_0611",
    "properties" : { }
  } ]
}
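A client consumes the 202 body through total_record and job_list. The sketch below parses a trimmed copy of the example response above (field names are from Table 4) and collects jobs whose result is still UNDEFINED, i.e. jobs worth polling again:

```python
import json

# Trimmed copy of the 202 example body above, limited to the fields used here.
response_body = """
{
  "total_record": 1,
  "job_list": [
    {
      "job_id": "981374c1-85da-44ee-be32-edfb4fba776c",
      "job_name": "SparkSubmitTest",
      "job_state": "ACCEPTED",
      "job_result": "UNDEFINED",
      "submitted_time": 1564714763119
    }
  ]
}
"""

data = json.loads(response_body)
# Per Table 4, UNDEFINED means the job is still being executed.
pending = [job["job_id"] for job in data["job_list"]
           if job["job_result"] == "UNDEFINED"]
print(data["total_record"], pending)
```

When total_record exceeds the page size, repeat the request with an increased offset to walk the remaining pages.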

Status code: 500

Failed to query a list of jobs.

{
  "error_code" : "0166",
  "error_msg" : "Failed to query the job list."
}
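For error handling, a caller can branch on the status code and surface the Table 5 fields. A minimal sketch, assuming status and body come from your HTTP client:

```python
import json

# status and body are assumed to be returned by your HTTP client.
status = 500
body = '{ "error_code" : "0166", "error_msg" : "Failed to query the job list." }'

if status == 202:
    jobs = json.loads(body)["job_list"]
else:
    # Per Table 5, a 500 response carries error_code and error_msg.
    err = json.loads(body)
    message = f"Job list query failed ({err['error_code']}): {err['error_msg']}"
    print(message)
```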

Status Codes

See Status Codes.

Error Codes

See Error Codes.