Updated on 2026-01-04 GMT+08:00

Adding and Executing a Job - CreateExecuteJob

Function

This API is used to add and submit a job in an MRS cluster.

  • If you want to use the OBS encryption function, follow the instructions in Using OBS to Encrypt Data for Running Jobs to configure the related information before calling this API to run a job.
  • On the Dashboard tab of the cluster details page, click Click to synchronize on the right side of IAM User Sync to synchronize IAM users, and then submit a job through this API.

Constraints

None

Debugging

You can debug this API in API Explorer, which supports automatic authentication. API Explorer can automatically generate sample SDK code and supports debugging of the sample code.

Authorization Information

An account has all the permissions required to call all APIs, but IAM users must be granted the required permissions before they can call this API.

  • If you are using role/policy-based authorization, see Permissions Policies and Supported Actions for details on the required permissions.
  • If you are using identity policy-based authorization, the following identity policy-based permissions are required.

    Action: mrs:cluster:createJob
    Access Level: Write
    Resource Type (*: required): cluster *
    Condition Key:
      • g:ResourceTag/<tag-key>
      • g:EnterpriseProjectId
    Alias: mrs:job:submit
    Dependency: -

URI

POST /v2/{project_id}/clusters/{cluster_id}/job-executions
Table 1 URI parameters

Parameter

Mandatory

Type

Description

project_id

Yes

String

Explanation

Project ID. For details about how to obtain the project ID, see Obtaining a Project ID.

Constraints

N/A

Value range

The value must consist of 1 to 64 characters. Only letters and digits are allowed.

Default value

N/A

cluster_id

Yes

String

Explanation

Cluster ID. For details on how to obtain the cluster ID, see Obtaining a Cluster ID.

Constraints

N/A

Value range

The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).

Default value

N/A

Request Parameters

Table 2 Request parameters

Parameter

Mandatory

Type

Description

job_type

Yes

String

Explanation

The job type.

Constraints

N/A

Value range

  • MapReduce: provides a distributed data processing model and execution environment capable of rapidly handling large-scale data in parallel. With MRS, you can submit MapReduce JAR programs.

  • SparkSubmit: allows you to submit Spark JAR and Spark Python programs and run Spark applications to compute and process user data.

  • SparkPython: SparkPython jobs are converted to SparkSubmit jobs for submission. On the MRS console, the job type is displayed as SparkSubmit. When calling an API to query the job list, select SparkSubmit.

  • HiveScript: Hive is an open-source data warehouse that runs on Hadoop. With MRS, you can submit HiveScript scripts for execution.

  • HiveSql: Hive is an open-source data warehouse that runs on Hadoop. With MRS, you can directly execute Hive SQL statements.

  • DistCp: a Hadoop tool used to efficiently import and export data between distributed file systems (such as HDFS).

  • SparkScript: allows you to submit SparkScript scripts and batch execute Spark SQL statements.

  • SparkSql: allows you to use SQL-like statements provided by Spark to query and analyze user data in real time.

  • Flink: a distributed big data processing engine that can perform stateful computations over both unbounded and bounded data streams.

Default value

N/A

job_name

Yes

String

Explanation

Job name.

Constraints

N/A

Value range

The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).

Identical job names are allowed but not recommended.

Default value

N/A

arguments

No

Array of strings

Explanation

Key parameters for program execution. The parameters are specified by the internal functions of the user's program; MRS only loads them and passes them to the program.

Constraints

The value can contain a maximum of 150,000 characters. Special characters (;|&>'<$!\\) are not allowed. This parameter can be left blank.

NOTE:
  • When entering parameters with sensitive information, such as a login password, be aware that these values may appear in job details or logs. Exercise caution when using such inputs.
  • For MRS 1.9.2 or later, a file path on OBS can start with obs://. To use this format to submit HiveScript or HiveSql jobs, choose Components > Hive > Service Configuration on the cluster details page, set Type to All, and search for core.site.customized.configs. Add the OBS endpoint configuration item fs.obs.endpoint and enter the OBS endpoint in Value. For details, see Endpoints.
  • For MRS 3.x or later, a file path on OBS can start with obs://. To use this format to submit HiveScript or HiveSql jobs, choose Components > Hive > Service Configuration on the cluster details page, set Basic to All, and search for core.site.customized.configs. Add the OBS endpoint configuration item fs.obs.endpoint and enter the OBS endpoint in Value. For details, see Endpoints.

Value range

N/A

Default value

N/A

properties

No

Map<String,String>

Explanation

Program system parameter.

Constraints

The value can contain a maximum of 2,048 characters. Special characters (><|'`&!\) are not allowed. This parameter can be left blank.

Value range

N/A

Default value

N/A

Response Parameters

Status code: 200

Table 3 Response body parameter

Parameter

Type

Description

job_submit_result

JobSubmitResult object

Explanation

The job execution result. For details about the parameters, see Table 4.

Constraints

N/A

Value range

N/A

Default value

N/A

Table 4 JobSubmitResult parameters

Parameter

Type

Description

job_id

String

Explanation

Job ID

Constraints

N/A

Value range

N/A

Default value

N/A

state

String

Explanation

Job submission status.

Constraints

N/A

Value range

  • COMPLETE: The job is submitted successfully.
  • FAILED: Failed to submit the job.

Default value

N/A

Status code: 400

Table 5 Response body parameters

Parameter

Type

Description

error_code

String

Explanation

Error code.

Constraints

N/A

Value range

N/A

Default value

N/A

error_msg

String

Explanation

Error message.

Constraints

N/A

Value range

N/A

Default value

N/A

Example Request

Before submitting a request, you must have prepared the OBS paths, sample files, endpoints, and AK/SK pairs. A Python sketch of sending one of these requests programmatically is provided after the examples below.

  • Create a MapReduce job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"MapReduceTest",
        "job_type":"MapReduce",
        "arguments":[
            "obs://obs-test/program/hadoop-mapreduce-examples-x.x.x.jar",
            "wordcount",
            "obs://obs-test/input/",
            "obs://obs-test/job/mapreduce/output"
        ],
        "properties":{
            "fs.obs.endpoint":"obs endpoint",
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    }
  • Create a SparkSubmit job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"SparkSubmitTest",
        "job_type":"SparkSubmit",
        "arguments":[
            "--master",
            "yarn",
            "--deploy-mode",
            "cluster",
            "--py-files",
            "obs://obs-test/a.py",
            "--conf",
            "spark.yarn.appMasterEnv.PYTHONPATH=/tmp:$PYTHONPATH",
            "--conf",
            "spark.yarn.appMasterEnv.aaa=aaaa",
            "--conf",
            "spark.executorEnv.aaa=executoraaa",
            "--properties-file",
            "obs://obs-test/test-spark.conf",
            "obs://obs-test/pi.py",
            "100000"
        ],
        "properties":{
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    }
  • Create a HiveScript job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"HiveScriptTest",
        "job_type":"HiveScript",
        "arguments":[
            "obs://obs-test/sql/test_script.sql"
        ],
        "properties":{
            "fs.obs.endpoint":"obs endpoint",
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    }
  • Create a HiveSql job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
      "job_name" : "HiveSqlTest",
      "job_type" : "HiveSql",
      "arguments" : [ "DROP TABLE IF EXISTS src_wordcount;\ncreate external table src_wordcount(line string) row format delimited fields terminated by \"\\n\" stored as textfile location \"obs://donotdel-gxc/input/\";\ninsert into src_wordcount values(\"v1\")" ],
      "properties" : {
        "fs.obs.endpoint" : "obs endpoint",
        "fs.obs.access.key" : "xxx",
        "fs.obs.secret.key" : "yyy"
      }
    }
  • Create a DistCp job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"DistCpTest",
        "job_type":"DistCp",
        "arguments":[
            "obs://obs-test/DistcpJob/",
            "/user/test/sparksql/"
        ],
        "properties":{
            "fs.obs.endpoint":"obs endpoint",
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    }
  • Create a SparkScript job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"SparkScriptTest",
        "job_type":"SparkScript",
        "arguments":[
            "op-key1",
            "op-value1",
            "op-key2",
            "op-value2",
            "obs://obs-test/sql/test_script.sql"
        ],
        "properties":{
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    } 
  • Create a SparkSql job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"SparkSqlTest",
        "job_type":"SparkSql",
        "arguments":[
            "op-key1",
            "op-value1",
            "op-key2",
            "op-value2",
            "create table student_info3 (id string,name string,gender string,age int,addr string);"
        ],
        "properties":{
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    } 
  • Create a Flink job.
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
        "job_name":"FlinkTest",
        "job_type":"Flink",
        "arguments":[
            "run",
            "-d",
            "-ynm",
            "testExcutorejobhdfsbatch",
            "-m",
            "yarn-cluster",
            "hdfs://test/examples/batch/WordCount.jar"
        ],
        "properties":{
            "fs.obs.endpoint":"obs endpoint",
            "fs.obs.access.key":"xxx",
            "fs.obs.secret.key":"yyy"
        }
    }
  • Create a SparkPython job (Jobs of this type will be converted to SparkSubmit jobs for submission. The job type is displayed as SparkSubmit on the MRS console. Select SparkSubmit when you call an API to query the job list.)
    POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions
    
    {
      "job_name" : "SparkPythonTest",
      "job_type" : "SparkPython",
      "arguments" : [ "--master", "yarn", "--deploy-mode", "cluster", "--py-files", "obs://obs-test/a.py", "--conf", "spark.yarn.appMasterEnv.PYTHONPATH=/tmp:$PYTHONPATH", "--conf", "spark.yarn.appMasterEnv.aaa=aaaa", "--conf", "spark.executorEnv.aaa=executoraaa", "--properties-file", "obs://obs-test/test-spark.conf", "obs://obs-test/pi.py", "100000" ],
      "properties" : {
        "fs.obs.access.key" : "xxx",
        "fs.obs.secret.key" : "yyy"
      }
    }
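
The following is a minimal Python sketch of how one of the request bodies above could be submitted programmatically. It is an illustration only: it assumes the third-party requests library is available and that the request is authenticated with an IAM user token passed in an X-Auth-Token header; the endpoint, project ID, cluster ID, and token values are placeholders.

    # Minimal sketch: submit the MapReduce example job through this API.
    # Assumptions: the "requests" library is installed, and the API accepts an
    # IAM user token in the X-Auth-Token header; replace all placeholder values.
    import requests

    endpoint = "{endpoint}"        # management endpoint of the target region
    project_id = "{project_id}"    # see Obtaining a Project ID
    cluster_id = "{cluster_id}"    # see Obtaining a Cluster ID
    token = "{token}"              # IAM user token (assumed X-Auth-Token authentication)

    url = f"https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions"

    body = {
        "job_name": "MapReduceTest",
        "job_type": "MapReduce",
        "arguments": [
            "obs://obs-test/program/hadoop-mapreduce-examples-x.x.x.jar",
            "wordcount",
            "obs://obs-test/input/",
            "obs://obs-test/job/mapreduce/output"
        ],
        "properties": {
            "fs.obs.endpoint": "obs endpoint",
            "fs.obs.access.key": "xxx",
            "fs.obs.secret.key": "yyy"
        }
    }

    response = requests.post(
        url,
        json=body,
        headers={"X-Auth-Token": token, "Content-Type": "application/json"}
    )
    print(response.status_code)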

Example Response

Status code: 200

  • Example of a successful response
    {
      "job_submit_result":{
          "job_id":"44b37a20-ffe8-42b1-b42b-78a5978d7e40",
          "state":"COMPLETE"
      }
    }

Status code: 400

  • Example of a failed response
    {
        "error_msg": "Hive jobs cannot be submitted.",
        "error_code": "0168"
    }
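
Continuing the Python sketch above, a client could branch on the status code and parse the documented response fields as follows (again an illustrative sketch, not part of the API definition):

    # Sketch of handling the responses shown above (continues the previous snippet).
    if response.status_code == 200:
        result = response.json()["job_submit_result"]
        # state is COMPLETE when the job was submitted successfully, FAILED otherwise.
        print("job_id:", result["job_id"], "state:", result["state"])
    else:
        # For example, status code 400 returns error_code and error_msg.
        error = response.json()
        print("error_code:", error.get("error_code"), "error_msg:", error.get("error_msg"))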

Status Codes

See Status Codes.

Error Codes

See Error Codes.