Adding and Executing a Job
Function
This API is used to add and submit a job in an MRS cluster.
- If you want to use the OBS encryption function, follow the instructions in Using OBS to Encrypt Data for Running Jobs to configure the related information before calling this API to run a job.
- On the Dashboard tab of the cluster details page, click Click to synchronize on the right side of IAM User Sync to synchronize IAM users, and then submit jobs through this API.
Constraints
None
Debugging
You can debug this API in API Explorer, which supports automatic authentication. API Explorer can automatically generate SDK code samples and supports debugging them.
URI
POST /v2/{project_id}/clusters/{cluster_id}/job-executions
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Explanation: Project ID. For details about how to obtain it, see Obtaining a Project ID. Constraints: N/A. Value range: 1 to 64 characters; only letters and digits are allowed. Default value: N/A. |
| cluster_id | Yes | String | Explanation: Cluster ID. For details about how to obtain it, see Obtaining a Cluster ID. Constraints: N/A. Value range: 1 to 64 characters; only letters, digits, underscores (_), and hyphens (-) are allowed. Default value: N/A. |
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| job_type | Yes | String | Explanation: Job type. Constraints: N/A. Value range: MapReduce, SparkSubmit, SparkPython, HiveScript, HiveSql, DistCp, SparkScript, SparkSql, Flink. Default value: N/A. NOTE: Spark, Hive, and Flink jobs can be added only to clusters that include the corresponding components. |
| job_name | Yes | String | Explanation: Job name. Constraints: N/A. Value range: 1 to 64 characters; only letters, digits, underscores (_), and hyphens (-) are allowed. Identical job names are allowed but not recommended. Default value: N/A. |
| arguments | No | Array of strings | Explanation: Key parameters for program execution. They are defined by the user's program; MRS only passes them through to the program. Constraints: a maximum of 150,000 characters in total; the special characters ;\|&>'<$!\ are not allowed. This parameter can be left blank. |
| properties | No | Map<String,String> | Explanation: Program system parameters. Constraints: a maximum of 2,048 characters; special characters such as ><\|'`&!\ are not allowed. This parameter can be left blank. |
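For reference, the following is a minimal sketch of submitting such a request programmatically. It assumes Python with the requests library, token-based authentication through the X-Auth-Token header, and placeholder values for the endpoint, project ID, cluster ID, and token; none of these placeholder values come from this document.

import requests

# Placeholders -- substitute your own values (assumptions for illustration).
ENDPOINT = "https://mrs.example.com"   # hypothetical MRS API endpoint
PROJECT_ID = "your_project_id"
CLUSTER_ID = "your_cluster_id"
TOKEN = "your_iam_token"               # IAM token used for authentication

url = f"{ENDPOINT}/v2/{PROJECT_ID}/clusters/{CLUSTER_ID}/job-executions"

body = {
    "job_name": "MapReduceTest",
    "job_type": "MapReduce",
    # arguments are passed through to the program; they must not contain
    # the special characters ;|&>'<$!\ listed in the table above
    "arguments": [
        "obs://obs-test/program/hadoop-mapreduce-examples-x.x.x.jar",
        "wordcount",
        "obs://obs-test/input/",
        "obs://obs-test/job/mapreduce/output",
    ],
    # properties hold program system parameters such as the OBS settings
    "properties": {
        "fs.obs.endpoint": "obs endpoint",
        "fs.obs.access.key": "xxx",
        "fs.obs.secret.key": "yyy",
    },
}

resp = requests.post(url, json=body, headers={"X-Auth-Token": TOKEN})
print(resp.status_code, resp.json())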
Response Parameters
Status code: 200
| Parameter | Type | Description |
|---|---|---|
| job_submit_result | JobSubmitResult object | Explanation: Job execution result. For details about the parameters, see the following table. |

JobSubmitResult

| Parameter | Type | Description |
|---|---|---|
| job_id | String | Explanation: Job ID. Value range: N/A. |
| state | String | Explanation: Job submission status. Value range: COMPLETE indicates that the job is submitted successfully (see the example response). |
Status code: 400
| Parameter | Type | Description |
|---|---|---|
| error_code | String | Explanation: Error code. Value range: N/A. |
| error_msg | String | Explanation: Error message. Value range: N/A. |
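A client can distinguish the two response shapes above by status code. The sketch below assumes resp is the requests response object from the submission sketch shown earlier; it is not an official SDK helper.

def handle_job_submit_response(resp):
    """Interpret a response of this API (sketch, based on the tables above)."""
    payload = resp.json()
    if resp.status_code == 200:
        result = payload["job_submit_result"]
        # job_id identifies the submitted job; state is the submission status
        return result["job_id"], result["state"]
    # On 400, the body carries error_code and error_msg instead
    raise RuntimeError(
        f"job submission failed: {payload.get('error_code')}: "
        f"{payload.get('error_msg')}"
    )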
Example Request
Before submitting a request, you must have prepared the required OBS paths, sample files, endpoints, and AKs/SKs.
- Create a MapReduce job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "MapReduceTest",
  "job_type": "MapReduce",
  "arguments": [
    "obs://obs-test/program/hadoop-mapreduce-examples-x.x.x.jar",
    "wordcount",
    "obs://obs-test/input/",
    "obs://obs-test/job/mapreduce/output"
  ],
  "properties": {
    "fs.obs.endpoint": "obs endpoint",
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a SparkSubmit job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "SparkSubmitTest",
  "job_type": "SparkSubmit",
  "arguments": [
    "--master", "yarn",
    "--deploy-mode", "cluster",
    "--py-files", "obs://obs-test/a.py",
    "--conf", "spark.yarn.appMasterEnv.PYTHONPATH=/tmp:$PYTHONPATH",
    "--conf", "spark.yarn.appMasterEnv.aaa=aaaa",
    "--conf", "spark.executorEnv.aaa=executoraaa",
    "--properties-file", "obs://obs-test/test-spark.conf",
    "obs://obs-test/pi.py",
    "100000"
  ],
  "properties": {
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a HiveScript job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "HiveScriptTest",
  "job_type": "HiveScript",
  "arguments": [
    "obs://obs-test/sql/test_script.sql"
  ],
  "properties": {
    "fs.obs.endpoint": "obs endpoint",
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a HiveSql job. (The SQL script is passed as a single escaped argument string; see the escaping sketch after these examples.)
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "HiveSqlTest",
  "job_type": "HiveSql",
  "arguments": [
    "DROP TABLE IF EXISTS src_wordcount;\ncreate external table src_wordcount(line string) row format delimited fields terminated by \"\\n\" stored as textfile location \"obs://donotdel-gxc/input/\";\ninsert into src_wordcount values(\"v1\")"
  ],
  "properties": {
    "fs.obs.endpoint": "obs endpoint",
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a DistCp job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "DistCpTest",
  "job_type": "DistCp",
  "arguments": [
    "obs://obs-test/DistcpJob/",
    "/user/test/sparksql/"
  ],
  "properties": {
    "fs.obs.endpoint": "obs endpoint",
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a SparkScript job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "SparkScriptTest",
  "job_type": "SparkScript",
  "arguments": [
    "op-key1", "op-value1",
    "op-key2", "op-value2",
    "obs://obs-test/sql/test_script.sql"
  ],
  "properties": {
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a SparkSql job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "SparkSqlTest",
  "job_type": "SparkSql",
  "arguments": [
    "op-key1", "op-value1",
    "op-key2", "op-value2",
    "create table student_info3 (id string,name string,gender string,age int,addr string);"
  ],
  "properties": {
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a Flink job.
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "FlinkTest",
  "job_type": "Flink",
  "arguments": [
    "run",
    "-d",
    "-ynm", "testExcutorejobhdfsbatch",
    "-m", "yarn-cluster",
    "hdfs://test/examples/batch/WordCount.jar"
  ],
  "properties": {
    "fs.obs.endpoint": "obs endpoint",
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
- Create a SparkPython job. (Jobs of this type are converted to SparkSubmit jobs for submission. The job type is displayed as SparkSubmit on the MRS console, and you should select SparkSubmit when calling the API to query the job list.)
POST https://{endpoint}/v2/{project_id}/clusters/{cluster_id}/job-executions

{
  "job_name": "SparkPythonTest",
  "job_type": "SparkPython",
  "arguments": [
    "--master", "yarn",
    "--deploy-mode", "cluster",
    "--py-files", "obs://obs-test/a.py",
    "--conf", "spark.yarn.appMasterEnv.PYTHONPATH=/tmp:$PYTHONPATH",
    "--conf", "spark.yarn.appMasterEnv.aaa=aaaa",
    "--conf", "spark.executorEnv.aaa=executoraaa",
    "--properties-file", "obs://obs-test/test-spark.conf",
    "obs://obs-test/pi.py",
    "100000"
  ],
  "properties": {
    "fs.obs.access.key": "xxx",
    "fs.obs.secret.key": "yyy"
  }
}
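The HiveSql example above passes a multi-line SQL script as a single argument string, with the newlines and quotation marks escaped by hand. As a sketch of how a client might build that argument programmatically, Python's json module performs the same escaping automatically; the SQL text below is taken from the HiveSql example.

import json

# Write the SQL naturally; json.dumps produces the escaped form shown
# in the HiveSql request example above.
sql = "\n".join([
    "DROP TABLE IF EXISTS src_wordcount;",
    "create external table src_wordcount(line string) "
    'row format delimited fields terminated by "\\n" '
    'stored as textfile location "obs://donotdel-gxc/input/";',
    'insert into src_wordcount values("v1")',
])

body = {
    "job_name": "HiveSqlTest",
    "job_type": "HiveSql",
    "arguments": [sql],
}
print(json.dumps(body, indent=2))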
Example Response
Status code: 200
- Example of a successful response
{ "job_submit_result":{ "job_id":"44b37a20-ffe8-42b1-b42b-78a5978d7e40", "state":"COMPLETE" } }
Status code: 400
- Example of a failed response
{ "error_msg": Hive jobs cannot be submitted. "error_code":"0168" }
Status Codes
See Status Codes.
Error Codes
See Error Codes.