Adding and Executing a Job (Deprecated)

Updated on 2024-12-10 GMT+08:00

Function

This API is used to add a job to an MRS cluster and execute the job. This API is incompatible with Sahara.

URI

  • Format

    POST /v1.1/{project_id}/jobs/submit-job

  • Parameter description

    Table 1 URI parameter

    Parameter: project_id
    Mandatory: Yes
    Description: The project ID. For details about how to obtain the project ID, see Obtaining a Project ID.

Request Parameters

Table 2 Request body parameters

Parameter

Mandatory

Type

Description

job_name

Yes

String

Explanation

Job name

Constraints

N/A

Value range

A job name can contain 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

Identical job names are allowed but not recommended.

Default value

N/A

cluster_id

Yes

String

Explanation

Cluster ID

Constraints

N/A

Value range

N/A

Default value

N/A

jar_path

No

String

Explanation

Path of the .jar file or .sql file to be executed.

Constraints

The jar_path parameter is mandatory for MapReduce or Spark jobs.

Value range

  • The path can contain a maximum of 1,023 characters, excluding special characters such as ;|&><'$. The address cannot be empty or full of spaces.
  • This path must start with "/" or "s3a://". The OBS path does not support files or programs encrypted by KMS.
  • Spark Script must end with .sql while MapReduce and Spark JAR must end with .jar. sql and jar are case-insensitive.

Default value

N/A

input

No

String

Explanation

Address for inputting data

Constraints

N/A

Value range

This path must start with "/" or "s3a://". Set this parameter to a correct OBS path. The OBS path does not support files or programs encrypted by KMS.

The value can contain a maximum of 1,023 characters, excluding special characters such as ;|&>'<$, and can be left blank.

Default value

N/A

output

No

String

Explanation

Address for outputting data

Constraints

N/A

Value range

This path must start with "/" or "s3a://". A correct OBS path is required. If the path does not exist, the system automatically creates it.

The value can contain a maximum of 1,023 characters and can be left blank. Special characters such as ;|&>'<$ are not allowed.

Default value

N/A

job_log

No

String

Explanation

Path for storing job logs that record job status.

Constraints

N/A

Value range

This path must start with "/" or "s3a://".

The value can contain a maximum of 1,023 characters and can be left blank. Special characters such as ;|&>'<$ are not allowed.

Default value

N/A

job_type

Yes

Integer

Explanation

Job type code

Constraints

N/A

Value range

  • 1: MapReduce
  • 2: Spark
  • 3: Hive Script
  • 4: HiveSQL (not supported currently)
  • 5: DistCp, importing and exporting data
  • 6: Spark Script
  • 7: Spark SQL, submitting SQL statements (not supported currently)

Default value

N/A

file_action

No

String

Explanation

File operation type.

Constraints

N/A

Value range

  • export: Export data from HDFS to OBS.
  • import: Import data from OBS to HDFS.

Default value

N/A

arguments

No

String

Explanation

Key parameters for program execution.

Constraints

This parameter is consumed by a function in your program; MRS only passes it through.

Value range

The value can contain a maximum of 150,000 characters. Special characters (;|&>'<$!"\) are not allowed. This parameter can be left blank. When a parameter contains sensitive information (for example, a login password), add an at sign (@) before the parameter name to encrypt its value and prevent the sensitive information from being persisted in plaintext. Sensitive information is then displayed as asterisks (*) when you view job information on the MRS console. Example: username=xxx @password=yyy

Default value

N/A

hql

No

String

Explanation

Spark SQL statement.

Constraints

The statement must be Base64-encoded using the standard alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/. The value of the hql parameter is the encoded string with one arbitrary letter prepended; the backend strips that letter and decodes the remainder to recover the Spark SQL statement. Example:

  1. Enter the Spark SQL statement show tables;.
  2. Base64-encode it to obtain the string c2hvdyB0YWJsZXM7.
  3. Prefix the string with a random letter (for example, g) to get gc2hvdyB0YWJsZXM7.
  4. The backend automatically strips the g and decodes the remainder, recovering show tables;.
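The encoding steps above can be sketched in Python. This is a minimal illustration; the helper names encode_hql and decode_hql are not part of the API, and decode_hql only mirrors what the backend is described as doing.

```python
import base64
import random
import string

def encode_hql(statement: str) -> str:
    # Base64-encode the SQL statement with the standard alphabet.
    encoded = base64.b64encode(statement.encode("utf-8")).decode("ascii")
    # Prepend one arbitrary letter; the backend strips it before decoding.
    return random.choice(string.ascii_letters) + encoded

def decode_hql(value: str) -> str:
    # Mirror of the described backend behavior: drop the first character,
    # then Base64-decode the remainder.
    return base64.b64decode(value[1:]).decode("utf-8")
```

For example, encode_hql("show tables;") yields a value such as gc2hvdyB0YWJsZXM7, depending on the random prefix letter.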

Value range

N/A

Default value

N/A

hive_script_path

No

String

Explanation

Path of the SQL program.

Constraints

This parameter is required only for Spark Script and Hive Script jobs.

Value range

  • The path can contain a maximum of 1,023 characters, excluding special characters such as ;|&><'$. The address cannot be empty or full of spaces.
  • The path must start with / or s3a://. The OBS path does not support files or programs encrypted by KMS.
  • The path must end with .sql. sql is case-insensitive.

Default value

N/A

Response Parameters

Status code: 200

Table 3 Response body parameter

Parameter

Type

Description

templated

Boolean

Explanation

Whether job execution objects are generated by job templates.

Constraints

N/A

Value range

N/A

Default value

N/A

created_at

Long

Explanation

Creation time, which is a 10-digit timestamp.

Constraints

N/A

Value range

N/A

Default value

N/A

updated_at

Long

Explanation

Update time, which is a 10-digit timestamp.

Constraints

N/A

Value range

N/A

Default value

N/A

id

String

Explanation

The job ID.

Constraints

N/A

Value range

N/A

Default value

N/A

tenant_id

String

Explanation

The project ID. For details about how to obtain the project ID, see Obtaining a Project ID.

Constraints

N/A

Value range

N/A

Default value

N/A

job_id

String

Explanation

Job application ID

Constraints

N/A

Value range

N/A

Default value

N/A

job_name

String

Explanation

Job name

Constraints

N/A

Value range

A job name can contain 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

Identical job names are allowed but not recommended.

Default value

N/A

input_id

String

Explanation

Data input ID

Constraints

N/A

Value range

N/A

Default value

N/A

output_id

String

Explanation

Data output ID

Constraints

N/A

Value range

N/A

Default value

N/A

start_time

Long

Explanation

Start time of job execution, which is a 10-digit timestamp.

Constraints

N/A

Value range

N/A

Default value

N/A

end_time

Long

Explanation

End time of job execution, which is a 10-digit timestamp.

Constraints

N/A

Value range

N/A

Default value

N/A

cluster_id

String

Explanation

Cluster ID

Constraints

N/A

Value range

N/A

Default value

N/A

engine_job_id

String

Explanation

Workflow ID of Oozie

Constraints

N/A

Value range

N/A

Default value

N/A

return_code

String

Explanation

Returned code for an execution result

Constraints

N/A

Value range

N/A

Default value

N/A

is_public

Boolean

Explanation

Whether a job is public

Constraints

The current version does not support this function.

Value range

N/A

Default value

N/A

is_protected

Boolean

Explanation

Whether a job is protected

Constraints

The current version does not support this function.

Value range

N/A

Default value

N/A

group_id

String

Explanation

Group ID of a job

Constraints

N/A

Value range

N/A

Default value

N/A

jar_path

String

Explanation

Path of the .jar file or .sql file to be executed.

Constraints

N/A

Value range

  • The path can contain a maximum of 1,023 characters, excluding special characters such as ;|&><'$. The address cannot be empty or full of spaces.
  • This path must start with / or s3a://. The OBS path does not support files or programs encrypted by KMS.
  • Spark Script must end with .sql while MapReduce and Spark JAR must end with .jar. sql and jar are case-insensitive.

Default value

N/A

input

String

Explanation

Address for inputting data

Constraints

Set this parameter to a correct OBS path. The OBS path does not support files or programs encrypted by KMS.

Value range

This path must start with / or s3a://.

The value can contain a maximum of 1,023 characters and can be left blank. Special characters such as ;|&>'<$ are not allowed.

Default value

N/A

output

String

Explanation

Address for outputting data

Constraints

N/A

Value range

This path must start with / or s3a://. A correct OBS path is required. If the path does not exist, the system automatically creates it.

The value can contain a maximum of 1,023 characters and can be left blank. Special characters such as ;|&>'<$ are not allowed.

Default value

N/A

job_log

String

Explanation

Path for storing job logs that record job running status.

Constraints

N/A

Value range

This path must start with / or s3a://.

The value can contain a maximum of 1,023 characters and can be left blank. Special characters such as ;|&>'<$ are not allowed.

Default value

N/A

job_type

Integer

Explanation

Job type code

Constraints

Spark and Hive jobs can be added only to clusters that include the Spark and Hive components.

Value range

  • 1: MapReduce
  • 2: Spark
  • 3: Hive Script
  • 4: HiveSQL (not supported currently)
  • 5: DistCp, importing and exporting data
  • 6: Spark Script
  • 7: Spark SQL, submitting SQL statements (not supported currently)

Default value

N/A

file_action

String

Explanation

File operation type

Constraints

N/A

Value range

  • export: Export data from HDFS to OBS.
  • import: Import data from OBS to HDFS.

Default value

N/A

arguments

String

Explanation

Key parameters for program execution.

Constraints

This parameter is consumed by a function in your program; MRS only passes it through.

Value range

The value can contain a maximum of 150,000 characters. Special characters (;|&>'<$!"\) are not allowed. This parameter can be left blank. When a parameter contains sensitive information (for example, a login password), add an at sign (@) before the parameter name to encrypt its value and prevent the sensitive information from being persisted in plaintext. Sensitive information is then displayed as asterisks (*) when you view job information on the MRS console. For example: username=admin @password=***

Default value

N/A

hql

String

Explanation

Hive or Spark SQL statement.

Constraints

N/A

Value range

N/A

Default value

N/A

job_state

Integer

Explanation

Job status code

Constraints

N/A

Value range

  • -1: Terminated
  • 1: Starting
  • 2: Running
  • 3: Completed
  • 4: Abnormal
  • 5: Error

Default value

N/A

job_final_status

Integer

Explanation

Final job status

Constraints

N/A

Value range

  • 0: unfinished
  • 1: terminated due to an execution error
  • 2: executed successfully
  • 3: canceled

Default value

N/A

hive_script_path

String

Explanation

Path of the SQL program.

Constraints

This parameter is required only for Spark Script and Hive Script jobs.

Value range

  • The path can contain a maximum of 1,023 characters, excluding special characters such as ;|&><'$. The address cannot be empty or full of spaces.
  • The path must start with / or s3a://. The OBS path does not support files or programs encrypted by KMS.
  • The path must end with .sql. sql is case-insensitive.

Default value

N/A

create_by

String

Explanation

User ID for creating jobs

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

finished_step

Integer

Explanation

Number of completed steps

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

job_main_id

String

Explanation

Main ID of a job

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

job_step_id

String

Explanation

Step ID of a job

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

postpone_at

Long

Explanation

Delay time, which is a 10-digit timestamp.

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

step_name

String

Explanation

Step name of a job

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

step_num

Integer

Explanation

Number of steps

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

task_num

Integer

Explanation

Number of tasks

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

update_by

String

Explanation

User ID for updating jobs

Constraints

N/A

Value range

N/A

Default value

N/A

credentials

String

Explanation

Token, which is not supported in the current version.

Constraints

N/A

Value range

N/A

Default value

N/A

user_id

String

Explanation

User ID for creating jobs

This parameter is not used in the current version, but is retained for compatibility with earlier versions.

Constraints

N/A

Value range

N/A

Default value

N/A

job_configs

Map<String,Object>

Explanation

Key-value pair set for saving job running configurations

Constraints

N/A

Value range

N/A

Default value

N/A

extra

Map<String,Object>

Explanation

Authentication information, which is not supported in the current version.

Constraints

N/A

Value range

N/A

Default value

N/A

data_source_urls

Map<String,Object>

Explanation

Data source URL

Constraints

N/A

Value range

N/A

Default value

N/A

info

Map<String,Object>

Explanation

Key-value pair set, containing job running information returned by Oozie.

Constraints

N/A

Value range

N/A

Default value

N/A

Example Request

  • Example request of a MapReduce job
    POST https://{endpoint}/v1.1/{project_id}/jobs/submit-job
    
    {
      "job_type" : 1,
      "job_name" : "mrs_test_jobone_20170602_141106",
      "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
      "jar_path" : "s3a://mrs-opsadm/jarpath/hadoop-mapreduce-examples-2.7.2.jar",
      "arguments" : "wordcount",
      "input" : "s3a://mrs-opsadm/input/",
      "output" : "s3a://mrs-opsadm/output/",
      "job_log" : "s3a://mrs-opsadm/log/",
      "file_action" : "",
      "hql" : "",
      "hive_script_path" : ""
    }
  • Example request of a Spark job
    POST https://{endpoint}/v1.1/{project_id}/jobs/submit-job
    
    {
      "job_type" : 2,
      "job_name" : "mrs_test_sparkjob_20170602_141106",
      "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
      "jar_path" : "s3a://mrs-opsadm/jarpath/spark-test.jar",
      "arguments" : "org.apache.spark.examples.SparkPi 10",
      "input" : "",
      "output" : "s3a://mrs-opsadm/output/",
      "job_log" : "s3a://mrs-opsadm/log/",
      "file_action" : "",
      "hql" : "",
      "hive_script_path" : ""
    }
  • Example request of a Hive Script job
    POST https://{endpoint}/v1.1/{project_id}/jobs/submit-job
    
    {
      "job_type" : 3,
      "job_name" : "mrs_test_hivescriptjob_20170602_141106",
      "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
      "jar_path" : "s3a://mrs-opsadm/jarpath/Hivescript.sql",
      "arguments" : "",
      "input" : "s3a://mrs-opsadm/input/",
      "output" : "s3a://mrs-opsadm/output/",
      "job_log" : "s3a://mrs-opsadm/log/",
      "file_action" : "",
      "hql" : "",
      "hive_script_path" : "s3a://mrs-opsadm/jarpath/Hivescript.sql"
    }
  • Example request for importing a DistCp job
    POST https://{endpoint}/v1.1/{project_id}/jobs/submit-job
    
    {
      "job_type" : 5,
      "job_name" : "mrs_test_importjob_20170602_141106",
      "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
      "input" : "s3a://mrs-opsadm/jarpath/hadoop-mapreduce-examples-2.7.2.jar",
      "output" : "/user",
      "file_action" : "import"
    }
  • Example request for exporting a DistCp job
    POST https://{endpoint}/v1.1/{project_id}/jobs/submit-job
    
    {
      "job_type" : 5,
      "job_name" : "mrs_test_exportjob_20170602_141106",
      "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
      "input" : "/user/hadoop-mapreduce-examples-2.7.2.jar",
      "output" : "s3a://mrs-opsadm/jarpath/",
      "file_action" : "export"
    }
  • Example request of a Spark Script job
    POST https://{endpoint}/v1.1/{project_id}/jobs/submit-job
    
    {
      "job_type" : 6,
      "job_name" : "mrs_test_sparkscriptjob_20170602_141106",
      "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
      "jar_path" : "s3a://mrs-opsadm/jarpath/sparkscript.sql",
      "arguments" : "",
      "input" : "s3a://mrs-opsadm/input/",
      "output" : "s3a://mrs-opsadm/output/",
      "job_log" : "s3a://mrs-opsadm/log/",
      "file_action" : "",
      "hql" : "",
      "hive_script_path" : "s3a://mrs-opsadm/jarpath/sparkscript.sql"
    }
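Any of the request bodies above can be submitted with a plain HTTP client. The following sketch uses Python's standard library only; the endpoint, project ID, and IAM token are placeholders you must supply, and the assumption (consistent with other Huawei Cloud APIs) is that authentication is carried in the X-Auth-Token header.

```python
import json
import urllib.request

def build_url(endpoint: str, project_id: str) -> str:
    # URI format from this section: POST /v1.1/{project_id}/jobs/submit-job
    return f"{endpoint}/v1.1/{project_id}/jobs/submit-job"

def submit_job(endpoint: str, project_id: str, token: str, payload: dict) -> dict:
    # Assumption: a valid IAM token was obtained beforehand and is passed
    # in the X-Auth-Token header.
    req = urllib.request.Request(
        build_url(endpoint, project_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

On success the service returns status code 200 with the job_execution object shown in the example response below.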

Example Response

Status code: 200

The job has been added.

{
  "job_execution" : {
    "templated" : "false",
    "created_at" : "1496387588",
    "updated_at" : "1496387588",
    "id" : "12ee9ae4-6ee1-48c6-bb84-fb0b4f76cf03",
    "tenant_id" : "c71ad83a66c5470496c2ed6e982621cc",
    "job_id" : "",
    "job_name" : "mrs_test_jobone_20170602_141106",
    "input_id" : null,
    "output_id" : null,
    "start_time" : "1496387588",
    "end_time" : null,
    "cluster_id" : "e955a7a3-d334-4943-a39a-994976900d56",
    "engine_job_id" : null,
    "return_code" : null,
    "is_public" : null,
    "is_protected" : null,
    "group_id" : "12ee9ae4-6ee1-48c6-bb84-fb0b4f76cf03",
    "jar_path" : "s3a://mrs-opsadm/jarpath/hadoop-mapreduce-examples-2.7.2.jar",
    "input" : "s3a://mrs-opsadm/input/",
    "output" : "s3a://mrs-opsadm/output/",
    "job_log" : "s3a://mrs-opsadm/log/",
    "job_type" : "1",
    "file_action" : "",
    "arguments" : "wordcount",
    "hql" : "",
    "job_state" : "2",
    "job_final_status" : "0",
    "hive_script_path" : "",
    "create_by" : "b67132be2f054a45b247365647e05af0",
    "finished_step" : "0",
    "job_main_id" : "",
    "job_step_id" : "",
    "postpone_at" : "1496387588",
    "step_name" : "",
    "step_num" : "0",
    "task_num" : "0",
    "update_by" : "b67132be2f054a45b247365647e05af0",
    "credentials" : "",
    "user_id" : "b67132be2f054a45b247365647e05af0",
    "job_configs" : null,
    "extra" : null,
    "data_source_urls" : null,
    "info" : null
  }
}
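The created_at, updated_at, start_time, and postpone_at fields in the response above are 10-digit, second-resolution Unix timestamps (serialized as strings in this example). A small helper for rendering them, with an illustrative (non-API) function name:

```python
from datetime import datetime, timezone

def ts_to_iso(ts) -> str:
    # Interpret a 10-digit MRS timestamp (seconds since the Unix epoch) as UTC.
    # Accepts either the string form from the example response or an int.
    return datetime.fromtimestamp(int(ts), tz=timezone.utc).isoformat()
```

For example, ts_to_iso("1496387588") returns "2017-06-02T07:13:08+00:00".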

Status Codes

Table 4 describes the status code.

Table 4 Status code

Status Code: 200
Description: The job has been added.

For details, see Status Codes.

Error Codes

For details, see Error Codes.
