
Creating a Cluster and Executing a Job

Function

This API is used to create an MRS cluster and submit a job in the cluster. This API is incompatible with Sahara.

You are advised to use the V2 APIs instead: Creating a Cluster, and Creating a Cluster and Submitting a Job.

A maximum of 10 clusters can be concurrently created. You can set the enterprise_project_id parameter to perform fine-grained authorization for resources.

Before using the API, you need to obtain the resources listed in Table 1.

Table 1 Obtaining resources

Resource

How to Obtain

VPC

See operation instructions in Querying VPCs and Creating a VPC in the VPC API Reference.

Subnet

See operation instructions in Querying Subnets and Creating a Subnet in the VPC API Reference.

Key Pair

See operation instructions in Querying SSH Key Pairs and Creating and Importing an SSH Key Pair in the ECS API Reference.

Zone

See Endpoints for details about regions and AZs.

Version

Currently, MRS 1.9.2, 3.1.0, 3.1.2-LTS.3, 3.1.5, and 3.2.0-LTS.1 are supported.

Component

  • MRS 3.2.0-LTS.1 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, and Guardian.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, Flume, and Guardian.
    • A custom cluster contains the following components: CDL, Hadoop, Spark2x, HBase, Hive, Hue, IoTDB, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, ClickHouse, and Guardian.
  • MRS 3.1.5 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Sqoop, and Guardian.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Sqoop, Guardian, Kafka, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Kafka, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, Kudu, Sqoop, and Guardian.
  • MRS 3.1.2-LTS.3 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, and Tez.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, and ClickHouse.
  • MRS 3.1.0 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, and Kudu.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Kafka, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Kafka, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, and Kudu.
  • MRS 3.0.5 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, and Alluxio.
    • A streaming cluster contains the following components: Kafka, Storm, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Alluxio, Kafka, Storm, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Kafka, Storm, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, Kudu, and Alluxio.
  • MRS 2.1.0 supports the following components:
    • An analysis cluster contains the following components: Presto, Hadoop, Spark, HBase, Hive, Hue, Loader, Tez, Impala, Kudu, and Flink.
    • A streaming cluster contains the following components: Kafka, Storm, and Flume.
  • MRS 1.9.2 supports the following components:
    • An analysis cluster contains the following components: Presto, Hadoop, Spark, HBase, OpenTSDB, Hive, Hue, Loader, Tez, Flink, Alluxio, and Ranger.
    • A streaming cluster contains the following components: Kafka, KafkaManager, Storm, and Flume.

Constraints

  • You can log in to a cluster using either a password or a key pair.
  • To use the password mode, you need to configure the password of user root for accessing the cluster node, that is, cluster_master_secret.
  • To use the key pair mode, you need to configure the key pair name, that is, node_public_cert_name.
  • Disk parameters can be represented either by volume_type and volume_size, or by multi-disk parameters (master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, and core_data_volume_count).
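As an illustration, the two disk representations map to request-body fragments like the following (a minimal Python sketch; the storage type and sizes are placeholder values, not recommendations):

    # Option 1: shared single-disk parameters for Master and Core nodes.
    single_disk = {
        "volume_type": "SAS",   # one of SATA, SAS, SSD, GPSSD
        "volume_size": 600      # in GB, 100-32000
    }

    # Option 2: multi-disk parameters; if both forms are supplied,
    # volume_type and volume_size take precedence, so send only one form.
    multi_disk = {
        "master_data_volume_type": "SAS",
        "master_data_volume_size": 600,
        "master_data_volume_count": 1,   # must be 1
        "core_data_volume_type": "SAS",
        "core_data_volume_size": 600,
        "core_data_volume_count": 2      # 1-20
    }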

Debugging

You can debug this API in API Explorer. Automatic authentication is supported. API Explorer can automatically generate SDK code samples and supports debugging them.

URI

POST /v1.1/{project_id}/run-job-flow
Table 2 URI parameters

Parameter

Mandatory

Type

Description

project_id

Yes

String

Explanation

Project ID. For details about how to obtain the project ID, see Obtaining a Project ID.

Constraints

N/A

Value range

The value must consist of 1 to 64 characters. Only letters and digits are allowed.

Default value

N/A
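For example, the URI can be called as follows. This is a minimal Python sketch, assuming token-based authentication through the X-Auth-Token header; the endpoint, project ID, and token are placeholders that you must replace with your own values.

    import requests

    endpoint = "https://{mrs-endpoint}"   # placeholder region endpoint
    project_id = "{project_id}"           # see Obtaining a Project ID
    token = "{x-auth-token}"              # IAM token (placeholder)

    url = f"{endpoint}/v1.1/{project_id}/run-job-flow"
    body = {
        # Fill in the request parameters described in Table 3, for example
        # "cluster_name", "cluster_version", "billing_type", "safe_mode", ...
    }

    resp = requests.post(url, json=body, headers={"X-Auth-Token": token})
    print(resp.status_code, resp.json())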

Request Parameters

Table 3 Request parameters

Parameter

Mandatory

Type

Description

cluster_version

Yes

String

Explanation

Cluster version, for example, MRS 3.1.0.

Constraints

N/A

Value range

  • MRS 1.9.2
  • MRS 3.1.0
  • MRS 3.1.2-LTS.3
  • MRS 3.1.5
  • MRS 3.2.0-LTS.1

Default value

N/A

cluster_name

Yes

String

Explanation

Cluster name. It must be unique.

Constraints

N/A

Value range

The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).

Default value

N/A

master_node_num

No

Integer

Explanation

Number of Master nodes.

Constraints

If cluster HA is enabled, set this parameter to 2. If cluster HA is disabled, set this parameter to 1. This parameter cannot be set to 1 in MRS 3.x.

Value range

N/A

Default value

N/A

core_node_num

No

Integer

Explanation

Number of Core nodes. The default maximum number of core nodes is 500. If more than 500 core nodes are required, apply for a higher quota.

Constraints

N/A

Value range

1-500

Default value

N/A

billing_type

Yes

Integer

Explanation

Cluster billing mode.

Constraints

N/A

Value range

12: The cluster is billed on a pay-per-use basis. Only pay-per-use clusters can be created by calling APIs.

Default value

N/A

data_center

Yes

String

Explanation

The information about the region where the cluster is located. For details, see Endpoints.

Constraints

N/A

Value range

N/A

Default value

N/A

vpc

Yes

String

Explanation

The name of the VPC where the subnet is located. Obtain the VPC name by performing the following operations on the VPC management console:

  1. Log in to the management console.
  2. Choose Virtual Private Cloud > My VPCs.
  3. On the Virtual Private Cloud page, obtain the VPC name from the list.

Constraints

N/A

Value range

N/A

Default value

N/A

master_node_size

No

String

Explanation

Specifications of the Master node, for example, {ECS_FLAVOR_NAME}.linux.bigdata. {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or other flavors that are displayed on the MRS purchase page. The supported host specifications are determined by CPU, memory, and disk space. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. You are advised to obtain the specifications supported by the corresponding version in the corresponding region from the cluster creation page on the MRS console.

Constraints

N/A

Value range

N/A

Default value

N/A

core_node_size

No

String

Explanation

Specifications of the Core node, for example, {ECS_FLAVOR_NAME}.linux.bigdata. {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or other flavors that are displayed on the MRS purchase page. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. You are advised to obtain the specifications supported by the corresponding version in the corresponding region from the cluster creation page on the MRS console.

Constraints

N/A

Value range

N/A

Default value

N/A

component_list

Yes

Array of component_list objects

Explanation

The list of service components to be installed. For details about the parameters, see Table 4.

Constraints

N/A

Value range

N/A

Default value

N/A

available_zone_id

Yes

String

Explanation

AZ ID. You can obtain AZ IDs by calling the API for querying AZ information; the IDs of some common AZs are listed below.

Constraints

N/A

Value range

  • CN-Hong Kong AZ1 (ap-southeast-1a): 8902e05a7ee04542a6a73246fddc46b0
  • CN-Hong Kong AZ2 (ap-southeast-1b): e9554f5c6fb84eeeb29ab766436b6454
  • AP-Bangkok AZ1 (ap-southeast-2a): 11d18bfa9d57488b8f96680013667546
  • AP-Bangkok AZ2 (ap-southeast-2b): 09d6a3cddbd643a5aa8837600c9af32c
  • AP-Singapore AZ1 (ap-southeast-3a): 82cc0d8877374316b669613539efd0d9
  • AP-Singapore AZ2 (ap-southeast-3b): 77394a8450e147779666771f796e9f03
  • AP-Singapore AZ3 (ap-southeast-3c): dba8d1bd3d9146659e2b5a38c09b19a4

Default value

N/A

vpc_id

Yes

String

Explanation

The ID of the VPC where the subnet is located.

Obtain the VPC ID by performing the following operations on the VPC management console:

  1. Log in to the management console.
  2. Choose Virtual Private Cloud > My VPCs.
  3. On the Virtual Private Cloud page, obtain the VPC ID from the list.

Constraints

N/A

Value range

N/A

Default value

N/A

subnet_id

Yes

String

Explanation

Subnet ID. Obtain the subnet ID by performing the following operations on the VPC management console:

  1. Log in to the management console.
  2. Choose Virtual Private Cloud > My VPCs.
  3. Locate the row that contains the target VPC and click the number in the Subnets column to view the subnet information.
  4. Click the subnet name to obtain the network ID.

Constraints

At least one of subnet_id and subnet_name must be configured. If both are configured but do not match the same subnet, the cluster fails to be created. subnet_id is recommended.

Value range

N/A

Default value

N/A

subnet_name

Yes

String

Explanation

The subnet name.

Obtain the subnet name by performing the following operations on the VPC management console:

  1. Log in to the management console.
  2. Choose Virtual Private Cloud > My VPCs.
  3. Locate the row that contains the target VPC and click the number in the Subnets column to obtain the subnet name.

Constraints

At least one of subnet_id and subnet_name must be configured. If both are configured but do not match the same subnet, the cluster fails to be created. If only subnet_name is configured and multiple subnets in the VPC share that name, the first matching subnet in the VPC is used when the cluster is created. subnet_id is recommended.

Value range

N/A

Default value

N/A

security_groups_id

No

String

Explanation

The ID of the security group configured for the cluster.

  • If this parameter is left blank, MRS automatically creates a security group, whose name starts with mrs_{cluster_name}.
  • If this parameter is not left blank, a fixed security group is used to create a cluster. The transferred ID must be the security group ID owned by the current tenant. The security group must include an inbound rule in which all protocols and all ports are allowed and the source is the IP address of the specified node on the management plane.

Constraints

N/A

Value range

N/A

Default value

N/A

add_jobs

No

Array of add_jobs objects

Explanation

Jobs can be submitted when a cluster is created. Currently, only one job can be created. For details about the parameters, see Table 5.

Constraints

There must be no more than 1 record.

Value range

N/A

Default value

N/A

volume_size

No

Integer

Explanation

Data disk storage space of Master and Core nodes, in GB. To increase the data storage capacity, you can add disks when creating a cluster. Select a proper disk storage space based on the following application scenarios:

  • Storage-compute decoupling: Data is stored in the OBS system. Costs of clusters are relatively low but computing performance is poor. The clusters can be deleted at any time. It is recommended when data computing is infrequently performed.
  • Storage-compute integration: Data is stored in the HDFS system. Costs of clusters are relatively high but computing performance is good. The clusters cannot be deleted in a short term. It is recommended when data computing is frequently performed.

Constraints

This parameter is not recommended. For details, see the description of the volume_type parameter.

Value range

100-32000

Default value

N/A

volume_type

No

String

Explanation

The data disk storage type of master and core nodes. Currently, SATA, SAS, SSD, and GPSSD are supported. Disk parameters can be represented by volume_type and volume_size, or multi-disk parameters. If the volume_type and volume_size parameters coexist with the multi-disk parameters, the system reads the volume_type and volume_size parameters first. You are advised to use the multi-disk parameters.

Constraints

N/A

Value range

  • SATA: common I/O
  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Default value

N/A

master_data_volume_type

No

String

Explanation

This parameter is a multi-disk parameter, indicating the data disk storage type of the master node. Currently, SATA, SAS, SSD, and GPSSD are supported.

Constraints

N/A

Value range

  • SATA: common I/O
  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Default value

N/A

master_data_volume_size

No

Integer

Explanation

This parameter is a multi-disk parameter, indicating the data disk storage space of master nodes. To increase the data storage capacity, you can add disks when creating a cluster. You only need to pass in a number without the unit GB.

Constraints

N/A

Value range

100-32000

Default value

N/A

master_data_volume_count

No

Integer

Explanation

This parameter is a multi-disk parameter, indicating the number of data disks of the master nodes.

Constraints

N/A

Value range

The value can only be 1.

Default value

1

core_data_volume_type

No

String

Explanation

This parameter is a multi-disk parameter, indicating the data disk storage type of core nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

Constraints

N/A

Value range

  • SATA: common I/O
  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Default value

N/A

core_data_volume_size

No

Integer

Explanation

This parameter is a multi-disk parameter, indicating the data disk storage space of core nodes. To increase the data storage capacity, you can add disks when creating a cluster. You only need to pass in a number without the unit GB.

Constraints

N/A

Value range

100-32000

Default value

N/A

core_data_volume_count

No

Integer

Explanation

This parameter is a multi-disk parameter, indicating the number of data disks of the core nodes.

Constraints

N/A

Value range

1-20

Default value

N/A

task_node_groups

No

Array of task_node_groups objects

Explanation

The list of task nodes. For details about the parameters, see Table 6.

Constraints

There must be no more than 1 record.

Value range

N/A

Default value

N/A

bootstrap_scripts

No

Array of BootstrapScript objects

Explanation

The Bootstrap action script information. For details about the parameters, see Table 8.

Constraints

N/A

Value range

N/A

Default value

N/A

node_public_cert_name

No

String

Explanation

The name of a key pair. You can use a key pair to log in to a cluster node.

Constraints

If login_mode is set to 1, the request body contains the node_public_cert_name field.

Value range

N/A

Default value

N/A

cluster_admin_secret

No

String

Explanation

Password of the MRS Manager administrator.

Constraints

N/A

Value range

  • Must contain 8 to 26 characters.
  • Cannot be the username or the username spelled backwards.
  • Must contain every type of the following:
    • Lowercase letters
    • Uppercase letters
    • Numbers
    • Special characters (!@$%^-_=+[{}]:,./?)

Default value

N/A

cluster_master_secret

No

String

Explanation

The password of user root for logging in to a cluster node.

Constraints

If login_mode is set to 0, the request body contains the cluster_master_secret field.

Value range

A password must meet the following complexity requirements:

  • Must be 8 to 26 characters long.
  • Must contain every type of the following: uppercase letters, lowercase letters, numbers, and special characters (!@$%^-_=+[{}]:,./?), but must not contain spaces.
  • Cannot be the username or the username spelled backwards.

Default value

N/A
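The complexity rules for cluster_admin_secret and cluster_master_secret can be pre-checked on the client side before the request is sent. A minimal Python sketch, assuming the stricter of the two rule sets (spaces rejected):

    SPECIALS = "!@$%^-_=+[{}]:,./?"

    def is_valid_cluster_password(password, username="root"):
        """Client-side pre-check of the password rules above (sketch only)."""
        if not 8 <= len(password) <= 26 or " " in password:
            return False
        if password in (username, username[::-1]):
            return False
        return (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in SPECIALS for c in password))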

safe_mode

Yes

Integer

Explanation

The running mode of an MRS cluster.

Constraints

N/A

Value range

  • 0: normal cluster. In a normal cluster, Kerberos authentication is disabled, and users can use all functions provided by the cluster.
  • 1: security cluster. In a security cluster, Kerberos authentication is enabled, and common users cannot use the file management and job management functions of an MRS cluster or view cluster resource usage and the job records of Hadoop and Spark. To use more functions, the users must obtain the relevant permissions from the MRS Manager administrator.

Default value

N/A

tags

No

Array of tag objects

Explanation

The cluster tags. For details about the parameters, see Table 9.

Constraints

A maximum of 20 tags can be used in a cluster. The tag name (key) must be unique. The tag key and value can contain letters, digits, spaces, and special characters (_.:=+-@), but cannot start or end with a space or start with _sys_.

Value range

N/A

Default value

N/A

cluster_type

No

Integer

Explanation

The cluster type. Currently, hybrid clusters cannot be created using APIs.

Constraints

N/A

Value range

  • 0: analysis cluster
  • 1: streaming cluster

Default value

0

log_collection

No

Integer

Explanation

Whether to collect logs when cluster creation fails. The default value is 1, indicating that OBS buckets are created only to collect logs when an MRS cluster fails to be created.

Constraints

N/A

Value range

  • 0: Do not collect logs.
  • 1: Collect logs.

Default value

1

enterprise_project_id

No

String

Explanation

Enterprise project ID. When you create a cluster, associate the enterprise project ID with the cluster. The default value is 0, indicating the default enterprise project.

To obtain the enterprise project ID, see the id value in the enterprise_project field data structure table in "Querying the Enterprise Project List" in Enterprise Management API Reference.

Constraints

N/A

Value range

N/A

Default value

0

login_mode

No

Integer

Explanation

Cluster login mode.

Constraints

  • If login_mode is set to 0, the request body contains the cluster_master_secret field.
  • If login_mode is set to 1, the request body contains the node_public_cert_name field.

Value range

  • 0: password
  • 1: key pair

Default value

1
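In other words, login_mode selects which credential field must accompany the request. A small illustrative sketch (the placeholder values are not real credentials):

    def credential_fields(login_mode):
        """Return the credential field matching login_mode (illustrative)."""
        if login_mode == 0:   # password login: root password required
            return {"login_mode": 0, "cluster_master_secret": "{root-password}"}
        if login_mode == 1:   # key pair login: key pair name required
            return {"login_mode": 1, "node_public_cert_name": "{key-pair-name}"}
        raise ValueError("login_mode must be 0 (password) or 1 (key pair)")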

node_groups

No

Array of NodeGroupV11 objects

Explanation

List of nodes. For details about the parameters, see Table 10.

Constraints

Configure either this parameter or the following parameters:

master_node_num, master_node_size, core_node_num, core_node_size, master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, core_data_volume_count, volume_type, volume_size, task_node_groups

Value range

N/A

Default value

N/A

Table 4 ComponentAmbV11

Parameter

Mandatory

Type

Description

component_name

Yes

String

Explanation

Component name. For details, see the component information in Table 1.

Constraints

N/A

Value range

The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).

Default value

N/A

Table 5 AddJobsReqV11

Parameter

Mandatory

Type

Description

job_type

Yes

Integer

Explanation

Job type code.

Constraints

N/A

Value range

  • 1: MapReduce
  • 2: Spark
  • 3: Hive Script
  • 4: HiveQL (not supported currently)
  • 5: DistCp for importing and exporting data (not supported currently)
  • 6: Spark Script
  • 7: Spark SQL for submitting Spark SQL statements (not supported currently)
    NOTE:

Spark and Hive jobs can be created only in clusters where Spark and Hive are installed.

Default value

N/A
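For readability, the job type codes can be kept in a small client-side lookup table, for example:

    # Job type codes accepted by job_type (from the table above).
    JOB_TYPES = {
        1: "MapReduce",
        2: "Spark",
        3: "Hive Script",
        4: "HiveQL (not supported currently)",
        5: "DistCp (not supported currently)",
        6: "Spark Script",
        7: "Spark SQL (not supported currently)",
    }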

job_name

Yes

String

Explanation

Job name.

Constraints

N/A

Value range

The value can contain 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

NOTE:

Identical job names are allowed but not recommended.

Default value

N/A

jar_path

No

String

Explanation

Path of the .jar file or .sql file to be executed.

Constraints

N/A

Value range

The value must meet the following requirements:

  • The value contains a maximum of 1,023 characters. It cannot contain special characters (;|&>,<'$) and cannot be left blank or all spaces.
  • Files can be stored in HDFS or OBS. The path varies depending on the file system.
    • OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
    • HDFS: The path starts with a slash (/).
  • Spark Script must end with .sql while MapReduce and Spark Jar must end with .jar. sql and jar are case-insensitive.

Default value

N/A
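These path rules can be checked before submission. A minimal Python sketch covering only the requirements listed above:

    def is_valid_jar_path(path):
        """Pre-check a jar_path value against the rules above (sketch only)."""
        if not path.strip() or len(path) > 1023:
            return False
        if any(c in path for c in ";|&>,<'$"):
            return False
        if not (path.startswith("s3a://") or path.startswith("/")):
            return False
        return path.lower().endswith((".jar", ".sql"))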

arguments

No

String

Explanation

The key parameters for program execution. These parameters are defined by the logic of the user's program; MRS is only responsible for passing them to the program.

Constraints

N/A

Value range

The parameter can contain 0 to 150,000 characters, but special characters (;|&>'<$) are not allowed.

Default value

N/A

input

No

String

Explanation

The data input path.

Files can be stored in HDFS or OBS. The path varies depending on the file system.
  • OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
  • HDFS: The path starts with a slash (/).

Constraints

N/A

Value range

The value can contain 0 to 1,023 characters, but special characters (;|&>'<$) are not allowed.

Default value

N/A

output

No

String

Explanation

The data output path.

Files can be stored in HDFS or OBS. The path varies depending on the file system.
  • OBS: The path must start with s3a://.
  • HDFS: The path starts with a slash (/).

If the specified path does not exist, the system will automatically create it.

Constraints

N/A

Value range

The value can contain 0 to 1,023 characters, but special characters (;|&>'<$) are not allowed.

Default value

N/A

job_log

No

String

Explanation

The path for storing job logs that record job running status.

Files can be stored in HDFS or OBS. The path varies depending on the file system.
  • OBS: The path must start with s3a://.
  • HDFS: The path starts with a slash (/).

Constraints

N/A

Value range

The value can contain 0 to 1,023 characters, but special characters (;|&>'<$) are not allowed.

Default value

N/A

shutdown_cluster

No

Boolean

Explanation

Whether to delete the cluster after the job execution is complete.

Constraints

N/A

Value range

  • true: Delete the cluster after job execution is complete.
  • false: Do not delete the cluster after the job execution is complete.

Default value

N/A

file_action

No

String

Explanation

The action to be performed on a file.

Constraints

N/A

Value range

  • import: Import data.
  • export: Export data.

Default value

N/A

submit_job_once_cluster_run

Yes

Boolean

Explanation

Whether to submit a job during cluster creation. Set this parameter to true when using this API to create a cluster and submit a job.

Constraints

N/A

Value range

  • true: Submit a job during cluster creation.
  • false: Submit a job after the cluster is created.

Default value

N/A

hql

No

String

Explanation

The HQL script statement.

Constraints

N/A

Value range

N/A

Default value

N/A

hive_script_path

No

String

Explanation

SQL program path. This parameter is required by Spark Script and Hive Script jobs only.

Constraints

N/A

Value range

The value must meet the following requirements:

  • The value contains a maximum of 1,023 characters. It cannot contain special characters (;|&><'$) and cannot be left blank or all spaces.
  • Files can be stored in HDFS or OBS. The path varies depending on the file system.
    • OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
    • HDFS: The path starts with a slash (/).
  • Ends with .sql. sql is case-insensitive.

Default value

N/A

Table 6 TaskNodeGroup

Parameter

Mandatory

Type

Description

node_num

Yes

Integer

Explanation

Number of Task nodes.

Constraints

The total number of Core and Task nodes cannot exceed 500.

Value range

0-500

Default value

N/A

node_size

Yes

String

Explanation

Specifications of the Task node, for example, {ECS_FLAVOR_NAME}.linux.bigdata. {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or other flavors that are displayed on the MRS purchase page. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS.

Obtain the instance specifications of the corresponding version in the corresponding region from the cluster creation page of the MRS management console.

Constraints

N/A

Value range

N/A

Default value

N/A

data_volume_type

Yes

String

Explanation

Data disk storage type of the Task node. Currently, SATA, SAS, SSD, and GPSSD are supported.

Constraints

N/A

Value range

  • SATA: common I/O
  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Default value

N/A

data_volume_count

Yes

Integer

Explanation

Number of data disks of a Task node.

Constraints

N/A

Value range

0-20

Default value

N/A

data_volume_size

Yes

Integer

Explanation

Data disk storage space of a Task node. You only need to pass in a number without the unit GB.

Constraints

N/A

Value range

100-32000

Default value

N/A

auto_scaling_policy

No

auto_scaling_policy object

Explanation

The auto scaling policy. For details, see Table 7.

Constraints

N/A

Value range

N/A

Default value

N/A

Table 7 AutoScalingPolicy

Parameter

Mandatory

Type

Description

auto_scaling_enable

Yes

Boolean

Explanation

Whether to enable the auto scaling policy.

Constraints

N/A

Value range

  • true: Enable the auto scaling rule.
  • false: Disable the auto scaling rule.

Default value

N/A

min_capacity

Yes

Integer

Explanation

The minimum number of nodes reserved in the node group.

Constraints

N/A

Value range

0-500

Default value

N/A

max_capacity

Yes

Integer

Explanation

The maximum number of nodes in the node group.

Constraints

N/A

Value range

0-500

Default value

N/A

resources_plans

No

Array of resources_plan objects

Explanation

The resource plan list. For details, see Table 11. If this parameter is left blank, the resource plan is disabled.

Constraints

When auto_scaling_enable is set to true, either this parameter or rules must be configured. There must be no more than 5 records.

Value range

N/A

Default value

N/A

exec_scripts

No

Array of scale_script objects

Explanation

The list of custom scaling automation scripts. For details, see Table 14. If this parameter is left blank, hook scripts are disabled.

Constraints

The number of records cannot exceed 10.

Value range

N/A

Default value

N/A

rules

No

Array of rules objects

Explanation

The list of auto scaling rules. For details, see Table 12.

Constraints

When auto_scaling_enable is set to true, either this parameter or resources_plans must be configured. The number of records cannot exceed 10.

Value range

N/A

Default value

N/A

Table 8 BootstrapScript

Parameter

Mandatory

Type

Description

name

Yes

String

Explanation

Name of a bootstrap action script.

Constraints

N/A

Value range

The names of bootstrap action scripts in the same cluster must be unique. The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-), and cannot start with a space.

Default value

N/A

uri

Yes

String

Explanation

The path of a Bootstrap action script. Set this parameter to an OBS bucket path or a local VM path.

  • OBS bucket path: Enter a script path. For example, enter the path of the public sample script provided by MRS. Example: s3a://bootstrap/presto/presto-install.sh. If dualroles is installed, the parameter of the presto-install.sh script is dualroles. If worker is installed, the parameter is worker. Following common Presto practice, you are advised to install dualroles on the active Master nodes and worker on the Core nodes.
  • Local VM path: Enter a script path. The script path must start with a slash (/) and end with .sh.

Constraints

N/A

Value range

N/A

Default value

N/A

parameters

No

String

Explanation

The bootstrap action script parameters.

Constraints

N/A

Value range

N/A

Default value

N/A

nodes

Yes

Array of strings

Explanation

The type of a node where the bootstrap action script is executed. The value can be Master, Core, or Task.

Constraints

The node type must be represented in lowercase letters.

Value range

N/A

Default value

N/A

active_master

No

Boolean

Explanation

Whether the bootstrap action script runs only on active master nodes.

Constraints

N/A

Value range

  • true: The bootstrap action script runs only on active Master nodes.
  • false: The bootstrap action script can run on all Master nodes.

Default value

false

before_component_start

No

Boolean

Explanation

Time when the bootstrap action script is executed. Currently, the following two options are available: before component start and after component start.

Constraints

N/A

Value range

  • true: The bootstrap action script is executed before the component starts.
  • false: The bootstrap action script is executed after the component starts.

Default value

false

fail_action

Yes

String

Explanation

Whether to continue executing subsequent scripts and creating a cluster after the Bootstrap action script fails to be executed.

You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started no matter whether the bootstrap action is successful.

Constraints

N/A

Value range

  • continue: Continue to execute subsequent scripts.
  • errorout: Stop the action.

Default value

errorout

start_time

No

Long

Explanation

The execution time of one bootstrap action script.

Constraints

N/A

Value range

N/A

Default value

N/A

state

No

String

Explanation

The running status of one bootstrap action script.

Constraints

N/A

Value range

  • PENDING: The script is suspended.
  • IN_PROGRESS: The script is being processed.
  • SUCCESS: The script is executed successfully.
  • FAILURE: The script fails to be executed.

Default value

N/A

action_stages

No

Array of strings

Explanation

The stages at which the bootstrap action script is executed:

  • BEFORE_COMPONENT_FIRST_START: before initial component starts
  • AFTER_COMPONENT_FIRST_START: after initial component starts
  • BEFORE_SCALE_IN: before scale-in
  • AFTER_SCALE_IN: after scale-in
  • BEFORE_SCALE_OUT: before scale-out
  • AFTER_SCALE_OUT: after scale-out

Constraints

N/A

Value range

N/A

Default value

N/A

Table 9 Tag

Parameter

Mandatory

Type

Description

key

Yes

String

Explanation

Tag key.

Constraints

N/A

Value range

  • The value can contain a maximum of 128 characters and cannot be an empty string.
  • The tag key of a resource must be unique.
  • A tag key can contain letters, digits, spaces, and special characters _.:=+-@, but cannot start or end with a space or start with _sys_.

Default value

N/A

value

Yes

String

Explanation

Tag value.

Constraints

N/A

Value range

  • The value can contain a maximum of 255 characters and can be an empty string.
  • The value can contain letters, digits, spaces, and special characters _.:=+-@, but cannot start or end with a space or start with _sys_.

Default value

N/A
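The key and value rules above can be pre-checked on the client side. A minimal Python sketch, assuming only the constraints stated in this table:

    import re

    # Letters, digits, spaces, and _.:=+-@ are allowed.
    TAG_CHARS = re.compile(r"^[A-Za-z0-9 _.:=+\-@]+$")

    def is_valid_tag(key, value):
        """Pre-check one tag against the constraints above (sketch only)."""
        for text, max_len, allow_empty in ((key, 128, False), (value, 255, True)):
            if text == "" and allow_empty:
                continue
            if len(text) > max_len or text != text.strip() or text.startswith("_sys_"):
                return False
            if not TAG_CHARS.match(text):
                return False
        return True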

Table 10 NodeGroupV11

Parameter

Mandatory

Type

Description

group_name

Yes

String

Explanation

The node group name.

Constraints

N/A

Value range

  • master_node_default_group
  • core_node_analysis_group
  • core_node_streaming_group
  • task_node_analysis_group
  • task_node_streaming_group

Default value

N/A

node_num

Yes

Integer

Explanation

Number of nodes.

Constraints

The total number of Core and Task nodes cannot exceed 500.

Value range

0-500

Default value

N/A

node_size

Yes

String

Explanation

Specifications of the node, for example, {ECS_FLAVOR_NAME}.linux.bigdata. {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or other flavors that are displayed on the MRS purchase page. The host specifications supported by MRS are determined by CPU, memory, and disk space. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. You are advised to obtain the specifications supported by the corresponding version in the corresponding region from the cluster creation page on the MRS console.

Constraints

N/A

Value range

N/A

Default value

N/A

root_volume_size

No

String

Explanation

The system disk storage space of a node.

Constraints

N/A

Value range

N/A

Default value

N/A

root_volume_type

No

String

Explanation

System disk storage type of a node. Currently, SATA, SAS, SSD, and GPSSD are supported.

Constraints

N/A

Value range

  • SATA: common I/O
  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Default value

N/A

data_volume_type

No

String

Explanation

Data disk storage type of nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

Constraints

N/A

Value range

  • SATA: common I/O
  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Default value

N/A

data_volume_count

No

Integer

Explanation

Number of data disks of a node.

Constraints

N/A

Value range

0-20

Default value

N/A

data_volume_size

No

Integer

Explanation

Data disk storage space of a node.

Unit: GB.

Constraints

N/A

Value range

100-32000

Default value

N/A

auto_scaling_policy

No

auto_scaling_policy object

Explanation

The auto scaling policy.

Constraints

This parameter is available only when group_name is set to task_node_analysis_group or task_node_streaming_group.

For details about the parameters, see Table 7.

Value range

N/A

Default value

N/A

Table 11 ResourcesPlan

Parameter

Mandatory

Type

Description

period_type

Yes

String

Explanation

Cycle type of a resource plan. This parameter can be set to daily only.

Constraints

N/A

Value range

N/A

Default value

N/A

start_time

Yes

String

Explanation

The start time of a resource plan. The value is in the format of hour:minute, indicating that the time ranges from 00:00 to 23:59.

Constraints

N/A

Value range

N/A

Default value

N/A

end_time

Yes

String

Explanation

End time of a resource plan. The format is the same as that of start_time.

Constraints

The value cannot be earlier than start_time, and the interval between start_time and end_time cannot be less than 30 minutes.

Value range

N/A

Default value

N/A
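A resource plan window can be validated client-side against these constraints. A minimal Python sketch, assuming the hour:minute format described above:

    def _to_minutes(t):
        hour, minute = t.split(":")
        return int(hour) * 60 + int(minute)

    def validate_plan_window(start_time, end_time):
        """Check a window such as ("09:50", "10:20"); sketch only."""
        start, end = _to_minutes(start_time), _to_minutes(end_time)
        if end < start:
            raise ValueError("end_time cannot be earlier than start_time")
        if end - start < 30:
            raise ValueError("the window must span at least 30 minutes")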

min_capacity

Yes

Integer

Explanation

Minimum number of nodes reserved in a node group in a resource plan.

Constraints

N/A

Value range

0-500

Default value

N/A

max_capacity

Yes

Integer

Explanation

Maximum number of nodes reserved in a node group in a resource plan.

Constraints

N/A

Value range

0-500

Default value

N/A

Table 12 Rule

Parameter

Mandatory

Type

Description

name

Yes

String

Explanation

Name of an auto scaling rule.

Constraints

N/A

Value range

The value can contain 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

Rule names must be unique in a node group.

Default value

N/A

description

No

String

Explanation

Description about an auto scaling rule.

Constraints

N/A

Value range

The value can contain a maximum of 1,024 characters.

Default value

N/A

adjustment_type

Yes

String

Explanation

Auto scaling rule adjustment type.

Constraints

N/A

Value range

  • scale_out: cluster scale-out
  • scale_in: cluster scale-in

Default value

N/A

cool_down_minutes

Yes

Integer

Explanation

Cluster cooldown time after an auto scaling rule is triggered, during which no further auto scaling operations are performed. The unit is minute.

Constraints

N/A

Value range

The value ranges from 0 to 10080. 10080 indicates the number of minutes in a week.

Default value

N/A

scaling_adjustment

Yes

Integer

Explanation

Number of nodes that can be added or removed in a single scaling operation.

Constraints

N/A

Value range

1-100

Default value

N/A

trigger

Yes

trigger object

Explanation

Condition for triggering a rule. For details, see Table 13.

Constraints

N/A

Value range

N/A

Default value

N/A

Table 13 Trigger

Parameter

Mandatory

Type

Description

metric_name

Yes

String

Explanation

Metric name. The trigger condition is evaluated against the value of this metric.

For details about metric names, see Configuring Auto Scaling for an MRS Cluster.

Constraints

N/A

Value range

A metric name contains a maximum of 64 characters.

Default value

N/A

metric_value

Yes

String

Explanation

Metric threshold to trigger a rule. The value must be an integer or a number with two decimal places.

Constraints

N/A

Value range

Only integers or numbers with two decimal places are allowed.

Default value

N/A

comparison_operator

No

String

Explanation

The comparison operator used to evaluate the metric against metric_value.

Constraints

N/A

Value range

  • LT: less than
  • GT: greater than
  • LTOE: less than or equal to
  • GTOE: greater than or equal to

Default value

N/A

evaluation_periods

Yes

Integer

Explanation

The number of consecutive five-minute periods during which the metric threshold is reached before the rule is triggered.

Constraints

N/A

Value range

1-200

Default value

N/A
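Put together, a rule fires when the metric satisfies the comparison against metric_value for evaluation_periods consecutive five-minute periods. The following Python sketch only illustrates that semantics; it is not MRS's internal implementation:

    import operator

    OPS = {"LT": operator.lt, "GT": operator.gt,
           "LTOE": operator.le, "GTOE": operator.ge}

    def rule_triggers(samples, metric_value, comparison_operator, evaluation_periods):
        """True when the last N consecutive five-minute samples all satisfy
        the comparison (illustrative sketch only)."""
        recent = samples[-evaluation_periods:]
        return (len(recent) == evaluation_periods and
                all(OPS[comparison_operator](s, float(metric_value)) for s in recent))

    # Example: YARNMemoryAvailablePercentage below 25 for 10 straight periods.
    print(rule_triggers([20.0] * 10, "25", "LT", 10))  # True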

Table 14 ScaleScript

Parameter

Mandatory

Type

Description

name

Yes

String

Explanation

Name of a custom scaling automation script.

Constraints

N/A

Value range

The names in the same cluster must be unique. The value can contain 1 to 64 characters, including only digits, letters, spaces, hyphens (-), and underscores (_), and cannot start with a space.

Default value

N/A

uri

Yes

String

Explanation

Path of a custom automation script. Set this parameter to an OBS bucket path or a local VM path.

  • OBS bucket path: Enter a script path manually, for example, s3a://XXX/scale.sh.
  • Local VM path: Enter a script path. The script path must start with a slash (/) and end with .sh.

Constraints

N/A

Value range

N/A

Default value

N/A

parameters

No

String

Explanation

Parameters of a custom automation script. Separate multiple parameters by spaces. The following predefined parameters can be transferred:

  • ${mrs_scale_node_num}: Number of the nodes to be added or removed
  • ${mrs_scale_type}: Scaling type. The value can be scale_out or scale_in.
  • ${mrs_scale_node_hostnames}: Host names of the nodes to be added or removed
  • ${mrs_scale_node_ips}: IP addresses of the nodes to be added or removed
  • ${mrs_scale_rule_name}: Name of the rule that triggers auto scaling

Other user-defined parameters are used in the same way as those of common shell scripts and are separated by spaces.

Constraints

N/A

Value range

N/A

Default value

N/A
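Because MRS expands the predefined variables before the script runs, they reach the script as ordinary positional arguments. As an illustration only, assuming the automation script hands its arguments to a Python helper:

    import sys

    # Suppose parameters was set to:
    #   "${mrs_scale_node_num} ${mrs_scale_type} ${mrs_scale_node_ips}"
    node_num, scale_type, node_ips = sys.argv[1:4]

    if scale_type == "scale_out":
        print(f"adding {node_num} node(s): {node_ips}")
    else:  # scale_in
        print(f"removing {node_num} node(s): {node_ips}")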

nodes

Yes

Array of strings

Explanation

Type of a node where the custom automation script is executed. The node type can be Master, Core, or Task.

Constraints

N/A

Value range

N/A

Default value

N/A

active_master

No

Boolean

Explanation

Whether the custom automation script runs only on the active master node.

Constraints

N/A

Value range

  • true: The custom automation script runs only on the active Master nodes.
  • false: The custom automation script can run on all Master nodes.

Default value

false

action_stage

Yes

String

Explanation

Time when a script is executed.

Constraints

N/A

Value range

  • before_scale_out: before scale-out
  • before_scale_in: before scale-in
  • after_scale_out: after scale-out
  • after_scale_in: after scale-in

Default value

N/A

fail_action

Yes

String

Explanation

Whether to continue to execute subsequent scripts and create a cluster after the custom automation script fails to be executed. You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the custom automation script succeeds. Because a scale-in operation cannot be undone, fail_action must be set to continue for scripts executed after scale-in.

Constraints

N/A

Value range

  • continue: Continue to execute subsequent scripts.
  • errorout: Stop the action.

Default value

N/A

Response Parameters

Status code: 200

Table 15 Response body parameters

Parameter

Type

Description

cluster_id

String

Explanation

Cluster ID, which is returned by the system after the cluster is created.

Constraints

N/A

Value range

N/A

Default value

N/A

result

Boolean

Explanation

Operation result.

Constraints

N/A

Value range

  • true: The operation is successful.
  • false: The operation failed.

Default value

N/A

msg

String

Explanation

System message, which can be empty.

Constraints

N/A

Value range

N/A

Default value

N/A

Example Request

  • Use the node_groups parameter group to create a cluster with the HA function enabled. The cluster version is MRS 3.2.0-LTS.1.
    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
      "cluster_name" : "mrs_HEbK",
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "safe_mode" : 0,
      "cluster_type" : 0,
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark2x"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      }, {
        "component_name" : "Zookeeper"
      }, {
        "component_name" : "Tez"
      }, {
        "component_name" : "Hue"
      }, {
        "component_name" : "Loader"
      }, {
        "component_name" : "Flink"
      } ],
      "vpc" : "vpc-4b1c",
      "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
      "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
      "subnet_name" : "subnet-4b44",
      "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
      "enterprise_project_id" : "0",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      }, {
        "group_name" : "task_node_analysis_group",
        "node_num" : 2,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 0,
        "data_volume_size" : 600,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 1,
          "max_capacity" : "3",
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "9:50",
            "end_time" : "10:20",
            "min_capacity" : 2,
            "max_capacity" : 3
          }, {
            "period_type" : "daily",
            "start_time" : "10:20",
            "end_time" : "12:30",
            "min_capacity" : 0,
            "max_capacity" : 2
          } ],
          "exec_scripts" : [ {
            "name" : "before_scale_out",
            "uri" : "s3a://XXX/zeppelin_install.sh",
            "parameters" : "${mrs_scale_node_num} ${mrs_scale_type} xxx",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          }, {
            "name" : "after_scale_out",
            "uri" : "s3a://XXX/storm_rebalance.sh",
            "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "after_scale_out",
            "fail_action" : "continue"
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "25",
              "comparison_operator" : "LT",
              "evaluation_periods" : 10
            }
          }, {
            "name" : "default-shrink-1",
            "adjustment_type" : "scale_in",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "70",
              "comparison_operator" : "GT",
              "evaluation_periods" : 10
            }
          } ]
        }
      } ],
      "login_mode" : 1,
      "cluster_master_secret" : "",
      "cluster_admin_secret" : "",
      "log_collection" : 1,
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
        "arguments" : "wordcount",
        "input" : "s3a://bigdata/input/wd_1k/",
        "output" : "s3a://bigdata/ouput/",
        "job_log" : "s3a://bigdata/log/",
        "shutdown_cluster" : true,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Modify os config",
        "uri" : "s3a://XXX/modify_os_config.sh",
        "parameters" : "param1 param2",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : "false",
        "before_component_start" : "true",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
      }, {
        "name" : "Install zepplin",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "true",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }
  • Create a cluster with the HA function enabled without using the node_groups parameter group. The cluster version is MRS 3.2.0-LTS.1.
    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "master_node_num" : 2,
      "master_node_size" : "s3.2xlarge.2.linux.bigdata",
      "core_node_num" : 3,
      "core_node_size" : "s3.2xlarge.2.linux.bigdata",
      "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e", 
      "cluster_name" : "newcluster",
      "vpc" : "vpc1",
      "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
      "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
      "subnet_name" : "subnet",
      "security_groups_id" : "845bece1-fd22-4b45-7a6e-14338c99ee43",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_type" : 0,
      "master_data_volume_type" : "SATA",
      "master_data_volume_size" : 600,
      "master_data_volume_count" : 1,
      "core_data_volume_type" : "SATA",
      "core_data_volume_size" : 600,
      "core_data_volume_count" : 2,
      "node_public_cert_name" : "SSHkey-bba1",
      "safe_mode" : 0,
      "log_collection" : 1,
      "task_node_groups" : [ {
        "node_num" : 2,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 1,
          "max_capacity" : "3",
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "9: 50",
            "end_time" : "10: 20",
            "min_capacity" : 2,
            "max_capacity" : 3
          }, {
            "period_type" : "daily",
            "start_time" : "10: 20",
            "end_time" : "12: 30",
            "min_capacity" : 0,
            "max_capacity" : 2
          } ],
          "exec_scripts" : [ {
            "name" : "before_scale_out",
            "uri" : "s3a: //XXX/zeppelin_install.sh",
            "parameters" : "${mrs_scale_node_num}${mrs_scale_type}xxx",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          }, {
            "name" : "after_scale_out",
            "uri" : "s3a: //XXX/storm_rebalance.sh",
            "parameters" : "${mrs_scale_node_hostnames}${mrs_scale_node_ips}",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "after_scale_out",
            "fail_action" : "continue"
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "25",
              "comparison_operator" : "LT",
              "evaluation_periods" : 10
            }
          }, {
            "name" : "default-shrink-1",
            "adjustment_type" : "scale_in",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "70",
              "comparison_operator" : "GT",
              "evaluation_periods" : 10
            }
          } ]
        }
      } ],
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      } ],
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a: //bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
        "arguments" : "wordcount",
        "input" : "s3a: //bigdata/input/wd_1k/",
        "output" : "s3a: //bigdata/ouput/",
        "job_log" : "s3a: //bigdata/log/",
        "shutdown_cluster" : true,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Modifyosconfig",
        "uri" : "s3a: //XXX/modify_os_config.sh",
        "parameters" : "param1param2",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : "false",
        "before_component_start" : "true",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
      }, {
        "name" : "Installzepplin",
        "uri" : "s3a: //XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "true",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }
  • Use the node_groups parameter group to create a cluster with the HA function disabled. The cluster version is MRS 3.2.0-LTS.1.
    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
      "cluster_name" : "mrs_HEbK",
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "safe_mode" : 0,
      "cluster_type" : 0,
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark2x"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      }, {
        "component_name" : "Zookeeper"
      }, {
        "component_name" : "Tez"
      }, {
        "component_name" : "Hue"
      }, {
        "component_name" : "Loader"
      }, {
        "component_name" : "Flink"
      } ],
      "vpc" : "vpc-4b1c",
      "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
      "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
      "subnet_name" : "subnet-4b44",
      "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
      "enterprise_project_id" : "0",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 1,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 1,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      } ],
      "login_mode" : 1,
      "cluster_master_secret" : "",
      "cluster_admin_secret" : "",
      "log_collection" : 1,
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
        "arguments" : "wordcount",
        "input" : "s3a://bigdata/input/wd_1k/",
        "output" : "s3a://bigdata/ouput/",
        "job_log" : "s3a://bigdata/log/",
        "shutdown_cluster" : true,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Modify os config",
        "uri" : "s3a://XXX/modify_os_config.sh",
        "parameters" : "param1 param2",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : "false",
        "before_component_start" : "true",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
      }, {
        "name" : "Install zepplin",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "true",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }
  • Create a cluster with the HA function disabled without using the node_groups parameter group. The cluster version is MRS 3.2.0-LTS.1.
    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "master_node_num" : 1,
      "master_node_size" : "s3.2xlarge.2.linux.bigdata",
      "core_node_num" : 1,
      "core_node_size" : "s3.2xlarge.2.linux.bigdata", 
      "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e", 
      "cluster_name" : "newcluster",
      "vpc" : "vpc1",
      "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
      "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
      "subnet_name" : "subnet",
      "security_groups_id" : "",
      "enterprise_project_id" : "0",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_type" : 0,
      "master_data_volume_type" : "SATA",
      "master_data_volume_size" : 600,
      "master_data_volume_count" : 1,
      "core_data_volume_type" : "SATA",
      "core_data_volume_size" : 600,
      "core_data_volume_count" : 1,
      "login_mode" : 1,
      "node_public_cert_name" : "SSHkey-bba1",
      "safe_mode" : 0,
      "cluster_admin_secret" : "******",
      "log_collection" : 1,
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark2x"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      }, {
        "component_name" : "Zookeeper"
      }, {
        "component_name" : "Tez"
      }, {
        "component_name" : "Hue"
      }, {
        "component_name" : "Loader"
      }, {
        "component_name" : "Flink"
      } ],
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar",
        "arguments" : "wordcount",
        "input" : "s3a://bigdata/input/wd_1k/",
        "output" : "s3a://bigdata/ouput/",
        "job_log" : "s3a://bigdata/log/",
        "shutdown_cluster" : false,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Install zepplin",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "false",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }

Example Response

Status code: 200

The cluster is created.

{
  "cluster_id" : "da1592c2-bb7e-468d-9ac9-83246e95447a",
  "result" : true,
  "msg" : ""
}
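A caller can branch on result and cluster_id, for example (a sketch continuing the Python request example in the URI section; resp is the requests response object):

    payload = resp.json()
    if resp.status_code == 200 and payload.get("result"):
        print("Cluster created:", payload["cluster_id"])
    else:
        print("Creation failed:", resp.status_code, payload.get("msg"))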

Status Codes

Table 16 describes the status code.

Table 16 Status code

Status Code

Description

200

The cluster has been created.

See Status Codes.

Error Codes

See Error Codes.