Updated on 2024-01-17 GMT+08:00

Creating a Cluster

Function

This API is used to create an MRS cluster.

Before using the API, you need to obtain the resources listed in Table 1.

Table 1 Obtaining resources

Resource

How to Obtain

VPC

See operation instructions in Querying VPCs and Creating a VPC in the VPC API Reference.

Subnet

See operation instructions in Querying Subnets and Creating a Subnet in the VPC API Reference.

Key Pair

See operation instructions in Querying SSH Key Pairs and Creating and Importing an SSH Key Pair in the ECS API Reference.

Zone

See Endpoints for details about regions and AZs.

Version

Currently, MRS 1.9.2, 3.1.0, 3.1.5, 3.1.2-LTS.3, and 3.2.0-LTS.1 are supported.

Component

  • MRS 3.2.0-LTS.1 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, and Guardian.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, Flume, and Guardian.
    • A custom cluster contains the following components: CDL, Hadoop, Spark2x, HBase, Hive, Hue, IoTDB, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, ClickHouse, and Guardian.
  • MRS 3.1.5 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Sqoop, and Guardian.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Sqoop, Guardian, Kafka, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Kafka, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, Kudu, Sqoop, and Guardian.
  • MRS 3.1.2-LTS.3 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, and Tez.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, Kafka, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Kafka, Flume, Flink, Oozie, ZooKeeper, HetuEngine, Ranger, Tez, and ClickHouse.
  • MRS 3.1.0 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, and Kudu.
    • A streaming cluster contains the following components: Kafka, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Kafka, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Kafka, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, and Kudu.
  • MRS 3.0.5 supports the following components:
    • An analysis cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, and Alluxio.
    • A streaming cluster contains the following components: Kafka, Storm, Flume, ZooKeeper, and Ranger.
    • A hybrid cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, Kudu, Alluxio, Kafka, Storm, and Flume.
    • A custom cluster contains the following components: Hadoop, Spark2x, HBase, Hive, Hue, Loader, Kafka, Storm, Flume, Flink, Oozie, ZooKeeper, Ranger, Tez, Impala, Presto, ClickHouse, Kudu, and Alluxio.
  • MRS 2.1.0 supports the following components:
    • An analysis cluster contains the following components: Presto, Hadoop, Spark, HBase, Hive, Hue, Loader, Tez, Impala, Kudu, and Flink.
    • A streaming cluster contains the following components: Kafka, Storm, and Flume.
  • MRS 1.9.2 supports the following components:
    • An analysis cluster contains the following components: Presto, Hadoop, Spark, HBase, OpenTSDB, Hive, Hue, Loader, Tez, Flink, Alluxio, and Ranger.
    • A streaming cluster contains the following components: Kafka, KafkaManager, Storm, and Flume.

Constraints

None

Debugging

You can debug this API through automatic authentication in API Explorer. API Explorer can automatically generate sample SDK code and provides debugging of the sample SDK code.

URI

POST /v2/{project_id}/clusters
Table 2 URI parameter

Parameter

Mandatory

Type

Description

project_id

Yes

String

Project ID. For details about how to obtain the project ID, see Obtaining a Project ID.

Request Parameters

Table 3 Request body parameters

Parameter

Mandatory

Type

Description

cluster_version

Yes

String

Cluster version. Possible values:

  • MRS 1.9.2
  • MRS 3.1.0
  • MRS 3.1.2-LTS.3
  • MRS 3.1.5
  • MRS 3.2.0-LTS.1

cluster_name

Yes

String

Cluster name. It must be unique.

A cluster name can contain only 1 to 64 characters. Only letters, numbers, hyphens (-), and underscores (_) are allowed.

cluster_type

Yes

String

Cluster type. Possible values:

  • ANALYSIS: analysis cluster
  • STREAMING: streaming cluster
  • MIXED: hybrid cluster
  • CUSTOM: custom cluster, which is supported only by MRS 3.x.

charge_info

No

object

The billing type. For details, see Table 5.

region

Yes

String

Information about the region where the cluster is located. For details, see Endpoints.

is_dec_project

No

Boolean

Whether the cluster is specific for the DeC. The default value is false.

vpc_name

Yes

String

Name of the VPC where the subnet is located. Perform the following operations to obtain the VPC name from the VPC management console:

  1. Log in to the management console.
  2. Choose Virtual Private Cloud > My VPCs. On the Virtual Private Cloud page, obtain the VPC name from the list.

subnet_id

No

String

Subnet ID, which can be obtained by performing the following operations on the VPC management console:

  1. Log in to the VPC management console.
  2. Choose Virtual Private Cloud > My VPCs.
  3. Locate the row that contains the target VPC and click the number in the Subnets column to view the subnet information.
  4. Click the subnet name to obtain the network ID. At least one of subnet_id and subnet_name must be configured. If both parameters are configured but do not match the same subnet, the cluster fails to be created. subnet_id is recommended.

subnet_name

Yes

String

Subnet name. Perform the following operations to obtain the subnet name from the VPC management console:

  1. Log in to the management console.
  2. Choose Virtual Private Cloud > My VPCs.
  3. Locate the row that contains the target VPC and click the number in the Subnets column to obtain the subnet name. At least one of subnet_id and subnet_name must be configured. If both parameters are configured but do not match the same subnet, the cluster fails to be created. If only subnet_name is configured and multiple subnets with the same name exist in the VPC, the first subnet name in the VPC is used when the cluster is created. subnet_id is recommended.

components

Yes

String

List of component names, which are separated by commas (,). For details about the component names, see the component list of each version in Table 1.

external_datasources

No

Array of ClusterDataConnectorMap objects

When deploying components such as Hive and Ranger, you can associate data connections and store metadata in associated databases. For details about the parameters, see Table 4.

availability_zone

Yes

String

AZ name. Multi-AZ clusters are not supported.

See Endpoints for details about AZs.

security_groups_id

No

String

Security group ID of the cluster.

  • If this parameter is left blank, MRS automatically creates a security group, whose name starts with mrs_{cluster_name}.
  • If this parameter is not left blank, a fixed security group is used to create a cluster. The transferred ID must be the security group ID owned by the current tenant. The security group must include an inbound rule in which all protocols and all ports are allowed and the source is the IP address of the specified node on the management plane.

auto_create_default_security_group

No

Boolean

Whether to create the default security group for the MRS cluster. The default value is false. If this parameter is set to true, the default security group will be created for the cluster regardless of whether security_groups_id is specified.

safe_mode

Yes

String

Running mode of an MRS cluster.

  • SIMPLE: normal cluster. In a normal cluster, Kerberos authentication is disabled, and users can use all functions provided by the cluster.
  • KERBEROS: security cluster. In a security cluster, Kerberos authentication is enabled, and common users cannot use the file management and job management functions of an MRS cluster or view cluster resource usage and the job records of Hadoop and Spark. To use more functions, the users must obtain the relevant permissions from the Manager administrator.

manager_admin_password

Yes

String

Password of the MRS Manager administrator. The password must meet the following requirements:
  • Must contain 8 to 26 characters.
  • Must contain at least four of the following: uppercase letters, lowercase letters, digits, and special characters (!@$%^-_=+[{}]:,./?), but must not contain spaces.
  • Cannot be the username or the username spelled backwards.

login_mode

Yes

String

Node login mode.

  • PASSWORD: password-based login. If this value is selected, node_root_password cannot be left blank.
  • KEYPAIR: specifies the key pair used for login. If this value is selected, node_keypair_name cannot be left blank.

node_root_password

No

String

Password of user root for logging in to a cluster node. A password must meet the following requirements:
  • Must be 8 to 26 characters long.
  • Must contain at least four of the following: uppercase letters, lowercase letters, digits, and special characters (!@$%^-_=+[{}]:,./?), but must not contain spaces.
  • Cannot be the username or the username spelled backwards.

node_keypair_name

No

String

Name of a key pair. You can use the key pair to log in to the Master node in the cluster.

enterprise_project_id

No

String

Enterprise project ID.

When you create a cluster, associate the enterprise project ID with the cluster.

The default value is 0, indicating the default enterprise project.

To obtain the enterprise project ID, see the id value in the enterprise_project field data structure table in section Querying the Enterprise Project List of the Enterprise Management API Reference.

eip_address

No

String

An EIP bound to an MRS cluster can be used to access MRS Manager. The EIP must have been created and must be in the same region as the cluster.

eip_id

No

String

ID of the bound EIP. This parameter is mandatory when eip_address is configured. To obtain the EIP ID, log in to the VPC console, choose Network > Elastic IP and Bandwidth > Elastic IP, click the EIP to be bound, and obtain the ID in the Basic Information area.

mrs_ecs_default_agency

No

String

Name of the agency bound to a cluster node by default. The value is fixed to MRS_ECS_DEFAULT_AGENCY.

An agency allows ECS or BMS to manage MRS resources. You can configure an agency of the ECS type to automatically obtain the AK/SK to access OBS.

The MRS_ECS_DEFAULT_AGENCY agency has the OBS OperateAccess permission and the CES FullAccess (for users who have enabled fine-grained policies), CES Administrator, and KMS Administrator permissions in the region where the cluster is located.

template_id

No

String

Template used for node deployment when the cluster type is CUSTOM.

  • mgmt_control_combined_v2: template for jointly deploying the management and control nodes. The management and control roles are co-deployed on the Master node, and data instances are deployed in the same node group. This deployment mode applies to scenarios where the number of control nodes is less than 100, reducing costs.
  • mgmt_control_separated_v2: The management and control roles are deployed on different master nodes, and data instances are deployed in the same node group. This deployment mode is applicable to a cluster with 100 to 500 nodes and delivers better performance in high-concurrency load scenarios.
  • mgmt_control_data_separated_v2: The management role and control role are deployed on different Master nodes, and data instances are deployed in different node groups. This deployment mode is applicable to a cluster with more than 500 nodes. Components can be deployed separately, which can be used for a larger cluster scale.

tags

No

Array of tag objects

Cluster tags. For more parameter descriptions, see Table 6.

A maximum of 10 tags can be added to a cluster.

log_collection

No

Integer

Whether to collect logs when cluster creation fails. Enumerated values:

  • 0: Do not collect logs.
  • 1: Collect logs.

The default value is 1, indicating that OBS buckets will be created and only used to collect logs that record MRS cluster creation failures.

node_groups

Yes

Array of NodeGroupV2 objects

Information about the node groups in the cluster. For details about the parameters, see Table 7.

bootstrap_scripts

No

Array of BootstrapScript objects

Bootstrap action script information. For more parameter description, see Table 9.

MRS 3.x does not support this parameter.

add_jobs

No

Array of add_jobs objects

You can submit a job when creating a cluster. Only versions earlier than MRS 1.8.7 support this function, and only one job can be submitted. You are advised to use the steps parameter in the Creating a Cluster and Submitting a Job API instead. For details about this parameter, see Table 10.

log_uri

No

String

The OBS path to which cluster logs are dumped. After the log dump function is enabled, the read and write permissions on the OBS path are required to upload logs. Configure the default agency MRS_ECS_DEFAULT_AGENCY or customize an agency with the read and write permissions on the OBS path. For details, see Configuring a Storage-Compute Decoupled Cluster (Agency). This parameter is available only for cluster versions that support dumping cluster logs to OBS.

component_configs

No

Array of ComponentConfig objects

The custom configuration of cluster components. This parameter applies only to cluster versions that support the feature of creating a cluster by customizing component configurations. For details about this parameter, see ComponentConfig.

smn_notify

No

SmnNotify object

SMN alarm notifications. For details about this parameter, see Table 18.
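
The full examples at the end of this section configure many optional parameters. As a minimal reference, the following request body is a sketch that uses only the mandatory parameters from Table 3 (plus node_root_password, which is required because login_mode is PASSWORD); the region, availability zone, VPC, subnet, and node specifications are placeholders that must be replaced with values from your environment.

    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_minimal_demo",
      "cluster_type" : "ANALYSIS",
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "SIMPLE",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : { "type" : "SAS", "size" : 480 },
        "data_volume" : { "type" : "SAS", "size" : 600 },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : { "type" : "SAS", "size" : 480 },
        "data_volume" : { "type" : "SAS", "size" : 600 },
        "data_volume_count" : 1
      } ]
    }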

Table 4 ClusterDataConnectorMap

Parameter

Mandatory

Type

Description

map_id

No

Integer

Data connection association ID

connector_id

No

String

Data connection ID

component_name

No

String

Component name

role_type

No

String

Component role type. The options are as follows:

  • hive_metastore: Hive Metastore role
  • hive_data: Hive role
  • hbase_data: HBase role
  • ranger_data: Ranger role

source_type

No

String

Data connection type. The options are as follows:

  • LOCAL_DB: local metadata
  • RDS_POSTGRES: RDS PostgreSQL database
  • RDS_MYSQL: RDS MySQL database
  • gaussdb-mysql: GaussDB(for MySQL)

cluster_id

No

String

ID of the associated cluster

status

No

Integer

Data connection status. The options are as follows:

  • 0: normal
  • 1: in use
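
As a sketch of how the fields in Table 4 map to a request, the following external_datasources fragment associates the Hive metadata of the cluster with an existing RDS MySQL data connection; the connector ID is a placeholder for the ID of a data connection you have already created.

    "external_datasources" : [ {
      "component_name" : "Hive",
      "role_type" : "hive_metastore",
      "source_type" : "RDS_MYSQL",
      "connector_id" : "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    } ]
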
Table 5 ChargeInfo

Parameter

Mandatory

Type

Description

charge_mode

Yes

String

Billing mode. The options are as follows:

  • prePaid: indicates the yearly/monthly billing mode. (This mode is supported by the API for creating a cluster, but not by the API for creating a cluster and submitting a job.)
  • postPaid: indicates the pay-per-use billing mode.

period_type

No

String

Period type. The options are as follows:

  • month: The cluster is billed by month.
  • year: The cluster is billed by year.
  • day: The cluster is billed on a pay-per-use basis.

period_num

No

Integer

Number of periods. This parameter is valid and mandatory only when charge_mode is set to prePaid.

  • If period_type is set to month, the value ranges from 1 to 9.
  • If period_type is set to year, the value ranges from 1 to 3.

is_auto_pay

No

Boolean

Whether the order will be automatically paid. This parameter is available for yearly/monthly mode. By default, the automatic payment is disabled. The options are as follows:

  • true: The system automatically selects available discounts and coupons, and then pays for the order with the account balances. If the automatic payment fails, an order in Pending payment state is generated waiting for manual payment.
  • false: The user needs to pay for the bill after using available discounts and coupons.
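
For reference, the following charge_info fragment is a sketch based on Table 5 that requests a yearly/monthly cluster billed for three months with automatic payment enabled; all values are illustrative.

    "charge_info" : {
      "charge_mode" : "prePaid",
      "period_type" : "month",
      "period_num" : 3,
      "is_auto_pay" : true
    }
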
Table 6 Tag parameters

Parameter

Mandatory

Type

Description

key

Yes

String

Tag key.

  • It contains a maximum of 36 Unicode characters and cannot be an empty string.
  • The tag key cannot start or end with spaces or contain non-printable ASCII characters (0–31) or special characters (=*<>\,|/).
  • The tag key of a resource must be unique.

value

Yes

String

Tag value.

  • The value can contain 0 to 43 Unicode characters.
  • The tag value cannot start or end with spaces or contain non-printable ASCII characters (0–31) or special characters (=*<>\,|/).
Table 7 NodeGroup parameters

Parameter

Mandatory

Type

Description

group_name

Yes

String

Node group name. The value can contain a maximum of 64 characters, including uppercase and lowercase letters, digits, and underscores (_). The rules for configuring node groups are as follows:

  • master_node_default_group: master node group, which must be included in all cluster types.
  • core_node_analysis_group: analysis core node group, which must be included in both analysis and hybrid clusters.
  • core_node_streaming_group: streaming core node group, which must be included in both streaming and hybrid clusters.
  • task_node_analysis_group: analysis task node group, which can be selected for analysis clusters and hybrid clusters as needed.
  • task_node_streaming_group: streaming task node group, which can be selected for streaming clusters and hybrid clusters as needed.
  • node_group{x}: node group of a custom cluster. A maximum of nine such node groups can be added for a custom cluster.

node_num

Yes

Integer

Number of nodes. The value ranges from 0 to 500. The maximum number of Core and Task nodes is 500.

node_size

Yes

String

Instance specifications of a node. Example: c3.4xlarge.2.linux.bigdata

The host specifications supported by MRS are determined by CPU, memory, and disk space. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS.

Obtain the instance specifications of the corresponding version in the corresponding region from the cluster creation page of the MRS management console.

root_volume

No

Volume object

System disk information of the node. This parameter is optional for some VMs or the system disk of the BMS. This parameter is mandatory in other cases. For details about the parameter description, see Table 8.

data_volume

No

Volume object

Data disk information. This parameter is mandatory when data_volume_count is not 0. For details about this parameter, see Table 8.

data_volume_count

No

Integer

Number of data disks of a node.

Value range: 0 to 10

charge_info

No

ChargeInfo object

The billing type of a node group. The billing types of master and core node groups are the same as those of the cluster. The billing type of the task node group can be different. For details about the parameters, see Table 5.

auto_scaling_policy

No

auto_scaling_policy object

Autoscaling rule corresponding to the node group. For details about the parameters, see Table 11.

assigned_roles

No

Array of strings

This parameter is mandatory when the cluster type is CUSTOM. You can specify the roles deployed in a node group. This parameter is a character string array. Each character string represents a role expression.

Role expression definition:

  • If the role is deployed on all nodes in the node group, set this parameter to <role name>, for example, DataNode.
  • If the role is deployed only on nodes with specified indexes in the node group, use <role name>:<index1>,<index2>,...,<indexN>, for example, NameNode:1,2. The index starts from 1.
  • Some roles support multi-instance deployment (that is, multiple instances of the same role are deployed on a node): <role name>[<instance count>], for example, EsNode[9].

For details about available roles, see Roles and components supported by MRS.
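
To show the three role expression forms from Table 7 together, the following node group fragment is a sketch for a CUSTOM cluster: DataNode is deployed on every node in the group, NameNode only on nodes 1 and 2, and EsNode with two instances per node. The roles and node specifications are illustrative and must match the components you actually select.

    "node_groups" : [ {
      "group_name" : "node_group_1",
      "node_num" : 3,
      "node_size" : "rc3.4xlarge.4.linux.bigdata",
      "root_volume" : { "type" : "SAS", "size" : 480 },
      "data_volume" : { "type" : "SAS", "size" : 600 },
      "data_volume_count" : 1,
      "assigned_roles" : [ "DataNode", "NameNode:1,2", "EsNode[2]" ]
    } ]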

Table 8 Volume

Parameter

Mandatory

Type

Description

type

Yes

String

Disk type.

The following disk types are supported:

  • SATA: common I/O disk
  • SAS: high I/O disk
  • SSD: ultra-high I/O disk

size

Yes

Integer

Specifies the data disk size, in GB. The value range is 10 to 32768.

Table 9 BootstrapScript

Parameter

Mandatory

Type

Description

name

Yes

String

Name of a bootstrap action script. It must be unique in a cluster.

The value can contain only digits, letters, spaces, hyphens (-), and underscores (_) and must not start with a space.

The value can contain 1 to 64 characters.

uri

Yes

String

Path of a bootstrap action script. Set this parameter to an OBS bucket path or a local VM path.

  • OBS bucket path: Enter a script path manually. For example, enter the path of the public sample script provided by MRS. Example: s3a://bootstrap/presto/presto-install.sh. If dualroles is installed, the parameter of the presto-install.sh script is dualroles. If worker is installed, the parameter of the presto-install.sh script is worker. Based on typical Presto usage, you are advised to install dualroles on the active Master nodes and worker on the Core nodes.
  • Local VM path: Enter a script path. The script path must start with a slash (/) and end with .sh.

parameters

No

String

Bootstrap action script parameters.

nodes

Yes

Array of strings

Name of the node group where the bootstrap action script is executed.

active_master

No

Boolean

Whether the bootstrap action script runs only on active master nodes.

The default value is false, indicating that the bootstrap action script can run on all master nodes.

before_component_start

No

Boolean

Time when the bootstrap action script is executed. Currently, two options are available: before component start and after component start.

The default value is false, indicating that the bootstrap action script is executed after the component is started.

fail_action

Yes

String

Whether to continue executing subsequent scripts and creating a cluster after the bootstrap action script fails to be executed.

  • continue: Continue to execute subsequent scripts.
  • errorout: Stop the action.
The default value is errorout, indicating that the action is stopped.
NOTE:

You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started no matter whether the bootstrap action is successful.

start_time

No

Long

The execution time of one bootstrap action script.

state

No

String

The running status of one bootstrap action script.

  • PENDING
  • IN_PROGRESS
  • SUCCESS
  • FAILURE

action_stages

No

Array of strings

Time when the bootstrap action script is executed. The options are as follows:

  • BEFORE_COMPONENT_FIRST_START: before initial component starts
  • AFTER_COMPONENT_FIRST_START: after initial component starts
  • BEFORE_SCALE_IN: before scale-in
  • AFTER_SCALE_IN: after scale-in
  • BEFORE_SCALE_OUT: before scale-out
  • AFTER_SCALE_OUT: after scale-out
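
The following bootstrap_scripts fragment is a sketch based on Table 9. It reuses the public Presto sample script path mentioned above and assumes a node group named core_node_analysis_group exists in the request; adjust the script path, parameters, and node group names to your own setup.

    "bootstrap_scripts" : [ {
      "name" : "install-presto-worker",
      "uri" : "s3a://bootstrap/presto/presto-install.sh",
      "parameters" : "worker",
      "nodes" : [ "core_node_analysis_group" ],
      "active_master" : false,
      "before_component_start" : false,
      "fail_action" : "continue"
    } ]
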
Table 10 add_jobs parameters

Parameter

Mandatory

Type

Description

job_type

Yes

Integer

Job type code.

  • 1: MapReduce
  • 2: Spark
  • 3: Hive Script
  • 4: HiveQL (not supported currently)
  • 5: DistCp, importing and exporting data (not supported currently)
  • 6: Spark Script
  • 7: Spark SQL, submitting Spark SQL statements (not supported currently).

job_name

Yes

String

Job name. It contains 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

NOTE:

Identical job names are allowed but not recommended.

jar_path

No

String

Path of the JAR or SQL file for program execution. The parameter must meet the following requirements:

  • Contains a maximum of 1023 characters, excluding special characters such as ;|&><'$. The parameter value cannot be empty or full of spaces.
  • Files can be stored in HDFS or OBS. The path varies depending on the file system.
    • OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
    • HDFS: The path starts with a slash (/).
  • Spark Script must end with .sql while MapReduce and Spark Jar must end with .jar. sql and jar are case-insensitive.

arguments

No

String

Key parameter for program execution. The parameter is specified by the function of the user's program. MRS is only responsible for loading the parameter.

The parameter contains a maximum of 2047 characters, excluding special characters such as ;|&>'<$, and can be left blank.

input

No

String

Address for inputting data.

Files can be stored in HDFS or OBS. The path varies depending on the file system.
  • OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
  • HDFS: The path starts with a slash (/).

The parameter contains a maximum of 1023 characters, excluding special characters such as ;|&>'<$, and can be left blank.

output

No

String

Address for outputting data.

Files can be stored in HDFS or OBS. The path varies depending on the file system.
  • OBS: The path must start with s3a://.
  • HDFS: The path starts with a slash (/).

If the specified path does not exist, the system will automatically create it.

The parameter contains a maximum of 1023 characters, excluding special characters such as ;|&>'<$, and can be left blank.

job_log

No

String

Path for storing job logs that record job running status.

Files can be stored in HDFS or OBS. The path varies depending on the file system.
  • OBS: The path must start with s3a://.
  • HDFS: The path starts with a slash (/).

The parameter contains a maximum of 1023 characters, excluding special characters such as ;|&>'<$, and can be left blank.

shutdown_cluster

No

Boolean

Whether to delete the cluster after the job execution is complete.

  • true: Yes
  • false: No

file_action

No

String

Data import and export.

  • import
  • export

submit_job_once_cluster_run

Yes

Boolean

  • true: Submit a job during cluster creation.
  • false: Submit a job after the cluster is created.

Set this parameter to true in this example.

hql

No

String

HiveQL statement

hive_script_path

No

String

SQL program path. This parameter is needed by Spark Script and Hive Script jobs only, and must meet the following requirements:

  • Contains a maximum of 1023 characters, excluding special characters such as ;|&><'$. The address cannot be empty or full of spaces.
  • Files can be stored in HDFS or OBS. The path varies depending on the file system.
    • OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
    • HDFS: The path starts with a slash (/).
  • Ends with .sql. sql is case-insensitive.
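
As a sketch based on Table 10, the following add_jobs fragment submits a MapReduce word count job when the cluster is created; the OBS bucket, JAR file, and input/output paths are placeholders, and this parameter applies only to versions earlier than MRS 1.8.7.

    "add_jobs" : [ {
      "job_type" : 1,
      "job_name" : "wordcount_demo",
      "jar_path" : "s3a://obs-demo/program/hadoop-mapreduce-examples.jar",
      "arguments" : "wordcount",
      "input" : "s3a://obs-demo/input/",
      "output" : "s3a://obs-demo/output/",
      "job_log" : "s3a://obs-demo/log/",
      "shutdown_cluster" : false,
      "file_action" : "",
      "submit_job_once_cluster_run" : true,
      "hql" : "",
      "hive_script_path" : ""
    } ]
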
Table 11 auto_scaling_policy parameters

Parameter

Mandatory

Type

Description

auto_scaling_enable

Yes

Boolean

Whether to enable the auto scaling rule.

min_capacity

Yes

Integer

Minimum number of nodes allowed in the node group.

Value range: [0, 500]

max_capacity

Yes

Integer

Maximum number of nodes in the node group.

Value range: [0, 500]

resources_plans

No

Array of resources_plan objects

Resource plan list. For details, see Table 12. If this parameter is left blank, the resource plan is disabled.

When auto scaling is enabled, either a resource plan or an auto scaling rule must be configured.

exec_scripts

No

Array of scale_script objects

List of custom scaling automation scripts. For details, see Table 13. If this parameter is left blank, a hook script is disabled.

rules

No

Array of rules objects

List of auto scaling rules. For details, see Table 14.

When auto scaling is enabled, either a resource plan or an auto scaling rule must be configured.

Table 12 ResourcesPlan

Parameter

Mandatory

Type

Description

period_type

Yes

String

Cycle type of a resource plan. Currently, only the following cycle type is supported:

  • daily

start_time

Yes

String

The start time of a resource plan. The value is in the format of hour:minute, indicating that the time ranges from 00:00 to 23:59.

end_time

Yes

String

End time of a resource plan. The value is in the same format as that of start_time. The interval between end_time and start_time must be greater than or equal to 30 minutes.

min_capacity

Yes

Integer

Minimum number of the preserved nodes in a node group in a resource plan.

Value range: [0, 500]

max_capacity

Yes

Integer

Maximum number of the preserved nodes in a node group in a resource plan.

Value range: [0, 500]

effective_days

No

Array of strings

Effective date of a resource plan. If this parameter is left blank, it indicates that the resource plan takes effect every day. The options are as follows:

MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, and SUNDAY
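
As a sketch of Table 12, the following resources_plans fragment keeps between two and five nodes from 08:00 to 20:00 on weekdays only; the times, capacities, and effective days are illustrative.

    "resources_plans" : [ {
      "period_type" : "daily",
      "start_time" : "08:00",
      "end_time" : "20:00",
      "min_capacity" : 2,
      "max_capacity" : 5,
      "effective_days" : [ "MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY" ]
    } ]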

Table 13 scale_script parameters

Parameter

Mandatory

Type

Description

name

Yes

String

Name of a custom automation script. It must be unique in the same cluster.

The value can contain only digits, letters, spaces, hyphens (-), and underscores (_) and must not start with a space.

The value can contain 1 to 64 characters.

uri

Yes

String

Path of a custom automation script. Set this parameter to an OBS bucket path or a local VM path.

  • OBS bucket path: Enter a script path manually, for example, s3a://XXX/scale.sh.
  • Local VM path: Enter a script path. The script path must start with a slash (/) and end with .sh.

parameters

No

String

Parameters of a custom automation script.

  • Multiple parameters are separated by space.
  • The following predefined system parameters can be transferred:
    • ${mrs_scale_node_num}: Number of the nodes to be added or removed
    • ${mrs_scale_type}: Scaling type. The value can be scale_out or scale_in.
    • ${mrs_scale_node_hostnames}: Host names of the nodes to be added or removed
    • ${mrs_scale_node_ips}: IP addresses of the nodes to be added or removed
    • ${mrs_scale_rule_name}: Name of the rule that triggers auto scaling
  • Other user-defined parameters are used in the same way as those of common shell scripts. Parameters are separated by space.

nodes

Yes

List<String>

Type of a node where the custom automation script is executed. The node type can be Master, Core, or Task.

active_master

No

Boolean

Whether the custom automation script runs only on the active master node.

The default value is false, indicating that the custom automation script can run on all Master nodes.

action_stage

Yes

String

Time when a script is executed.

The following four options are supported:

  • before_scale_out: before scale-out
  • before_scale_in: before scale-in
  • after_scale_out: after scale-out
  • after_scale_in: after scale-in

fail_action

Yes

String

Whether to continue to execute subsequent scripts and create a cluster after the custom automation script fails to be executed.

  • continue: Continue to execute subsequent scripts.
  • errorout: Stop the action.
    NOTE:
    • You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started no matter whether the custom automation script is executed successfully.
    • The scale-in operation cannot be undone. Therefore, fail_action must be set to continue for the scripts that are executed after scale-in.
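
The following exec_scripts fragment is a sketch based on Table 13. It passes two of the predefined system parameters to a custom script after a scale-out; the OBS path is a placeholder, and the node group name is assumed to match a group defined in the request (as in the examples at the end of this section).

    "exec_scripts" : [ {
      "name" : "notify-on-scale-out",
      "uri" : "s3a://obs-demo/scripts/scale_notify.sh",
      "parameters" : "${mrs_scale_node_num} ${mrs_scale_type}",
      "nodes" : [ "task_node_analysis_group" ],
      "active_master" : false,
      "action_stage" : "after_scale_out",
      "fail_action" : "continue"
    } ]
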
Table 14 rules parameters

Parameter

Mandatory

Type

Description

name

Yes

String

Name of an auto scaling rule.

A rule name can contain only 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

Rule names must be unique in a node group.

description

No

String

Description about an auto scaling rule.

It contains a maximum of 1024 characters.

adjustment_type

Yes

String

Auto scaling rule adjustment type. Possible values:

  • scale_out: cluster scale-out
  • scale_in: cluster scale-in

cool_down_minutes

Yes

Integer

Cluster cooldown time after an auto scaling rule is triggered, during which no auto scaling operation is performed. The unit is minute.

Value range: 0 to 10,080. One week is equal to 10,080 minutes.

scaling_adjustment

Yes

Integer

Number of nodes that can be adjusted once.

Value range: [1, 100]

trigger

Yes

Trigger object

Condition for triggering a rule. For details, see Table 15.

Table 15 trigger parameters

Parameter

Mandatory

Type

Description

metric_name

Yes

String

Metric name.

The triggering condition is evaluated based on the value of this metric.

A metric name contains a maximum of 64 characters.

metric_value

Yes

String

Metric threshold to trigger a rule.

The value must be an integer or a number with two decimal places.

comparison_operator

No

String

Metric judgment logic operator. Possible values:

  • LT: less than
  • GT: greater than
  • LTOE: less than or equal to
  • GTOE: greater than or equal to

evaluation_periods

Yes

Integer

Number of consecutive five-minute periods, during which a metric threshold is reached.

Value range: 1 to 288
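
The request examples later in this section configure only a scale-out rule. For completeness, the following rules fragment is a sketch of a scale-in rule built from Table 14 and Table 15; it removes one node when the YARNAppRunning metric stays at or below 10 for one evaluation period. The metric name is taken from the existing examples, and the threshold is illustrative.

    "rules" : [ {
      "name" : "default-shrink-1",
      "description" : "",
      "adjustment_type" : "scale_in",
      "cool_down_minutes" : 5,
      "scaling_adjustment" : 1,
      "trigger" : {
        "metric_name" : "YARNAppRunning",
        "metric_value" : "10",
        "comparison_operator" : "LTOE",
        "evaluation_periods" : 1
      }
    } ]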

Table 16 ComponentConfig

Parameter

Mandatory

Type

Description

component_name

Yes

String

The component name

configs

No

Array of Config objects

The component configuration item list. For details about this parameter, see Table 17.

Table 17 Config

Parameter

Mandatory

Type

Description

key

Yes

String

The configuration name. Only the configuration names displayed on the MRS component configuration page are supported.

value

Yes

String

The configuration value

config_file_name

Yes

String

The configuration file name. Only the file names displayed on the MRS component configuration page are supported.
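
A sketch of the component_configs parameter based on Table 16 and Table 17. The configuration item and file name shown are common Hive settings used here for illustration; the names you pass must match what the MRS component configuration page displays for your cluster version.

    "component_configs" : [ {
      "component_name" : "Hive",
      "configs" : [ {
        "key" : "hive.exec.reducers.max",
        "value" : "500",
        "config_file_name" : "hive-site.xml"
      } ]
    } ]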

Table 18 SmnNotify

Parameter

Mandatory

Type

Description

topic_urn

No

String

SMN topic URN. This parameter is mandatory if alarm notifications are enabled.

subscription_name

No

String

Name of a subscription rule. If this parameter is not set, the default value default_alert_rule will be used.
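
A sketch of the smn_notify parameter based on Table 18; the topic URN is a placeholder and must reference an existing SMN topic in the same region as the cluster.

    "smn_notify" : {
      "topic_urn" : "urn:smn:your-region:your-project-id:mrs_alarm_topic",
      "subscription_name" : "default_alert_rule"
    }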

Response Parameters

Status code: 200

Table 19 Response parameters

Parameter

Type

Description

cluster_id

String

Cluster ID, which is returned by the system after the cluster is created.

Example Request

  • Create an MRS 3.2.0-LTS.1 cluster for analysis. There are a Master node group with two nodes, a Core node group with three nodes, and a Task node group with three nodes. Autoscaling is enabled from 12:00 to 13:00 every Monday.
    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_DyJA_dm",
      "cluster_type" : "ANALYSIS",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "log_collection" : 1,
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_analysis_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 0,
          "max_capacity" : 1,
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "12:00",
            "end_time" : "13:00",
            "min_capacity" : 2,
            "max_capacity" : 3,
            "effective_days" : [ "MONDAY" ]
          } ],
          "exec_scripts" : [ {
            "name" : "test",
            "uri" : "s3a://obs-mrstest/bootstrap/basic_success.sh",
            "parameters" : "",
            "nodes" : [ "master_node_default_group", "core_node_analysis_group", "task_node_analysis_group" ],
            "active_master" : false,
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "description" : "",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : "1",
            "trigger" : {
              "metric_name" : "YARNAppRunning",
              "metric_value" : 100,
              "comparison_operator" : "GTOE",
              "evaluation_periods" : "1"
            }
          } ]
        }
      } ]
    }
  • Create an MRS 3.1.0 cluster for stream analysis. There are a Master node group with two nodes, a Core node group with three nodes, and a Task node group with no node. Autoscaling is enabled from 12:00 to 13:00 every Monday.
    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_Dokle_dm",
      "cluster_type" : "STREAMING",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Storm,Kafka,Flume,Ranger",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "log_collection" : 1,
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_streaming_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_streaming_group",
        "node_num" : 0,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 0,
          "max_capacity" : 1,
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "12:00",
            "end_time" : "13:00",
            "min_capacity" : 2,
            "max_capacity" : 3,
            "effective_days" : [ "MONDAY" ]
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "description" : "",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : "1",
            "trigger" : {
              "metric_name" : "StormSlotAvailablePercentage",
              "metric_value" : 100,
              "comparison_operator" : "LTOE",
              "evaluation_periods" : "1"
            }
          } ]
        }
      } ]
    }
  • Create an MRS 3.1.0 cluster for hybrid analysis. There are a Master node group with two nodes, two Core node groups with three nodes in each, and two Task node groups with one node in one group and no nodes in the other.
    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_onmm_dm",
      "cluster_type" : "MIXED",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "log_collection" : 1,
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_streaming_group",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_analysis_group",
        "node_num" : 1,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_streaming_group",
        "node_num" : 0,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      } ]
    }
  • Create a cluster where the custom management nodes and control nodes are the same nodes. The cluster version is MRS 3.2.0-LTS.1. There are a Master node group with three nodes and two Core node groups, with three nodes in one group and one node in the other.
    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_heshe_dm",
      "cluster_type" : "CUSTOM",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,HetuEngine,Ranger,Tez,ZooKeeper,ClickHouse",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "template_id" : "mgmt_control_combined_v2",
      "log_collection" : 1,
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:1,2", "KerberosServer:1,2", "KerberosAdmin:1,2", "quorumpeer:1,2,3", "NameNode:2,3", "Zkfc:2,3", "JournalNode:1,2,3", "ResourceManager:2,3", "JobHistoryServer:2,3", "DBServer:1,3", "Hue:1,3", "LoaderServer:1,3", "MetaStore:1,2,3", "WebHCat:1,2,3", "HiveServer:1,2,3", "HMaster:2,3", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3", "JobHistory2x:2,3", "SparkResource2x:1,2,3", "oozie:2,3", "LoadBalancer:2,3", "TezUI:1,3", "TimelineServer:3", "RangerAdmin:1,2", "UserSync:2", "TagSync:2", "KerberosClient", "SlapdClient", "meta", "HSConsole:2,3", "FlinkResource:1,2,3", "DataNode:1,2,3", "NodeManager:1,2,3", "IndexServer2x:1,2", "ThriftServer:1,2,3", "RegionServer:1,2,3", "ThriftServer1:1,2,3", "RESTServer:1,2,3", "Broker:1,2,3", "Supervisor:1,2,3", "Logviewer:1,2,3", "Flume:1,2,3", "HSBroker:1,2,3" ]
      }, {
        "group_name" : "node_group_1",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource" ]
      }, {
        "group_name" : "node_group_2",
        "node_num" : 1,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "NodeManager", "KerberosClient", "SlapdClient", "meta", "FlinkResource" ]
      } ]
    }
  • Create a cluster where the custom management nodes and control nodes are independent nodes. The cluster version is MRS 3.2.0-LTS.1. There are a Master node group with five nodes and a Core node group with three nodes.
    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_jdRU_dm01",
      "cluster_type" : "CUSTOM",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,HetuEngine,Ranger,Tez,Ranger,Tez,ZooKeeper,ClickHouse",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "log_collection" : 1,
      "template_id" : "mgmt_control_separated_v2",
      "tags" : [ {
        "key" : "aaa",
        "value" : "111"
      }, {
        "key" : "bbb",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 5,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:3,4", "KerberosServer:3,4", "KerberosAdmin:3,4", "quorumpeer:3,4,5", "NameNode:4,5", "Zkfc:4,5", "JournalNode:1,2,3,4,5", "ResourceManager:4,5", "JobHistoryServer:4,5", "DBServer:3,5", "Hue:1,2", "LoaderServer:1,2", "MetaStore:1,2,3,4,5", "WebHCat:1,2,3,4,5", "HiveServer:1,2,3,4,5", "HMaster:4,5", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3,4,5", "JobHistory2x:4,5", "SparkResource2x:1,2,3,4,5", "oozie:1,2", "LoadBalancer:1,2", "TezUI:1,2", "TimelineServer:5", "RangerAdmin:1,2", "KerberosClient", "SlapdClient", "meta", "HSConsole:1,2", "FlinkResource:1,2,3,4,5", "DataNode:1,2,3,4,5", "NodeManager:1,2,3,4,5", "IndexServer2x:1,2", "ThriftServer:1,2,3,4,5", "RegionServer:1,2,3,4,5", "ThriftServer1:1,2,3,4,5", "RESTServer:1,2,3,4,5", "Broker:1,2,3,4,5", "Supervisor:1,2,3,4,5", "Logviewer:1,2,3,4,5", "Flume:1,2,3,4,5", "HBaseIndexer:1,2,3,4,5", "TagSync:1", "UserSync:1" ]
      }, {
        "group_name" : "node_group_1",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource" ]
      } ]
    }
  • Create a cluster where data nodes are deployed independently of other nodes. The cluster version is MRS 3.2.0-LTS.1. There are a Master node group with nine nodes and four Core node groups with three nodes in each group.
    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.2.0-LTS.1",
      "cluster_name" : "mrs_jdRU_dm02",
      "cluster_type" : "CUSTOM",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,Ranger,Tez,Ranger,Tez,ZooKeeper,ClickHouse",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "template_id" : "mgmt_control_data_separated_v2",
      "log_collection" : 1,
      "tags" : [ {
        "key" : "aaa",
        "value" : "111"
      }, {
        "key" : "bbb",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 9,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:5,6", "KerberosServer:5,6", "KerberosAdmin:5,6", "quorumpeer:5,6,7,8,9", "NameNode:3,4", "Zkfc:3,4", "JournalNode:5,6,7", "ResourceManager:8,9", "JobHistoryServer:8", "DBServer:8,9", "Hue:8,9", "FlinkResource:3,4", "LoaderServer:3,5", "MetaStore:8,9", "WebHCat:5", "HiveServer:8,9", "HMaster:8,9", "FTP-Server:3,4", "MonitorServer:3,4", "Nimbus:8,9", "UI:8,9", "JDBCServer2x:8,9", "JobHistory2x:8,9", "SparkResource2x:5,6,7", "oozie:4,5", "EsMaster:7,8,9", "LoadBalancer:8,9", "TezUI:5,6", "TimelineServer:5", "RangerAdmin:4,5", "UserSync:5", "TagSync:5", "KerberosClient", "SlapdClient", "meta", "HSBroker:5", "HSConsole:3,4", "FlinkResource:3,4" ]
      }, {
        "group_name" : "node_group_1",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "GraphServer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2" ]
      }, {
        "group_name" : "node_group_2",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "HBaseIndexer", "SolrServer[3]", "EsNode[2]", "KerberosClient", "SlapdClient", "meta", "SolrServerAdmin:1,2" ]
      }, {
        "group_name" : "node_group_3",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "Redis[2]", "KerberosClient", "SlapdClient", "meta" ]
      }, {
        "group_name" : "node_group_4",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "Broker", "Supervisor", "Logviewer", "KerberosClient", "SlapdClient", "meta" ]
      } ]
    }

Example Response

  • Example of a successful response
    {
    	"cluster_id": "da1592c2-bb7e-468d-9ac9-83246e95447a"
    }
  • Example of a failed response
    {
    	"error_code": "MRS.0002",
    	"error_msg": "The parameter is invalid."
    }

Status Codes

See Status Codes.

Error Codes

See Error Codes.