Creating a Cluster
Function
This API is used to create an MRS cluster.
Before using the API, you need to obtain the resources listed in Table 1.
Table 1 Required resources

| Resource | How to Obtain |
|---|---|
| VPC | See the operation instructions in Querying VPCs and Creating a VPC in the VPC API Reference. |
| Subnet | See the operation instructions in Querying Subnets and Creating a Subnet in the VPC API Reference. |
| Key Pair | See the operation instructions in Querying SSH Key Pairs and Creating and Importing an SSH Key Pair in the ECS API Reference. |
| Zone | See Endpoints for details about regions and AZs. |
| Version | Currently, MRS 1.9.2, 3.1.0, 3.1.5, 3.1.2-LTS.3, and 3.2.0-LTS.1 are supported. |
| Component | |
Constraints
None
Debugging
You can debug this API in API Explorer, which supports automatic authentication. API Explorer can automatically generate SDK code samples and provides a debugging function for them.
URI

POST /v2/{project_id}/clusters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Explanation: Project ID. For details about how to obtain the project ID, see Obtaining a Project ID.<br>Constraints: N/A<br>Value range: The value must consist of 1 to 64 characters. Only letters and digits are allowed.<br>Default value: None |
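For reference, a minimal request skeleton is sketched below. The Content-Type and X-Auth-Token headers are assumptions based on general token-based API calling conventions and are not defined in this section; the body fields are described under Request Parameters.

POST /v2/{project_id}/clusters
Content-Type: application/json
X-Auth-Token: <IAM token>

{
  ... request body fields, see Request Parameters ...
}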
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| cluster_version | Yes | String | Explanation: Cluster version.<br>Constraints: None<br>Value range: Currently, MRS 1.9.2, MRS 3.1.0, MRS 3.1.5, MRS 3.1.2-LTS.3, and MRS 3.2.0-LTS.1 are supported (see Table 1).<br>Default value: N/A |
| cluster_name | Yes | String | Explanation: Cluster name.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| cluster_type | Yes | String | Explanation: Cluster type.<br>Constraints: N/A<br>Value range: ANALYSIS, STREAMING, MIXED, or CUSTOM, as used in the example requests.<br>Default value: N/A |
| charge_info | No | object | Explanation: The billing type. For details, see Table 5.<br>Constraints: N/A |
| region | Yes | String | Explanation: Information about the region where the cluster is located. For details, see Endpoints.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| is_dec_project | No | Boolean | Explanation: Whether the cluster is dedicated to DeC.<br>Constraints: N/A<br>Value range: true or false<br>Default value: false |
| vpc_name | Yes | String | Explanation: Name of the VPC where the subnet is located. You can obtain the VPC name from the VPC management console.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| subnet_id | No | String | Explanation: Subnet ID, which can be obtained from the VPC management console.<br>Constraints: At least one of subnet_id and subnet_name must be configured. If both are configured but do not match the same subnet, the cluster fails to be created. subnet_id is recommended.<br>Value range: N/A<br>Default value: N/A |
| subnet_name | Yes | String | Explanation: Subnet name, which can be obtained from the VPC management console.<br>Constraints: At least one of subnet_id and subnet_name must be configured. If both are configured but do not match the same subnet, the cluster fails to be created. If only subnet_name is configured and subnets with the same name exist in the VPC, the first subnet with that name in the VPC is used when the cluster is created. subnet_id is recommended.<br>Value range: N/A<br>Default value: N/A |
| components | Yes | String | Explanation: List of component names, separated by commas (,). For details about the component names, see the component list of each version in Table 1.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| external_datasources | No | Array of ClusterDataConnectorMap objects | Explanation: When deploying components such as Hive and Ranger, you can associate data connections and store metadata in the associated databases. For details about the parameters, see Table 4.<br>Constraints: N/A |
| availability_zone | Yes | String | Explanation: AZ name. Multi-AZ clusters are not supported. See Endpoints for details about AZs.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| security_groups_id | No | String | Explanation: Security group ID of the cluster.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| auto_create_default_security_group | No | Boolean | Explanation: Whether to create the default security group for the MRS cluster.<br>Constraints: If this parameter is set to true, the default security group is created for the cluster regardless of whether security_groups_id is specified.<br>Value range: true or false<br>Default value: false |
| safe_mode | Yes | String | Explanation: Run mode of the MRS cluster.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| manager_admin_password | Yes | String | Explanation: Password of the MRS Manager administrator.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| login_mode | Yes | String | Explanation: Node login mode.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| node_root_password | No | String | Explanation: Password of user root for logging in to a cluster node.<br>Constraints: N/A<br>Value range: A password must meet the following requirements:<br>Default value: N/A |
| node_keypair_name | No | String | Explanation: Name of a key pair. You can use a key pair to log in to the Master node of the cluster.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| enterprise_project_id | No | String | Explanation: Enterprise project ID. When you create a cluster, associate the enterprise project ID with the cluster. To obtain the enterprise project ID, see the id value in the enterprise_project field data structure table in Querying the Enterprise Project List of the Enterprise Management API Reference.<br>Constraints: N/A<br>Value range: N/A<br>Default value: 0, indicating the default enterprise project |
| eip_address | No | String | Explanation: EIP bound to the MRS cluster, which can be used to access MRS Manager. The EIP must have been created and must be in the same region as the cluster.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| eip_id | No | String | Explanation: ID of the bound EIP. This parameter is mandatory when eip_address is configured. To obtain the EIP ID, log in to the VPC console, choose Network > Elastic IP and Bandwidth > Elastic IP, click the EIP to be bound, and obtain the ID in the Basic Information area.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| mrs_ecs_default_agency | No | String | Explanation: Name of the agency bound to cluster nodes by default. The value is fixed to MRS_ECS_DEFAULT_AGENCY. An agency allows ECS or BMS to manage MRS resources. You can configure an agency of the ECS type to automatically obtain the AK/SK to access OBS. The MRS_ECS_DEFAULT_AGENCY agency has the OBS OperateAccess permission of OBS and the CES FullAccess (for users who have enabled fine-grained policies), CES Administrator, and KMS Administrator permissions in the region where the cluster is located.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| template_id | No | String | Explanation: Template used for node deployment when the cluster type is CUSTOM.<br>Constraints: N/A<br>Value range: mgmt_control_combined_v2, mgmt_control_separated_v2, and mgmt_control_data_separated_v2, as used in the example requests.<br>Default value: N/A |
| tags | No | Array of tag objects | Explanation: Cluster tags. For more parameter description, see Table 6.<br>Constraints: A cluster allows a maximum of 10 tags. A tag key must be unique in a cluster. |
| log_collection | No | Integer | Explanation: Whether to collect logs when cluster creation fails.<br>Constraints: N/A<br>Value range:<br>Default value: 1 |
| node_groups | Yes | Array of NodeGroupV2 objects | Explanation: Information about the node groups in the cluster. For details about the parameters, see Table 7.<br>Constraints: N/A |
| bootstrap_scripts | No | Array of BootstrapScript objects | Explanation: Bootstrap action script information. For more parameter description, see Table 9. MRS 3.x does not support this parameter.<br>Constraints: N/A |
| add_jobs | No | Array of add_jobs objects | Explanation: A job can be submitted when the cluster is created. Currently, only one job can be submitted, and only versions earlier than MRS 1.8.7 support this function. You are advised to use the steps parameter of the Creating a Cluster and Submitting a Job API instead. For details about this parameter, see Table 10.<br>Constraints: There must be no more than one record. |
| log_uri | No | String | Explanation: OBS path to which cluster logs are dumped. After the log dump function is enabled, read and write permissions on the OBS path are required to upload logs. Configure the default agency MRS_ECS_DEFAULT_AGENCY or customize an agency with read and write permissions on the OBS path. For details, see Configuring a Storage-Compute Decoupled Cluster (Agency). This parameter is available only for cluster versions that support dumping cluster logs to OBS.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| component_configs | No | Array of ComponentConfig objects | Explanation: Custom configuration of cluster components. This parameter applies only to cluster versions that support creating a cluster with customized component configurations. For details about this parameter, see Table 16 (ComponentConfig).<br>Constraints: The number of records cannot exceed 50. |
| smn_notify | No | SmnNotify object | Explanation: SMN alarm notification. For details about this parameter, see Table 18.<br>Constraints: N/A |
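Before the nested objects are described in the tables that follow, the sketch below shows a request body limited to the mandatory top-level parameters, plus node_root_password, which the example requests supply when login_mode is PASSWORD. All values are adapted from the example requests at the end of this section; the region, AZ, component list, and passwords are placeholders to be replaced with values valid for your environment.

{
  "cluster_version" : "MRS 3.2.0-LTS.1",
  "cluster_name" : "mrs_example",
  "cluster_type" : "ANALYSIS",
  "region" : "<region>",
  "availability_zone" : "<AZ name>",
  "vpc_name" : "vpc-37cd",
  "subnet_name" : "subnet",
  "components" : "Hadoop,Spark2x,Hive",
  "safe_mode" : "KERBEROS",
  "manager_admin_password" : "<your password>",
  "login_mode" : "PASSWORD",
  "node_root_password" : "<your password>",
  "node_groups" : [ {
    "group_name" : "master_node_default_group",
    "node_num" : 2,
    "node_size" : "rc3.4xlarge.4.linux.bigdata",
    "root_volume" : { "type" : "SAS", "size" : 480 },
    "data_volume" : { "type" : "SAS", "size" : 600 },
    "data_volume_count" : 1
  }, {
    "group_name" : "core_node_analysis_group",
    "node_num" : 3,
    "node_size" : "rc3.4xlarge.4.linux.bigdata",
    "root_volume" : { "type" : "SAS", "size" : 480 },
    "data_volume" : { "type" : "SAS", "size" : 600 },
    "data_volume_count" : 1
  } ]
}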
Table 4 ClusterDataConnectorMap

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| map_id | No | Integer | Explanation: Data connection association ID.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| connector_id | No | String | Explanation: Data connection ID.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| component_name | No | String | Explanation: Component name.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| role_type | No | String | Explanation: Component role type.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| source_type | No | String | Explanation: Data connection type.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| cluster_id | No | String | Explanation: ID of the associated cluster.<br>Constraints: N/A<br>Value range: The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-).<br>Default value: N/A |
| status | No | Integer | Explanation: Data connection status.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
Table 5 ChargeInfo

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| charge_mode | Yes | String | Explanation: Billing mode.<br>Constraints: N/A<br>Value range: postPaid or prePaid (yearly/monthly; see period_type and period_num).<br>Default value: N/A |
| period_type | No | String | Explanation: Period type.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| period_num | No | Integer | Explanation: Number of periods.<br>Constraints: This parameter is valid and mandatory only when charge_mode is set to prePaid.<br>Value range:<br>Default value: N/A |
| is_auto_pay | No | Boolean | Explanation: Whether the order is paid automatically. This parameter is available for the yearly/monthly mode. Automatic payment is disabled by default.<br>Constraints: N/A<br>Value range: true or false<br>Default value: N/A |
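For the yearly/monthly mode, a minimal charge_info sketch is shown below. The period_type value "month" is an assumption for illustration, because the enumerated values are not listed above; all example requests in this section use the postPaid mode.

"charge_info" : {
  "charge_mode" : "prePaid",
  "period_type" : "month",
  "period_num" : 1,
  "is_auto_pay" : true
}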
Table 6 Tag

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| key | Yes | String | Explanation: Tag key.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| value | Yes | String | Explanation: Tag value.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
Table 7 NodeGroupV2

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| group_name | Yes | String | Explanation: Node group name.<br>Constraints: N/A<br>Value range: The value can contain a maximum of 64 characters, including uppercase and lowercase letters, digits, and underscores (_). The rules for configuring node groups are as follows:<br>Default value: N/A |
| node_num | Yes | Integer | Explanation: Number of nodes.<br>Constraints: The total number of Core and Task nodes cannot exceed 500.<br>Value range: 0-500<br>Default value: N/A |
| node_size | Yes | String | Explanation: Instance specifications of a node, for example, c3.4xlarge.2.linux.bigdata. The host specifications supported by MRS are determined by CPU, memory, and disk space. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. Obtain the instance specifications of the corresponding version in the corresponding region from the cluster creation page of the MRS management console.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| root_volume | No | Volume object | Explanation: System disk information of the node. This parameter is optional for some VMs or the system disk of a BMS and mandatory in other cases. For details about the parameters, see Table 8.<br>Constraints: N/A |
| data_volume | No | Volume object | Explanation: Data disk information.<br>Constraints: This parameter is mandatory when data_volume_count is not 0. For details about the parameters, see Table 8. |
| data_volume_count | No | Integer | Explanation: Number of data disks of a node.<br>Constraints: N/A<br>Value range: 0-20<br>Default value: N/A |
| charge_info | No | ChargeInfo object | Explanation: Billing type of the node group. The billing types of the Master and Core node groups are the same as that of the cluster; the billing type of the Task node group can be different. For details about the parameters, see Table 5.<br>Constraints: N/A |
| auto_scaling_policy | No | auto_scaling_policy object | Explanation: Auto scaling rule corresponding to the node group. For details about the parameters, see Table 11.<br>Constraints: N/A |
| assigned_roles | No | Array of strings | Explanation: This parameter is mandatory when the cluster type is CUSTOM. It specifies the roles deployed in the node group as an array of strings, each of which is a role expression; the sketch after this table shows the format used in the example requests. For details about available roles, see Roles and components supported by MRS.<br>Constraints: N/A |
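As referenced in the assigned_roles row above, the sketch below shows a node group definition for a CUSTOM cluster. The role expressions follow the format observed in the example requests at the end of this section: a bare role name (for example, DataNode) appears to deploy the role on every node in the group, and a name followed by a colon and node indexes (for example, HSBroker:1,2) restricts it to those nodes. This reading is inferred from the examples rather than stated explicitly above.

"node_groups" : [ {
  "group_name" : "node_group_1",
  "node_num" : 3,
  "node_size" : "rc3.4xlarge.4.linux.bigdata",
  "root_volume" : { "type" : "SAS", "size" : 480 },
  "data_volume" : { "type" : "SAS", "size" : 600 },
  "data_volume_count" : 1,
  "assigned_roles" : [ "DataNode", "NodeManager", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2" ]
} ]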
Table 8 Volume

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| type | Yes | String | Explanation: Disk type.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| size | Yes | Integer | Explanation: Data disk size, in GB.<br>Constraints: N/A<br>Value range: 10-32768<br>Default value: N/A |
Table 9 BootstrapScript

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Explanation: Name of a bootstrap action script.<br>Constraints: N/A<br>Value range: The names of bootstrap action scripts in the same cluster must be unique. The value can contain only digits, letters, spaces, hyphens (-), and underscores (_) and must not start with a space. The value can contain 1 to 64 characters.<br>Default value: N/A |
| uri | Yes | String | Explanation: Path of a bootstrap action script. Set this parameter to an OBS bucket path or a local VM path.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| parameters | No | String | Explanation: Bootstrap action script parameters.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| nodes | Yes | Array of strings | Explanation: Names of the node groups where the bootstrap action script is executed.<br>Constraints: N/A |
| active_master | No | Boolean | Explanation: Whether the bootstrap action script runs only on active Master nodes.<br>Constraints: N/A<br>Value range: true or false<br>Default value: N/A |
| before_component_start | No | Boolean | Explanation: Time when the bootstrap action script is executed. Currently, two options are available: before component start and after component start.<br>Constraints: N/A<br>Value range: true or false<br>Default value: false |
| fail_action | Yes | String | Explanation: Whether to continue executing subsequent scripts and creating the cluster after the bootstrap action script fails to be executed. You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the bootstrap action succeeds.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| start_time | No | Long | Explanation: Execution time of one bootstrap action script.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| state | No | String | Explanation: Running status of one bootstrap action script.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| action_stages | No | Array of strings | Explanation: Select the time when the bootstrap action script is executed.<br>Constraints: Enumerated values: |
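A sketch of a bootstrap_scripts entry follows. The script name and OBS path are hypothetical placeholders (the path format mirrors the exec_scripts example later in this section), and the node group names assume the analysis-cluster example; as noted above, MRS 3.x does not support this parameter.

"bootstrap_scripts" : [ {
  "name" : "install_tools",
  "uri" : "s3a://obs-mrstest/bootstrap/install_tools.sh",
  "parameters" : "",
  "nodes" : [ "master_node_default_group", "core_node_analysis_group" ],
  "active_master" : false,
  "before_component_start" : true,
  "fail_action" : "continue"
} ]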
Table 10 add_jobs

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| job_type | Yes | Integer | Explanation: Job type code.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| job_name | Yes | String | Explanation: Job name.<br>Constraints: N/A<br>Value range: The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-). Identical job names are allowed but not recommended.<br>Default value: N/A |
| jar_path | No | String | Explanation: Path of the .jar file or .sql file to be executed.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| arguments | No | String | Explanation: Key parameters for program execution. The parameters are specified by the function of the user's program; MRS is only responsible for loading them.<br>Constraints: N/A<br>Value range: The value can contain 0 to 150,000 characters. Special characters (;\|&>'<$) are not allowed.<br>Default value: N/A |
| input | No | String | Explanation: Address for inputting data. Files can be stored in HDFS or OBS; the path format varies depending on the file system.<br>Constraints: N/A<br>Value range: The value can contain a maximum of 1023 characters, cannot contain special characters (;\|&>'<$), and can be left blank.<br>Default value: N/A |
| output | No | String | Explanation: Address for outputting data. Files can be stored in HDFS or OBS; the path format varies depending on the file system. If the specified path does not exist, the system automatically creates it.<br>Constraints: N/A<br>Value range: The value can contain a maximum of 1023 characters, cannot contain special characters (;\|&>'<$), and can be left blank.<br>Default value: N/A |
| job_log | No | String | Explanation: Path for storing job logs that record the job running status. Files can be stored in HDFS or OBS; the path format varies depending on the file system.<br>Constraints: N/A<br>Value range: The value can contain a maximum of 1023 characters, cannot contain special characters (;\|&>'<$), and can be left blank.<br>Default value: N/A |
| shutdown_cluster | No | Boolean | Explanation: Whether to delete the cluster after the job execution is complete.<br>Constraints: N/A<br>Value range: true or false<br>Default value: N/A |
| file_action | No | String | Explanation: Action to be performed on a file.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| submit_job_once_cluster_run | Yes | Boolean | Explanation: Whether to submit a job when creating the cluster. Set this parameter to true when a job is submitted during cluster creation.<br>Constraints: N/A<br>Value range: true or false<br>Default value: N/A |
| hql | No | String | Explanation: HiveQL statement.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| hive_script_path | No | String | Explanation: SQL program path. This parameter is required only by Spark Script and Hive Script jobs.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
Table 11 auto_scaling_policy

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| auto_scaling_enable | Yes | Boolean | Explanation: Whether to enable the auto scaling rule.<br>Constraints: N/A<br>Value range: true or false<br>Default value: N/A |
| min_capacity | Yes | Integer | Explanation: Minimum number of nodes left in the node group.<br>Constraints: N/A<br>Value range: 0-500<br>Default value: N/A |
| max_capacity | Yes | Integer | Explanation: Maximum number of nodes in the node group.<br>Constraints: N/A<br>Value range: 0-500<br>Default value: N/A |
| resources_plans | No | Array of resources_plan objects | Explanation: Resource plan list. For details, see Table 12. If this parameter is left blank, the resource plan is disabled.<br>Constraints: When auto scaling is enabled, either a resource plan or an auto scaling rule must be configured. |
| exec_scripts | No | Array of scale_script objects | Explanation: List of custom scaling automation scripts. For details, see Table 13. If this parameter is left blank, hook scripts are disabled. This parameter is not available in the V2 APIs for creating and updating auto scaling policies.<br>Constraints: The number of records cannot exceed 10. |
| rules | No | Array of rules objects | Explanation: List of auto scaling rules. For details, see Table 14.<br>Constraints: When auto scaling is enabled, either a resource plan or an auto scaling rule must be configured. The number of records cannot exceed 10. |
Table 12 resources_plan

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| period_type | Yes | String | Explanation: Cycle type of a resource plan. Currently, this parameter can be set to daily only.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| start_time | Yes | String | Explanation: Start time of a resource plan, in the format hour:minute, ranging from 00:00 to 23:59.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| end_time | Yes | String | Explanation: End time of a resource plan, in the same format as start_time.<br>Constraints: The value cannot be earlier than start_time, and the interval between end_time and start_time cannot be less than 30 minutes.<br>Value range: N/A<br>Default value: N/A |
| min_capacity | Yes | Integer | Explanation: Minimum number of nodes preserved in a node group in a resource plan.<br>Constraints: N/A<br>Value range: 0-500<br>Default value: N/A |
| max_capacity | Yes | Integer | Explanation: Maximum number of nodes preserved in a node group in a resource plan.<br>Constraints: N/A<br>Value range: 0-500<br>Default value: N/A |
| effective_days | No | Array of strings | Explanation: Effective dates of a resource plan. If this parameter is left blank, the resource plan takes effect every day. The options are MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, and SUNDAY.<br>Constraints: N/A |
Table 13 scale_script

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Explanation: Name of a custom scaling automation script.<br>Constraints: N/A<br>Value range: The names in the same cluster must be unique. The value can contain only digits, letters, spaces, hyphens (-), and underscores (_) and must not start with a space. The value can contain 1 to 64 characters.<br>Default value: N/A |
| uri | Yes | String | Explanation: Path of a custom automation script. Set this parameter to an OBS bucket path or a local VM path.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| parameters | No | String | Explanation: Parameters of a custom automation script.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| nodes | Yes | List<String> | Explanation: Type of the nodes where the custom automation script is executed. The node type can be Master, Core, or Task.<br>Constraints: N/A |
| active_master | No | Boolean | Explanation: Whether the custom automation script runs only on the active Master node.<br>Constraints: N/A<br>Value range: true or false<br>Default value: false |
| action_stage | Yes | String | Explanation: Time when a script is executed.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| fail_action | Yes | String | Explanation: Whether to continue executing subsequent scripts and creating the cluster after the custom automation script fails to be executed. You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the custom automation script succeeds.<br>Constraints: The scale-in operation cannot be undone, so fail_action must be set to continue for scripts executed after scale-in.<br>Value range:<br>Default value: N/A |
Table 14 rules

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Explanation: Name of an auto scaling rule.<br>Constraints: N/A<br>Value range: The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-). Rule names must be unique in a node group.<br>Default value: N/A |
| description | No | String | Explanation: Description of an auto scaling rule.<br>Constraints: N/A<br>Value range: The value can contain 0 to 1024 characters.<br>Default value: N/A |
| adjustment_type | Yes | String | Explanation: Adjustment type of an auto scaling rule.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| cool_down_minutes | Yes | Integer | Explanation: Cluster cooldown time, in minutes, after an auto scaling rule is triggered, during which no auto scaling operation is performed.<br>Constraints: N/A<br>Value range: 0 to 10080. 10080 is the number of minutes in a week.<br>Default value: N/A |
| scaling_adjustment | Yes | Integer | Explanation: Number of nodes that can be adjusted at a time.<br>Constraints: N/A<br>Value range: 1-100<br>Default value: N/A |
| trigger | Yes | Trigger object | Explanation: Condition for triggering the rule. For details, see Table 15.<br>Constraints: N/A |
Table 15 Trigger

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| metric_name | Yes | String | Explanation: Metric name. The triggering condition is evaluated based on the value of this metric.<br>Constraints: N/A<br>Value range: The value can contain 0 to 64 characters.<br>Default value: N/A |
| metric_value | Yes | String | Explanation: Metric threshold that triggers the rule.<br>Constraints: N/A<br>Value range: Only integers or numbers with two decimal places are allowed.<br>Default value: N/A |
| comparison_operator | No | String | Explanation: Metric judgment logical operator.<br>Constraints: N/A<br>Value range:<br>Default value: N/A |
| evaluation_periods | Yes | Integer | Explanation: Number of consecutive five-minute periods during which the metric threshold is reached.<br>Constraints: N/A<br>Value range: 1-288<br>Default value: N/A |
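The sketch below combines Table 14 and Table 15 into a single scale-out rule, with JSON types matching the tables (scaling_adjustment and evaluation_periods as integers, metric_value as a string). The values are adapted from the example requests at the end of this section, where some of these fields appear with different JSON types.

"rules" : [ {
  "name" : "default-expand-1",
  "description" : "",
  "adjustment_type" : "scale_out",
  "cool_down_minutes" : 5,
  "scaling_adjustment" : 1,
  "trigger" : {
    "metric_name" : "YARNAppRunning",
    "metric_value" : "100",
    "comparison_operator" : "GTOE",
    "evaluation_periods" : 1
  }
} ]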
Table 16 ComponentConfig

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| component_name | Yes | String | Explanation: Component name.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| configs | No | Array of Config objects | Explanation: List of component configuration items. For details about this parameter, see Table 17.<br>Constraints: The number of records cannot exceed 100. |
Table 17 Config

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| key | Yes | String | Explanation: The configuration name. Only the configuration names displayed on the MRS component configuration page are supported.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| value | Yes | String | Explanation: Configuration value.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
| config_file_name | Yes | String | Explanation: Configuration file name. Only the file names displayed on the MRS component configuration page are supported.<br>Constraints: N/A<br>Value range: N/A<br>Default value: N/A |
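A sketch of a component_configs entry follows. The configuration key, value, and file name are placeholders; only the names displayed on the MRS component configuration page are accepted, as noted in Table 17.

"component_configs" : [ {
  "component_name" : "Hive",
  "configs" : [ {
    "key" : "<configuration name shown on the component configuration page>",
    "value" : "<configuration value>",
    "config_file_name" : "<configuration file name shown on the component configuration page>"
  } ]
} ]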
Table 18 SmnNotify

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| topic_urn | No | String | Explanation: SMN topic URN.<br>Constraints: This parameter is mandatory if alarm subscription needs to be enabled.<br>Value range: N/A<br>Default value: N/A |
| subscription_name | No | String | Explanation: Subscription rule name.<br>Constraints: N/A<br>Value range: N/A<br>Default value: default_alert_rule |
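A sketch of an smn_notify configuration follows. The topic URN is a placeholder; supply the URN of an existing SMN topic, which is mandatory if alarm subscription needs to be enabled.

"smn_notify" : {
  "topic_urn" : "<URN of an existing SMN topic>",
  "subscription_name" : "default_alert_rule"
}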
Response Parameters
Status code: 200
| Parameter | Type | Description |
|---|---|---|
| cluster_id | String | Explanation: Cluster ID, which is returned by the system after the cluster is created.<br>Value range: N/A |
Example Request
- Create an MRS 3.2.0-LTS.1 cluster for analysis. There are a Master node group with two nodes, a Core node group with three nodes, and a Task node group with three nodes. Autoscaling is enabled from 12:00 to 13:00 every Monday.
POST /v2/{project_id}/clusters { "cluster_version" : "MRS 3.2.0-LTS.1", "cluster_name" : "mrs_DyJA_dm", "cluster_type" : "ANALYSIS", "charge_info" : { "charge_mode" : "postPaid" }, "region" : "", "availability_zone" : "", "vpc_name" : "vpc-37cd", "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e", "subnet_name" : "subnet", "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Flink,Oozie,Ranger,Tez", "safe_mode" : "KERBEROS", "manager_admin_password" : "your password", "login_mode" : "PASSWORD", "node_root_password" : "your password", "log_collection" : 1, "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY", "tags" : [ { "key" : "tag1", "value" : "111" }, { "key" : "tag2", "value" : "222" } ], "node_groups" : [ { "group_name" : "master_node_default_group", "node_num" : 2, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "core_node_analysis_group", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "task_node_analysis_group", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "auto_scaling_policy" : { "auto_scaling_enable" : true, "min_capacity" : 0, "max_capacity" : 1, "resources_plans" : [ { "period_type" : "daily", "start_time" : "12:00", "end_time" : "13:00", "min_capacity" : 2, "max_capacity" : 3, "effective_days" : [ "MONDAY" ] } ], "exec_scripts" : [ { "name" : "test", "uri" : "s3a://obs-mrstest/bootstrap/basic_success.sh", "parameters" : "", "nodes" : [ "master_node_default_group", "core_node_analysis_group", "task_node_analysis_group" ], "active_master" : false, "action_stage" : "before_scale_out", "fail_action" : "continue" } ], "rules" : [ { "name" : "default-expand-1", "description" : "", "adjustment_type" : "scale_out", "cool_down_minutes" : 5, "scaling_adjustment" : "1", "trigger" : { "metric_name" : "YARNAppRunning", "metric_value" : 100, "comparison_operator" : "GTOE", "evaluation_periods" : "1" } } ] } } ] }
- Create an MRS 3.2.0-LTS.1 cluster for stream analysis. There are a Master node group with two nodes, a Core node group with three nodes, and a Task node group with no node. Autoscaling is enabled from 12:00 to 13:00 every Monday.
POST /v2/{project_id}/clusters { "cluster_version" : "MRS 3.2.0-LTS.1", "cluster_name" : "mrs_Dokle_dm", "cluster_type" : "STREAMING", "charge_info" : { "charge_mode" : "postPaid" }, "region" : "", "availability_zone" : "", "vpc_name" : "vpc-37cd", "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e", "subnet_name" : "subnet", "components" : "Storm,Kafka,Flume,Ranger", "safe_mode" : "KERBEROS", "manager_admin_password" : "your password", "login_mode" : "PASSWORD", "node_root_password" : "your password", "log_collection" : 1, "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY", "tags" : [ { "key" : "tag1", "value" : "111" }, { "key" : "tag2", "value" : "222" } ], "node_groups" : [ { "group_name" : "master_node_default_group", "node_num" : 2, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "core_node_streaming_group", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "task_node_streaming_group", "node_num" : 0, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "auto_scaling_policy" : { "auto_scaling_enable" : true, "min_capacity" : 0, "max_capacity" : 1, "resources_plans" : [ { "period_type" : "daily", "start_time" : "12:00", "end_time" : "13:00", "min_capacity" : 2, "max_capacity" : 3, "effective_days" : [ "MONDAY" ] } ], "rules" : [ { "name" : "default-expand-1", "description" : "", "adjustment_type" : "scale_out", "cool_down_minutes" : 5, "scaling_adjustment" : "1", "trigger" : { "metric_name" : "StormSlotAvailablePercentage", "metric_value" : 100, "comparison_operator" : "LTOE", "evaluation_periods" : "1" } } ] } } ] }
- Create an MRS 3.2.0-LTS.1 cluster for hybrid analysis. There are a Master node group with two nodes, two Core node groups with three nodes each, and two Task node groups, one with one node and the other with no nodes.
POST /v2/{project_id}/clusters { "cluster_version" : "MRS 3.2.0-LTS.1", "cluster_name" : "mrs_onmm_dm", "cluster_type" : "MIXED", "charge_info" : { "charge_mode" : "postPaid" }, "region" : "", "availability_zone" : "", "vpc_name" : "vpc-37cd", "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e", "subnet_name" : "subnet", "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez", "safe_mode" : "KERBEROS", "manager_admin_password" : "your password", "login_mode" : "PASSWORD", "node_root_password" : "your password", "log_collection" : 1, "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY", "tags" : [ { "key" : "tag1", "value" : "111" }, { "key" : "tag2", "value" : "222" } ], "node_groups" : [ { "group_name" : "master_node_default_group", "node_num" : 2, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "core_node_streaming_group", "node_num" : 3, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "core_node_analysis_group", "node_num" : 3, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "task_node_analysis_group", "node_num" : 1, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 }, { "group_name" : "task_node_streaming_group", "node_num" : 0, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1 } ] }
- Create a cluster where custom management nodes and control nodes are deployed on the same nodes. The cluster version is MRS 3.2.0-LTS.1. There are a Master node group with three nodes and two Core node groups, one with three nodes and the other with one node.
POST /v2/{project_id}/clusters { "cluster_version" : "MRS 3.2.0-LTS.1", "cluster_name" : "mrs_heshe_dm", "cluster_type" : "CUSTOM", "charge_info" : { "charge_mode" : "postPaid" }, "region" : "", "availability_zone" : "", "vpc_name" : "vpc-37cd", "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e", "subnet_name" : "subnet", "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,HetuEngine,Ranger,Tez,ZooKeeper,ClickHouse", "safe_mode" : "KERBEROS", "manager_admin_password" : "your password", "login_mode" : "PASSWORD", "node_root_password" : "your password", "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY", "template_id" : "mgmt_control_combined_v2", "log_collection" : 1, "tags" : [ { "key" : "tag1", "value" : "111" }, { "key" : "tag2", "value" : "222" } ], "node_groups" : [ { "group_name" : "master_node_default_group", "node_num" : 3, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:1,2", "KerberosServer:1,2", "KerberosAdmin:1,2", "quorumpeer:1,2,3", "NameNode:2,3", "Zkfc:2,3", "JournalNode:1,2,3", "ResourceManager:2,3", "JobHistoryServer:2,3", "DBServer:1,3", "Hue:1,3", "LoaderServer:1,3", "MetaStore:1,2,3", "WebHCat:1,2,3", "HiveServer:1,2,3", "HMaster:2,3", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3", "JobHistory2x:2,3", "SparkResource2x:1,2,3", "oozie:2,3", "LoadBalancer:2,3", "TezUI:1,3", "TimelineServer:3", "RangerAdmin:1,2", "UserSync:2", "TagSync:2", "KerberosClient", "SlapdClient", "meta", "HSConsole:2,3", "FlinkResource:1,2,3", "DataNode:1,2,3", "NodeManager:1,2,3", "IndexServer2x:1,2", "ThriftServer:1,2,3", "RegionServer:1,2,3", "ThriftServer1:1,2,3", "RESTServer:1,2,3", "Broker:1,2,3", "Supervisor:1,2,3", "Logviewer:1,2,3", "Flume:1,2,3", "HSBroker:1,2,3" ] }, { "group_name" : "node_group_1", "node_num" : 3, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource" ] }, { "group_name" : "node_group_2", "node_num" : 1, "node_size" : "Sit3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "NodeManager", "KerberosClient", "SlapdClient", "meta", "FlinkResource" ] } ] }
- Create a cluster where custom management nodes and control nodes are deployed on separate nodes. The cluster version is MRS 3.2.0-LTS.1. There are a Master node group with five nodes and a Core node group with three nodes.
POST /v2/{project_id}/clusters { "cluster_version" : "MRS 3.2.0-LTS.1", "cluster_name" : "mrs_jdRU_dm01", "cluster_type" : "CUSTOM", "charge_info" : { "charge_mode" : "postPaid" }, "region" : "", "availability_zone" : "", "vpc_name" : "vpc-37cd", "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e", "subnet_name" : "subnet", "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,HetuEngine,Ranger,Tez,Ranger,Tez,ZooKeeper,ClickHouse", "safe_mode" : "KERBEROS", "manager_admin_password" : "your password", "login_mode" : "PASSWORD", "node_root_password" : "your password", "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY", "log_collection" : 1, "template_id" : "mgmt_control_separated_v2", "tags" : [ { "key" : "aaa", "value" : "111" }, { "key" : "bbb", "value" : "222" } ], "node_groups" : [ { "group_name" : "master_node_default_group", "node_num" : 5, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:3,4", "KerberosServer:3,4", "KerberosAdmin:3,4", "quorumpeer:3,4,5", "NameNode:4,5", "Zkfc:4,5", "JournalNode:1,2,3,4,5", "ResourceManager:4,5", "JobHistoryServer:4,5", "DBServer:3,5", "Hue:1,2", "LoaderServer:1,2", "MetaStore:1,2,3,4,5", "WebHCat:1,2,3,4,5", "HiveServer:1,2,3,4,5", "HMaster:4,5", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3,4,5", "JobHistory2x:4,5", "SparkResource2x:1,2,3,4,5", "oozie:1,2", "LoadBalancer:1,2", "TezUI:1,2", "TimelineServer:5", "RangerAdmin:1,2", "KerberosClient", "SlapdClient", "meta", "HSConsole:1,2", "FlinkResource:1,2,3,4,5", "DataNode:1,2,3,4,5", "NodeManager:1,2,3,4,5", "IndexServer2x:1,2", "ThriftServer:1,2,3,4,5", "RegionServer:1,2,3,4,5", "ThriftServer1:1,2,3,4,5", "RESTServer:1,2,3,4,5", "Broker:1,2,3,4,5", "Supervisor:1,2,3,4,5", "Logviewer:1,2,3,4,5", "Flume:1,2,3,4,5", "HBaseIndexer:1,2,3,4,5", "TagSync:1", "UserSync:1" ] }, { "group_name" : "node_group_1", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource" ] } ] }
- Create a cluster where data nodes are deployed separately from other nodes. The cluster version is MRS 3.2.0-LTS.1. There are a Master node group with nine nodes and four Core node groups with three nodes in each group.
POST /v2/{project_id}/clusters { "cluster_version" : "MRS 3.2.0-LTS.1", "cluster_name" : "mrs_jdRU_dm02", "cluster_type" : "CUSTOM", "charge_info" : { "charge_mode" : "postPaid" }, "region" : "", "availability_zone" : "", "vpc_name" : "vpc-37cd", "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e", "subnet_name" : "subnet", "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,Ranger,Tez,Ranger,Tez,ZooKeeper,ClickHouse", "safe_mode" : "KERBEROS", "manager_admin_password" : "your password", "login_mode" : "PASSWORD", "node_root_password" : "your password", "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY", "template_id" : "mgmt_control_data_separated_v2", "log_collection" : 1, "tags" : [ { "key" : "aaa", "value" : "111" }, { "key" : "bbb", "value" : "222" } ], "node_groups" : [ { "group_name" : "master_node_default_group", "node_num" : 9, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:5,6", "KerberosServer:5,6", "KerberosAdmin:5,6", "quorumpeer:5,6,7,8,9", "NameNode:3,4", "Zkfc:3,4", "JournalNode:5,6,7", "ResourceManager:8,9", "JobHistoryServer:8", "DBServer:8,9", "Hue:8,9", "FlinkResource:3,4", "LoaderServer:3,5", "MetaStore:8,9", "WebHCat:5", "HiveServer:8,9", "HMaster:8,9", "FTP-Server:3,4", "MonitorServer:3,4", "Nimbus:8,9", "UI:8,9", "JDBCServer2x:8,9", "JobHistory2x:8,9", "SparkResource2x:5,6,7", "oozie:4,5", "EsMaster:7,8,9", "LoadBalancer:8,9", "TezUI:5,6", "TimelineServer:5", "RangerAdmin:4,5", "UserSync:5", "TagSync:5", "KerberosClient", "SlapdClient", "meta", "HSBroker:5", "HSConsole:3,4", "FlinkResource:3,4" ] }, { "group_name" : "node_group_1", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "GraphServer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2" ] }, { "group_name" : "node_group_2", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "HBaseIndexer", "SolrServer[3]", "EsNode[2]", "KerberosClient", "SlapdClient", "meta", "SolrServerAdmin:1,2" ] }, { "group_name" : "node_group_3", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "Redis[2]", "KerberosClient", "SlapdClient", "meta" ] }, { "group_name" : "node_group_4", "node_num" : 3, "node_size" : "rc3.4xlarge.4.linux.bigdata", "root_volume" : { "type" : "SAS", "size" : 480 }, "data_volume" : { "type" : "SAS", "size" : 600 }, "data_volume_count" : 1, "assigned_roles" : [ "Broker", "Supervisor", "Logviewer", "KerberosClient", "SlapdClient", "meta" ] } ] }
Example Response
- Example of a successful response
{ "cluster_id": "da1592c2-bb7e-468d-9ac9-83246e95447a" }
- Example of a failed response
{ "error_code": "MRS.0002", "error_msg": "The parameter is invalid." }
Status Codes
See Status Codes.
Error Codes
See Error Codes.