Creating Clusters
Function
This API is used to create an MRS cluster.
Before using the API, you need to obtain the resources listed in Table 1.
Table 1 Obtaining resources

Resource | How to Obtain
---|---
VPC | See operation instructions in VPC > Querying VPCs and VPC > Creating a VPC in the VPC API Reference.
Subnet | See operation instructions in Subnet > Querying Subnets and Subnet > Creating a Subnet in the VPC API Reference.
Key Pair | See operation instructions in ECS SSH Key Management > Querying SSH Key Pairs and ECS SSH Key Management > Creating and Importing an SSH Key Pair in the ECS API Reference.
Zone | Obtain the region and AZ information by referring to Regions and Endpoints.
Version | Currently, MRS 1.8.9, 2.0.1, 3.1.0-LTS.1, and 3.1.2-LTS.3 are supported.
Component | See the component list of each version in Table 4-1.
URI
- Format
- Parameters

Table 2 URI parameter

Parameter | Mandatory | Description
---|---|---
project_id | Yes | Project ID. For details on how to obtain the project ID, see Obtaining a Project ID.
Request

Table 3 Request parameters

Parameter | Mandatory | Type | Description
---|---|---|---
cluster_version | Yes | String | Cluster version. Currently, MRS 1.8.9, MRS 2.0.1, MRS 3.1.0-LTS.1, and MRS 3.1.2-LTS.3 are supported (see Table 1).
cluster_name | Yes | String | Cluster name. It must be unique. A cluster name can contain only 2 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.
cluster_type | Yes | String | Cluster type. The options are ANALYSIS, STREAMING, MIXED, and CUSTOM, as shown in the request examples.
region | Yes | String | Region of the cluster. Obtain it by referring to Regions and Endpoints.
vpc_name | Yes | String | Name of the VPC where the subnet is located. To obtain it, log in to the VPC management console and copy the VPC name from the list on the Virtual Private Cloud page.
subnet_name | Yes | String | Subnet name. To obtain it, log in to the VPC management console and copy the subnet name of the VPC from the list on the Virtual Private Cloud page.
components | Yes | String | List of component names, separated by commas (,). For details about the component names, see the component list of each version in Table 4-1.
availability_zone | Yes | String | AZ name. Multi-AZ clusters are not supported. Obtain the AZ by referring to Regions and Endpoints.
security_groups_id | No | String | Security group ID of the cluster.
safe_mode | Yes | String | Running mode of an MRS cluster. The options are SIMPLE (normal mode) and KERBEROS (security mode).
manager_admin_password | Yes | String | Password of the MRS Manager administrator. The password must meet the system password complexity requirements.
login_mode | Yes | String | Node login mode. The options are PASSWORD and KEYPAIR.
node_root_password | No | String | Password of user root for logging in to a cluster node. The password must meet the system password complexity requirements.
node_keypair_name | No | String | Name of a key pair. You can use a key pair to log in to the Master node in the cluster.
eip_address | No | String | EIP bound to the MRS cluster for accessing MRS Manager. The EIP must have been created and must be in the same region as the cluster.
eip_id | No | String | ID of the bound EIP. This parameter is mandatory when eip_address is configured. To obtain the EIP ID, log in to the VPC console, choose Network > Elastic IP and Bandwidth > Elastic IP, click the EIP to be bound, and obtain the ID in the Basic Information area.
mrs_ecs_default_agency | No | String | Name of the agency bound to a cluster node by default. The value is fixed to MRS_ECS_DEFAULT_AGENCY. An agency allows ECS or BMS to manage MRS resources. You can configure an agency of the ECS type to automatically obtain the AK/SK to access OBS. The MRS_ECS_DEFAULT_AGENCY agency has the OperateAccess permission of OBS and the CES FullAccess (for users who have enabled fine-grained policies), CES Administrator, and KMS Administrator permissions in the region where the cluster is located.
template_id | No | String | Template used for node deployment when the cluster type is CUSTOM. The options are mgmt_control_combined_v2 (management and control roles co-deployed on Master nodes), mgmt_control_separated_v2 (management and control roles deployed on separate Master nodes), and mgmt_control_data_separated_v2 (data roles deployed in separate node groups), as shown in the request examples.
tags | No | Array of Tag | Cluster tags. For more parameter description, see Table 4. A maximum of 10 tags can be added to a cluster.
log_collection | No | Integer | Whether to collect logs when cluster creation fails. The default value is 1, indicating that OBS buckets will be created and used only to collect logs that record MRS cluster creation failures.
node_groups | Yes | Array of NodeGroup | Information about the node groups in the cluster. For details about the parameters, see Table 5.
bootstrap_scripts | No | Array of BootstrapScript | Bootstrap action script information. For more parameter description, see Table 7. MRS 3.x does not support this parameter.
add_jobs | No | Array of AddJobReq | Jobs can be submitted when a cluster is created. Currently, only one job can be created. For details about job parameters, see Table 8. MRS 3.x does not support this parameter.
Table 4 Tag parameters

Parameter | Mandatory | Type | Description
---|---|---|---
key | Yes | String | Tag key.
value | Yes | String | Tag value.
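For example, two cluster tags (the keys and values below are placeholders) would be passed in the request body as follows:

```json
"tags": [
  { "key": "department", "value": "analytics" },
  { "key": "owner", "value": "data-team" }
]
```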
Table 5 NodeGroup parameters

Parameter | Mandatory | Type | Description
---|---|---|---
group_name | Yes | String | Node group name. The value can contain a maximum of 64 characters, including uppercase and lowercase letters, digits, and underscores (_). Fixed group names are used for non-CUSTOM clusters, for example, master_node_default_group for Master nodes, core_node_analysis_group and core_node_streaming_group for Core nodes, and task_node_analysis_group and task_node_streaming_group for Task nodes, as shown in the request examples.
node_num | Yes | Integer | Number of nodes. The value ranges from 0 to 500. The maximum number of Core and Task nodes is 500.
node_size | Yes | String | Instance specifications of a node, for example, c6.4xlarge.4.linux.mrs. MRS supports host specifications determined by CPU, memory, and disk space. You are advised to obtain the value of this parameter from the cluster creation page on the MRS console.
root_volume | No | Volume | System disk information of the node. This parameter is optional for some VMs or the system disk of a BMS; it is mandatory in other cases. For details about the parameters, see Table 6.
data_volume | No | Volume | Data disk information. This parameter is mandatory when data_volume_count is not 0. For details about the parameters, see Table 6.
data_volume_count | No | Integer | Number of data disks of a node. Value range: 0 to 10.
auto_scaling_policy | No | AutoScalingPolicy | Auto scaling rule corresponding to the node group. For details about the parameters, see Table 9.
assigned_roles | No | Array of String | This parameter is mandatory when the cluster type is CUSTOM. It specifies the roles deployed in a node group as an array of strings, each of which is a role expression. A role expression is a role name, optionally followed by a colon and the indexes of the nodes in the group that run the role, for example, DataNode (all nodes in the group) or NameNode:1,2 (nodes 1 and 2), as shown in the request examples. For details about available roles, see Roles and components supported by MRS.
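Putting Table 5 together, a single node_groups entry for a Core node group might look like the following sketch; the group name, specification, and disk values are borrowed from the request examples later in this section:

```json
{
  "group_name": "core_node_analysis_group",
  "node_num": 3,
  "node_size": "rc3.4xlarge.4.linux.bigdata",
  "root_volume": { "type": "SAS", "size": 480 },
  "data_volume": { "type": "SAS", "size": 600 },
  "data_volume_count": 1
}
```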
Table 6 Volume parameters

Parameter | Mandatory | Type | Description
---|---|---|---
type | Yes | String | Disk type. The following disk types are supported: SATA (common I/O), SAS (high I/O), and SSD (ultra-high I/O).
size | Yes | Integer | Disk size, in GB. The value ranges from 10 to 32768.
Table 7 BootstrapScript parameters

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Name of a bootstrap action script. It must be unique in a cluster. The value can contain 1 to 64 characters, including only digits, letters, spaces, hyphens (-), and underscores (_), and must not start with a space.
uri | Yes | String | Path of a bootstrap action script. Set this parameter to an OBS bucket path or a local VM path.
parameters | No | String | Bootstrap action script parameters.
nodes | Yes | Array of String | Type of the nodes where the bootstrap action script is executed. The value can be Master, Core, or Task.
active_master | No | Boolean | Whether the bootstrap action script runs only on active Master nodes. The default value is false, indicating that the script can run on all Master nodes.
before_component_start | No | Boolean | Time when the bootstrap action script is executed. The two options are before component start and after component start. The default value is false, indicating that the script is executed after the components are started.
fail_action | Yes | String | Whether to continue executing subsequent scripts and creating the cluster after the bootstrap action script fails to execute. The default value is errorout, indicating that the action stops. NOTE: You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the bootstrap action succeeds.
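As a minimal sketch of Table 7, a bootstrap_scripts entry might look like the following; the script name, OBS path, and parameters are hypothetical placeholders, and MRS 3.x clusters do not support this parameter:

```json
"bootstrap_scripts": [
  {
    "name": "install-monitoring-agent",
    "uri": "s3a://example-bucket/scripts/install_agent.sh",
    "parameters": "-v",
    "nodes": ["Master", "Core"],
    "active_master": false,
    "before_component_start": false,
    "fail_action": "continue"
  }
]
```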
Table 8 AddJobReq parameters

Parameter | Mandatory | Type | Description
---|---|---|---
job_type | Yes | Integer | Job type code.
job_name | Yes | String | Job name. It contains 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed. NOTE: Identical job names are allowed but not recommended.
jar_path | No | String | Path of the JAR or SQL file for program execution.
arguments | No | String | Key parameters for program execution. The parameters are specified by the function of the user's program; MRS is only responsible for loading them. The value contains a maximum of 2047 characters, cannot contain special characters such as ;\|&>'<$, and can be left blank.
input | No | String | Address for inputting data. Files can be stored in HDFS or OBS; the path varies depending on the file system. The value contains a maximum of 1023 characters, cannot contain special characters such as ;\|&>'<$, and can be left blank.
output | No | String | Address for outputting data. Files can be stored in HDFS or OBS; the path varies depending on the file system. If the specified path does not exist, the system automatically creates it. The value contains a maximum of 1023 characters, cannot contain special characters such as ;\|&>'<$, and can be left blank.
job_log | No | String | Path for storing job logs that record the job running status. Files can be stored in HDFS or OBS; the path varies depending on the file system. The value contains a maximum of 1023 characters, cannot contain special characters such as ;\|&>'<$, and can be left blank.
shutdown_cluster | No | Bool | Whether to delete the cluster after the job execution is complete.
file_action | No | String | Data import and export. The options are import and export.
submit_job_once_cluster_run | Yes | Bool | Whether to submit the job during cluster creation. Set this parameter to true in this example.
hql | No | String | HiveQL statement.
hive_script_path | Yes | String | SQL program path. This parameter is needed by Spark Script and Hive Script jobs only.
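As a sketch of Table 8, an add_jobs entry for a Hive Script job might look like the following; the job type code 3 is assumed to denote Hive Script, the paths are hypothetical placeholders, and MRS 3.x clusters do not support this parameter:

```json
"add_jobs": [
  {
    "job_type": 3,
    "job_name": "hive_demo_job",
    "hive_script_path": "s3a://example-bucket/scripts/demo.sql",
    "shutdown_cluster": false,
    "submit_job_once_cluster_run": true
  }
]
```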
Table 9 AutoScalingPolicy parameters

Parameter | Mandatory | Type | Description
---|---|---|---
auto_scaling_enable | Yes | Boolean | Whether to enable the auto scaling rule.
min_capacity | Yes | Integer | Minimum number of nodes left in the node group. Value range: 0 to 500.
max_capacity | Yes | Integer | Maximum number of nodes in the node group. Value range: 0 to 500.
resources_plans | No | List | Resource plan list. For details, see Table 10. If this parameter is left blank, the resource plan is disabled. When auto scaling is enabled, either a resource plan or an auto scaling rule must be configured.
exec_scripts | No | List | List of custom scaling automation scripts. For details, see Table 11. If this parameter is left blank, a hook script is disabled.
rules | No | List | List of auto scaling rules. For details, see Table 12. When auto scaling is enabled, either a resource plan or an auto scaling rule must be configured.
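Combining Table 9 with Tables 12 and 13, an auto_scaling_policy that scales a Task node group out when available YARN memory runs low might be sketched as follows; the capacities and threshold are illustrative only, and the LTOE operator follows the request examples later in this section:

```json
"auto_scaling_policy": {
  "auto_scaling_enable": true,
  "min_capacity": 1,
  "max_capacity": 10,
  "rules": [
    {
      "name": "memory-expand-1",
      "description": "Scale out when available YARN memory is low",
      "adjustment_type": "scale_out",
      "cool_down_minutes": 20,
      "scaling_adjustment": 1,
      "trigger": {
        "metric_name": "YARNMemoryAvailablePercentage",
        "metric_value": "25",
        "comparison_operator": "LTOE",
        "evaluation_periods": 3
      }
    }
  ]
}
```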
Table 10 ResourcesPlan parameters

Parameter | Mandatory | Type | Description
---|---|---|---
period_type | Yes | String | Cycle type of a resource plan. Currently, only the following cycle type is supported: daily.
start_time | Yes | String | Start time of a resource plan. The value is in the format of hour:minute, ranging from 0:00 to 23:59.
end_time | Yes | String | End time of a resource plan. The value is in the same format as start_time. The interval between end_time and start_time must be greater than or equal to 30 minutes.
min_capacity | Yes | Integer | Minimum number of nodes preserved in the node group in a resource plan. Value range: 0 to 500.
max_capacity | Yes | Integer | Maximum number of nodes preserved in the node group in a resource plan. Value range: 0 to 500.
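For instance, a daily resource plan that keeps the node group between 2 and 5 nodes during working hours might be sketched as follows (the daily cycle type follows Table 10; the times and capacities are placeholders):

```json
"resources_plans": [
  {
    "period_type": "daily",
    "start_time": "09:00",
    "end_time": "18:00",
    "min_capacity": 2,
    "max_capacity": 5
  }
]
```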
Table 11 exec_scripts parameters

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Name of a custom automation script. It must be unique in the same cluster. The value can contain 1 to 64 characters, including only digits, letters, spaces, hyphens (-), and underscores (_), and must not start with a space.
uri | Yes | String | Path of a custom automation script. Set this parameter to an OBS bucket path or a local VM path.
parameters | No | String | Parameters of a custom automation script.
nodes | Yes | List<String> | Type of the nodes where the custom automation script is executed. The node type can be Master, Core, or Task.
active_master | No | Boolean | Whether the custom automation script runs only on the active Master node. The default value is false, indicating that the script can run on all Master nodes.
action_stage | Yes | String | Time when a script is executed. The four options cover running the script before or after scale-out and before or after scale-in.
fail_action | Yes | String | Whether to continue to execute subsequent scripts and create a cluster after the custom automation script fails to be executed. The options are continue and errorout.
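A minimal exec_scripts entry based on Table 11 might look like the following sketch; the script name and OBS path are hypothetical placeholders, and the action_stage value assumes an after-scale-out stage identifier:

```json
"exec_scripts": [
  {
    "name": "refresh-queue-config",
    "uri": "s3a://example-bucket/scripts/refresh_queues.sh",
    "parameters": "",
    "nodes": ["Master"],
    "active_master": true,
    "action_stage": "after_scale_out",
    "fail_action": "continue"
  }
]
```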
Table 12 Rule parameters

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Name of an auto scaling rule. The rule name can contain only 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed. Rule names must be unique in a node group.
description | No | String | Description of an auto scaling rule. It contains a maximum of 1024 characters.
adjustment_type | Yes | String | Auto scaling rule adjustment type. The options are scale_out and scale_in.
cool_down_minutes | Yes | Integer | Cluster cooling time after an auto scaling rule is triggered, during which no auto scaling operation is performed. The unit is minutes. Value range: 0 to 10080 (one week is 10080 minutes).
scaling_adjustment | Yes | Integer | Number of nodes that can be adjusted at a time. Value range: 1 to 100.
trigger | Yes | Trigger | Condition for triggering a rule. For details, see Table 13.
Table 13 Trigger parameters

Parameter | Mandatory | Type | Description
---|---|---|---
metric_name | Yes | String | Metric name. The triggering condition is judged according to the value of this metric. A metric name contains a maximum of 64 characters. Table 14 lists the supported metric names.
metric_value | Yes | String | Metric threshold that triggers a rule. The value must be an integer or a number with no more than two decimal places. Table 14 provides the value types and ranges corresponding to metric_name.
comparison_operator | No | String | Metric judgment logic operator, for example, LTOE (less than or equal to), as used in the request examples.
evaluation_periods | Yes | Integer | Number of consecutive five-minute periods during which the metric threshold is reached. Value range: 1 to 288.
Table 14 Auto scaling metrics

Cluster Type | Metric | Value Type | Description
---|---|---|---
Streaming cluster | StormSlotAvailable | Integer | Number of available Storm slots. Value range: 0 to 2147483646.
 | StormSlotAvailablePercentage | Percentage | Percentage of available Storm slots, that is, the proportion of available slots to total slots. Value range: 0 to 100.
 | StormSlotUsed | Integer | Number of used Storm slots. Value range: 0 to 2147483646.
 | StormSlotUsedPercentage | Percentage | Percentage of used Storm slots, that is, the proportion of used slots to total slots. Value range: 0 to 100.
 | StormSupervisorMemAverageUsage | Integer | Average memory usage of the Supervisor process of Storm. Value range: 0 to 2147483646.
 | StormSupervisorMemAverageUsagePercentage | Percentage | Average percentage of memory used by the Supervisor process of Storm to the total system memory. Value range: 0 to 100.
 | StormSupervisorCPUAverageUsagePercentage | Percentage | Average percentage of CPUs used by the Supervisor process of Storm to the total CPUs. Value range: 0 to 6000.
Analysis cluster | YARNAppPending | Integer | Number of pending tasks on Yarn. Value range: 0 to 2147483646.
 | YARNAppPendingRatio | Ratio | Ratio of pending tasks on Yarn, that is, the ratio of pending tasks to running tasks on Yarn. Value range: 0 to 2147483646.
 | YARNAppRunning | Integer | Number of running tasks on Yarn. Value range: 0 to 2147483646.
 | YARNContainerAllocated | Integer | Number of containers allocated to Yarn. Value range: 0 to 2147483646.
 | YARNContainerPending | Integer | Number of pending containers on Yarn. Value range: 0 to 2147483646.
 | YARNContainerPendingRatio | Ratio | Ratio of pending containers on Yarn, that is, the ratio of pending containers to running containers on Yarn. Value range: 0 to 2147483646.
 | YARNCPUAllocated | Integer | Number of virtual CPUs (vCPUs) allocated to Yarn. Value range: 0 to 2147483646.
 | YARNCPUAvailable | Integer | Number of available vCPUs on Yarn. Value range: 0 to 2147483646.
 | YARNCPUAvailablePercentage | Percentage | Percentage of available vCPUs on Yarn, that is, the proportion of available vCPUs to total vCPUs. Value range: 0 to 100.
 | YARNCPUPending | Integer | Number of pending vCPUs on Yarn. Value range: 0 to 2147483646.
 | YARNMemoryAllocated | Integer | Memory allocated to Yarn, in MB. Value range: 0 to 2147483646.
 | YARNMemoryAvailable | Integer | Available memory on Yarn, in MB. Value range: 0 to 2147483646.
 | YARNMemoryAvailablePercentage | Percentage | Percentage of available memory on Yarn, that is, the proportion of available memory to total memory on Yarn. Value range: 0 to 100.
 | YARNMemoryPending | Integer | Pending memory on Yarn. Value range: 0 to 2147483646.

When the value type is percentage or ratio in Table 14, the valid value can be accurate to percentile. A percentage metric value is a decimal with the percent sign (%) removed; for example, 16.80 represents 16.80%.
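For instance, a trigger on a percentage metric uses the decimal form without the percent sign; the GTOE operator below is assumed to be the greater-than-or-equal counterpart of the LTOE operator shown in the request examples:

```json
"trigger": {
  "metric_name": "YARNMemoryAvailablePercentage",
  "metric_value": "16.80",
  "comparison_operator": "GTOE",
  "evaluation_periods": 2
}
```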
Response

Table 15 Response parameter

Parameter | Type | Description
---|---|---
cluster_id | String | Cluster ID, which is returned by the system after the cluster is created.
Examples
- Request example
- Creating an Analysis Cluster
{ "cluster_version": "MRS 3.1.0-LTS.1", "cluster_name": "mrs_DyJA_dm", "cluster_type": "ANALYSIS", "charge_info": { "charge_mode": "postPaid" }, "region": "", "availability_zone": "", "vpc_name": "vpc-37cd", "subnet_name": "subnet-ed99", "components": "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Flink,Oozie,HetuEngine,Ranger,Tez,ZooKeeper", "safe_mode": "KERBEROS", "manager_admin_password": "Mrs@1234", "login_mode": "PASSWORD", "node_root_password": "Mrs@1234", "log_collection": 1, "mrs_ecs_default_agency": "MRS_ECS_DEFAULT_AGENCY", "tags": [ { "key": "tag1", "value": "111" }, { "key": "tag2", "value": "222" } ], "node_groups": [ { "group_name": "master_node_default_group", "node_num": 2, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 }, { "group_name": "core_node_analysis_group", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 }, { "group_name": "task_node_analysis_group", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "auto_scaling_policy": { "auto_scaling_enable": true, "min_capacity": 0, "max_capacity": 1, "resources_plans": [], "exec_scripts": [], "rules": [ { "name": "default-expand-1", "description": "", "adjustment_type": "scale_out", "cool_down_minutes": 5, "scaling_adjustment": "1", "trigger": { "metric_id": 2003, "metric_name": "StormSlotAvailablePercentage", "metric_value": 100, "comparison_operator_id": 2003, "comparison_operator": "LTOE", "evaluation_periods": "1" } } ] } } ] }
- Creating a Streaming Cluster
{ "cluster_version": "MRS 3.1.0-LTS.1", "cluster_name": "mrs_Dokle_dm", "cluster_type": "STREAMING", "charge_info": { "charge_mode": "postPaid" }, "region": "", "availability_zone": "", "vpc_name": "vpc-37cd", "subnet_name": "subnet-ed99", "components": "Kafka,Flume,Ranger", "safe_mode": "KERBEROS", "manager_admin_password": "Mrs@1234", "login_mode": "PASSWORD", "node_root_password": "Mrs@1234", "log_collection": 1, "mrs_ecs_default_agency": "MRS_ECS_DEFAULT_AGENCY", "tags": [ { "key": "tag1", "value": "111" }, { "key": "tag2", "value": "222" } ], "node_groups": [ { "group_name": "master_node_default_group", "node_num": 2, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 }, { "group_name": "core_node_streaming_group", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, }, { "group_name": "task_node_streaming_group", "node_num": 0, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "auto_scaling_policy": { "auto_scaling_enable": true, "min_capacity": 0, "max_capacity": 1, "resources_plans": [], "exec_scripts": [], "rules": [ { "name": "default-expand-1", "description": "", "adjustment_type": "scale_out", "cool_down_minutes": 5, "scaling_adjustment": "1", "trigger": { "metric_id": 2003, "metric_name": "StormSlotAvailablePercentage", "metric_value": 100, "comparison_operator_id": 2003, "comparison_operator": "LTOE", "evaluation_periods": "1" } } ] } } ] }
- Creating a Hybrid Cluster
{ "cluster_version": "MRS 3.1.0-LTS.1", "cluster_name": "mrs_onmm_dm", "cluster_type": "MIXED", "charge_info": { "charge_mode": "postPaid" }, "region": "", "availability_zone": "", "vpc_name": "vpc-37cd", "subnet_name": "subnet-ed99", "components": "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Flink,Oozie,HetuEngine,Ranger,Tez,ZooKeeper,Kafka,Flume", "safe_mode": "KERBEROS", "manager_admin_password": "Mrs@1234", "login_mode": "PASSWORD", "node_root_password": "Mrs@1234", "log_collection": 1, "mrs_ecs_default_agency": "MRS_ECS_DEFAULT_AGENCY", "tags": [ { "key": "tag1", "value": "111" }, { "key": "tag2", "value": "222" } ], "node_groups": [ { "group_name": "master_node_default_group", "node_num": 2, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 }, { "group_name": "core_node_streaming_group", "node_num": 3, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 }, { "group_name": "core_node_analysis_group", "node_num": 3, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, }, { "group_name": "task_node_analysis_group", "node_num": 1, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 }, { "group_name": "task_node_streaming_group", "node_num": 0, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1 } ] }
- Creating a Customized Cluster with Co-deployed Management and Control Nodes
{ "cluster_version": "MRS 3.1.0-LTS.1", "cluster_name": "mrs_heshe_dm", "cluster_type": "CUSTOM", "charge_info": { "charge_mode": "postPaid" }, "region": "", "availability_zone": "", "vpc_name": "vpc-37cd", "subnet_name": "subnet-ed99", "components": "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,HetuEngine,Ranger,Tez,ZooKeeper,ClickHouse", "safe_mode": "KERBEROS", "manager_admin_password": "Mrs@1234", "login_mode": "PASSWORD", "node_root_password": "Mrs@1234", "mrs_ecs_default_agency": "MRS_ECS_DEFAULT_AGENCY", "template_id": "mgmt_control_combined_v2", "log_collection": 1, "tags": [ { "key": "tag1", "value": "111" }, { "key": "tag2", "value": "222" } ], "node_groups": [ { "group_name": "master_node_default_group", "node_num": 3, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "OMSServer:1,2", "SlapdServer:1,2", "KerberosServer:1,2", "KerberosAdmin:1,2", "quorumpeer:1,2,3", "NameNode:2,3", "Zkfc:2,3", "JournalNode:1,2,3", "ResourceManager:2,3", "JobHistoryServer:2,3", "DBServer:1,3", "Hue:1,3", "MetaStore:1,2,3", "WebHCat:1,2,3", "HiveServer:1,2,3", "HMaster:2,3", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3", "JobHistory2x:2,3", "SparkResource2x:1,2,3", "oozie:2,3", "LoadBalancer:2,3", "TezUI:1,3", "TimelineServer:3", "RangerAdmin:1,2", "UserSync:2", "TagSync:2", "KerberosClient", "SlapdClient", "meta", "HSConsole:2,3", "FlinkResource:1,2,3", "DataNode:1,2,3", "NodeManager:1,2,3", "IndexServer2x:1,2", "ThriftServer:1,2,3", "RegionServer:1,2,3", "ThriftServer1:1,2,3", "RESTServer:1,2,3", "Broker:1,2,3", "Supervisor:1,2,3", "Logviewer:1,2,3", "Flume:1,2,3", "HSBroker:1,2,3" ] }, { "group_name": "node_group_1", "node_num": 3, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource"] }, { "group_name": "node_group_2", "node_num": 1, "node_size": "Sit3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "NodeManager", "KerberosClient", "SlapdClient", "meta", "FlinkResource"] } ] }
- Creating a Cluster with Customized Management and Control Planes Deployed Separately
{ "cluster_version": "MRS 3.1.0-LTS.1", "cluster_name": "mrs_jdRU_dm01", "cluster_type": "CUSTOM", "charge_info": { "charge_mode": "postPaid" }, "region": "", "availability_zone": "", "vpc_name": "vpc-37cd", "subnet_name": "subnet-ed99", "components": "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,HetuEngine,Ranger,Tez,Ranger,Tez,ZooKeeper,ClickHouse", "safe_mode": "KERBEROS", "manager_admin_password": "Mrs@1234", "login_mode": "PASSWORD", "node_root_password": "Mrs@1234", "mrs_ecs_default_agency": "MRS_ECS_DEFAULT_AGENCY", "log_collection": 1, "template_id": "mgmt_control_separated_v2", "tags": [ { "key": "aaa", "value": "111" }, { "key": "bbb", "value": "222" } ], "node_groups": [ { "group_name": "master_node_default_group", "node_num": 5, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "OMSServer:1,2", "SlapdServer:3,4", "KerberosServer:3,4", "KerberosAdmin:3,4", "quorumpeer:3,4,5", "NameNode:4,5", "Zkfc:4,5", "JournalNode:1,2,3,4,5", "ResourceManager:4,5", "JobHistoryServer:4,5", "DBServer:3,5", "Hue:1,2", "MetaStore:1,2,3,4,5", "WebHCat:1,2,3,4,5", "HiveServer:1,2,3,4,5", "HMaster:4,5", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3,4,5", "JobHistory2x:4,5", "SparkResource2x:1,2,3,4,5", "oozie:1,2", "LoadBalancer:1,2", "TezUI:1,2", "TimelineServer:5", "RangerAdmin:1,2", "KerberosClient", "SlapdClient", "meta", "HSConsole:1,2", "FlinkResource:1,2,3,4,5", "DataNode:1,2,3,4,5", "NodeManager:1,2,3,4,5", "IndexServer2x:1,2", "ThriftServer:1,2,3,4,5", "RegionServer:1,2,3,4,5", "ThriftServer1:1,2,3,4,5", "RESTServer:1,2,3,4,5", "Broker:1,2,3,4,5", "Supervisor:1,2,3,4,5", "Logviewer:1,2,3,4,5", "Flume:1,2,3,4,5", "HBaseIndexer:1,2,3,4,5", "TagSync:1", "UserSync:1"] }, { "group_name": "node_group_1", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource"] } ] }
- Creating a User-Defined Data Cluster
{ "cluster_version": "MRS 3.1.0-LTS.1", "cluster_name": "mrs_jdRU_dm02", "cluster_type": "CUSTOM", "charge_info": { "charge_mode": "postPaid" }, "region": "", "availability_zone": "", "vpc_name": "vpc-37cd", "subnet_name": "subnet-ed99", "components": "Hadoop,Spark2x,HBase,Hive,Hue,Kafka,Flume,Flink,Oozie,Ranger,Tez,Ranger,Tez,ZooKeeper,ClickHouse", "safe_mode": "KERBEROS", "manager_admin_password": "Mrs@1234", "login_mode": "PASSWORD", "node_root_password": "Mrs@1234", "mrs_ecs_default_agency": "MRS_ECS_DEFAULT_AGENCY", "template_id": "mgmt_control_data_separated_v2", "log_collection": 1, "tags": [ { "key": "aaa", "value": "111" }, { "key": "bbb", "value": "222" } ], "node_groups": [ { "group_name": "master_node_default_group", "node_num": 9, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "OMSServer:1,2", "SlapdServer:5,6", "KerberosServer:5,6", "KerberosAdmin:5,6", "quorumpeer:5,6,7,8,9", "NameNode:3,4", "Zkfc:3,4", "JournalNode:5,6,7", "ResourceManager:8,9", "JobHistoryServer:8", "DBServer:8,9", "Hue:8,9", "FlinkResource:3,4", "MetaStore:8,9", "WebHCat:5", "HiveServer:8,9", "HMaster:8,9", "MonitorServer:3,4", "Nimbus:8,9", "UI:8,9", "JDBCServer2x:8,9", "JobHistory2x:8,9", "SparkResource2x:5,6,7", "oozie:4,5", "LoadBalancer:8,9", "TezUI:5,6", "TimelineServer:5", "RangerAdmin:4,5", "UserSync:5", "TagSync:5", "KerberosClient", "SlapdClient", "meta", "HSBroker:5", "HSConsole:3,4", "FlinkResource:3,4"] }, { "group_name": "node_group_1", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "GraphServer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2" ] }, { "group_name": "node_group_2", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "HBaseIndexer", "SolrServer[3]", "EsNode[2]", "KerberosClient", "SlapdClient", "meta" ] }, { "group_name": "node_group_3", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "Redis[2]", "KerberosClient", "SlapdClient", "meta"] }, { "group_name": "node_group_4", "node_num": 3, "node_size": "rc3.4xlarge.4.linux.bigdata", "root_volume": { "type": "SAS", "size": 480 }, "data_volume": { "type": "SAS", "size": 600 }, "data_volume_count": 1, "assigned_roles": [ "Broker", "Supervisor", "Logviewer", "KerberosClient", "SlapdClient", "meta"] } ] }
- Example response
- Example of a normal response
{ "cluster_id": "da1592c2-bb7e-468d-9ac9-83246e95447a" }
- Example of a failed response
{ "error_code": "MRS.0002", "error_msg": "The parameter is invalid." }
Status Code
Table 16 describes the status code of this API.
For the description about error status codes, see Status Codes.