Creating a Cluster and Executing a Job
Function
This API is used to create an MRS cluster and submit a job in the cluster. This API is incompatible with Sahara.
You are advised to use the V2 APIs instead: Creating a Cluster, and Creating a Cluster and Submitting a Job.
A maximum of 10 clusters can be concurrently created. You can set the enterprise_project_id parameter to perform fine-grained authorization for resources.
Before using the API, you need to obtain the resources listed in Table 1.
Table 1 Obtaining resources

Resource | How to Obtain
---|---
VPC | See operation instructions in Querying VPCs and Creating a VPC in the VPC API Reference.
Subnet | See operation instructions in Querying Subnets and Creating a Subnet in the VPC API Reference.
Key Pair | See operation instructions in Querying SSH Key Pairs and Creating and Importing an SSH Key Pair in the ECS API Reference.
Zone | See Endpoints for details about regions and AZs.
Version | Currently, MRS 1.9.2, 3.1.0, 3.1.5, 3.1.2-LTS.3, and 3.2.0-LTS.1 are supported.
Component |
Constraints
- You can log in to a cluster using either a password or a key pair.
- To use the password mode, you need to configure the password of user root for accessing the cluster node, that is, cluster_master_secret.
- To use the key pair mode, you need to configure the key pair name, that is, node_public_cert_name.
- Disk parameters can be represented either by volume_type and volume_size, or by multi-disk parameters (master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, and core_data_volume_count).
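For example, a request that uses the key pair login mode together with the multi-disk parameters would carry fields like the following (a minimal fragment assembled from the example requests later in this section; all values are illustrative):

```
{
  "login_mode" : 1,
  "node_public_cert_name" : "SSHkey-bba1",
  "master_data_volume_type" : "SATA",
  "master_data_volume_size" : 600,
  "master_data_volume_count" : 1,
  "core_data_volume_type" : "SATA",
  "core_data_volume_size" : 600,
  "core_data_volume_count" : 1
}
```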
Debugging
You can debug this API in API Explorer. Automatic authentication is supported. API Explorer can automatically generate SDK code samples and supports debugging of the generated code.
URI
POST /v1.1/{project_id}/run-job-flow

Table 2 URI parameter

Parameter | Mandatory | Type | Description
---|---|---|---
project_id | Yes | String | Explanation: Project ID. For details about how to obtain the project ID, see Obtaining a Project ID. Constraints: N/A. Value range: 1 to 64 characters; only letters and digits are allowed. Default value: N/A.
Request Parameters
Table 3 Request body parameters

Parameter | Mandatory | Type | Description
---|---|---|---
cluster_version | Yes | String | Explanation: Cluster version, for example, MRS 3.1.0. Constraints: N/A. Value range: currently, MRS 1.9.2, 3.1.0, 3.1.5, 3.1.2-LTS.3, and 3.2.0-LTS.1 are supported. Default value: N/A.
cluster_name | Yes | String | Explanation: Cluster name, which must be unique. Constraints: N/A. Value range: 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-). Default value: N/A.
master_node_num | No | Integer | Explanation: Number of Master nodes. Constraints: If cluster HA is enabled, set this parameter to 2; if cluster HA is disabled, set it to 1. This parameter cannot be set to 1 in MRS 3.x. Value range: N/A. Default value: N/A.
core_node_num | No | Integer | Explanation: Number of Core nodes. The default maximum number of Core nodes is 500. If more than 500 Core nodes are required, apply for a higher quota. Constraints: N/A. Value range: 1-500. Default value: N/A.
billing_type | Yes | Integer | Explanation: Cluster billing mode. Constraints: N/A. Value range: 12: the cluster is billed on a pay-per-use basis. Only pay-per-use clusters can be created by calling APIs. Default value: N/A.
data_center | Yes | String | Explanation: Information about the region where the cluster is located. For details, see Endpoints. Constraints: N/A. Value range: N/A. Default value: N/A.
vpc | Yes | String | Explanation: Name of the VPC where the subnet is located. You can obtain the VPC name from the VPC management console. Constraints: N/A. Value range: N/A. Default value: N/A.
master_node_size | No | String | Explanation: Specifications of the Master node, for example, {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or another flavor displayed on the MRS purchase page. The supported host specifications are determined by CPU, memory, and disk space. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. You are advised to obtain the specifications supported by the corresponding version in the corresponding region from the cluster creation page on the MRS console. Constraints: N/A. Value range: N/A. Default value: N/A.
core_node_size | No | String | Explanation: Specifications of the Core node, for example, {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or another flavor displayed on the MRS purchase page. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. You are advised to obtain the specifications supported by the corresponding version in the corresponding region from the cluster creation page on the MRS console. Constraints: N/A. Value range: N/A. Default value: N/A.
component_list | Yes | Array of component_list objects | Explanation: List of service components to be installed. For details about the parameters, see Table 4. Constraints: N/A. Value range: N/A. Default value: N/A.
available_zone_id | Yes | String | Explanation: AZ ID. You can obtain AZ IDs by calling the API for querying AZ information. Constraints: N/A. Value range: see Endpoints for details about regions and AZs. Default value: N/A.
vpc_id | Yes | String | Explanation: ID of the VPC where the subnet is located. You can obtain the VPC ID from the VPC management console. Constraints: N/A. Value range: N/A. Default value: N/A.
subnet_id | Yes | String | Explanation: Subnet ID. You can obtain the subnet ID from the VPC management console. Constraints: At least one of subnet_id and subnet_name must be configured. If both are configured but do not match the same subnet, the cluster fails to be created. Using subnet_id is recommended. Value range: N/A. Default value: N/A.
subnet_name | Yes | String | Explanation: Subnet name. You can obtain the subnet name from the VPC management console. Constraints: At least one of subnet_id and subnet_name must be configured. If both are configured but do not match the same subnet, the cluster fails to be created. If only subnet_name is configured and subnets with the same name exist in the VPC, the first subnet name in the VPC is used when the cluster is created. Using subnet_id is recommended. Value range: N/A. Default value: N/A.
security_groups_id | No | String | Explanation: ID of the security group configured for the cluster. Constraints: N/A. Value range: N/A. Default value: N/A.
add_jobs | No | Array of add_jobs objects | Explanation: A job can be submitted when the cluster is created. Currently, only one job can be created. For details about the parameters, see Table 5. Constraints: There must be no more than 1 record. Value range: N/A. Default value: N/A.
volume_size | No | Integer | Explanation: Data disk storage space of Master and Core nodes, in GB. To increase the data storage capacity, you can add disks when creating a cluster. Select proper disk storage space based on your application scenario. Constraints: This parameter is not recommended; for details, see the description of the volume_type parameter. Value range: 100-32000. Default value: N/A.
volume_type | No | String | Explanation: Data disk storage type of Master and Core nodes. Currently, SATA, SAS, SSD, and GPSSD are supported. Disk parameters can be represented either by volume_type and volume_size, or by the multi-disk parameters. If volume_type and volume_size coexist with the multi-disk parameters, the system reads volume_type and volume_size first. You are advised to use the multi-disk parameters. Constraints: N/A. Value range: SATA, SAS, SSD, GPSSD. Default value: N/A.
master_data_volume_type | No | String | Explanation: A multi-disk parameter, indicating the data disk storage type of Master nodes. Constraints: N/A. Value range: SATA, SAS, SSD, GPSSD. Default value: N/A.
master_data_volume_size | No | Integer | Explanation: A multi-disk parameter, indicating the data disk storage space of Master nodes. To increase the data storage capacity, you can add disks when creating a cluster. Pass in a number only, without the unit GB. Constraints: N/A. Value range: 100-32000. Default value: N/A.
master_data_volume_count | No | Integer | Explanation: A multi-disk parameter, indicating the number of data disks of Master nodes. Constraints: N/A. Value range: The value can only be 1. Default value: 1.
core_data_volume_type | No | String | Explanation: A multi-disk parameter, indicating the data disk storage type of Core nodes. Constraints: N/A. Value range: SATA, SAS, SSD, GPSSD. Default value: N/A.
core_data_volume_size | No | Integer | Explanation: A multi-disk parameter, indicating the data disk storage space of Core nodes. To increase the data storage capacity, you can add disks when creating a cluster. Pass in a number only, without the unit GB. Constraints: N/A. Value range: 100-32000. Default value: N/A.
core_data_volume_count | No | Integer | Explanation: A multi-disk parameter, indicating the number of data disks of Core nodes. Constraints: N/A. Value range: 1-20. Default value: N/A.
task_node_groups | No | Array of task_node_groups objects | Explanation: List of Task nodes. For details about the parameters, see Table 6. Constraints: There must be no more than 1 record. Value range: N/A. Default value: N/A.
bootstrap_scripts | No | Array of BootstrapScript objects | Explanation: Bootstrap action script information. For details about the parameters, see Table 8. Constraints: N/A. Value range: N/A. Default value: N/A.
node_public_cert_name | No | String | Explanation: Name of a key pair. You can use a key pair to log in to cluster nodes. Constraints: If login_mode is set to 1, the request body must contain the node_public_cert_name field. Value range: N/A. Default value: N/A.
cluster_admin_secret | No | String | Explanation: Password of the MRS Manager administrator. Constraints: N/A. Value range: … Default value: N/A.
cluster_master_secret | No | String | Explanation: Password of user root for logging in to cluster nodes. Constraints: If login_mode is set to 0, the request body must contain the cluster_master_secret field. Value range: A password must meet the following complexity requirements: … Default value: N/A.
safe_mode | Yes | Integer | Explanation: Running mode of the MRS cluster. Constraints: N/A. Value range: 0: normal cluster; 1: security cluster. Default value: N/A.
tags | No | Array of tag objects | Explanation: Cluster tags. For details about the parameters, see Table 9. Constraints: A maximum of 20 tags can be used in a cluster. The tag name (key) must be unique. The tag key and value can contain letters, digits, spaces, and the special characters _.:=+-@, but cannot start or end with a space or start with _sys_. Value range: N/A. Default value: N/A.
cluster_type | No | Integer | Explanation: Cluster type. Currently, hybrid clusters cannot be created using APIs. Constraints: N/A. Value range: 0: analysis cluster; 1: streaming cluster. Default value: 0.
log_collection | No | Integer | Explanation: Whether to collect logs when cluster creation fails. When this function is enabled, OBS buckets are created only for collecting logs when an MRS cluster fails to be created. Constraints: N/A. Value range: 0: do not collect logs; 1: collect logs. Default value: 1.
enterprise_project_id | No | String | Explanation: Enterprise project ID. When you create a cluster, associate the enterprise project ID with the cluster. The default value is 0, indicating the default enterprise project. To obtain the enterprise project ID, see the id value in the enterprise_project field data structure table in "Querying the Enterprise Project List" in the Enterprise Management API Reference. Constraints: N/A. Value range: N/A. Default value: 0.
login_mode | No | Integer | Explanation: Cluster login mode. Constraints: If login_mode is set to 0, the request body must contain the cluster_master_secret field; if login_mode is set to 1, it must contain the node_public_cert_name field. Value range: 0: password login; 1: key pair login. Default value: 1.
node_groups | No | Array of NodeGroupV11 objects | Explanation: List of node groups. For details about the parameters, see Table 10. Constraints: Configure either this parameter or the following parameters: master_node_num, master_node_size, core_node_num, core_node_size, master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, core_data_volume_count, volume_type, volume_size, task_node_groups. Value range: N/A. Default value: N/A.
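Pulling the mandatory rows together, the skeleton of a request body that uses the flat node parameters looks roughly like this (a minimal sketch assembled from the table above and the example requests below; the IDs, names, and flavor are illustrative placeholders):

```
{
  "billing_type" : 12,
  "data_center" : "",
  "cluster_version" : "MRS 3.2.0-LTS.1",
  "cluster_name" : "mrs-demo",
  "safe_mode" : 0,
  "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
  "vpc" : "vpc-demo",
  "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
  "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
  "subnet_name" : "subnet-demo",
  "component_list" : [ { "component_name" : "Hadoop" } ],
  "master_node_num" : 2,
  "master_node_size" : "s3.2xlarge.2.linux.bigdata",
  "core_node_num" : 3,
  "core_node_size" : "s3.2xlarge.2.linux.bigdata",
  "volume_type" : "SATA",
  "volume_size" : 600,
  "login_mode" : 1,
  "node_public_cert_name" : "SSHkey-demo"
}
```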
Table 4 component_list

Parameter | Mandatory | Type | Description
---|---|---|---
component_name | Yes | String | Explanation: Component name. For details, see the component information in Table 1. Constraints: N/A. Value range: 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-). Default value: N/A.
Table 5 add_jobs

Parameter | Mandatory | Type | Description
---|---|---|---
job_type | Yes | Integer | Explanation: Job type code. Constraints: N/A. Value range: … Default value: N/A.
job_name | Yes | String | Explanation: Job name. Constraints: N/A. Value range: 1 to 64 characters; only letters, digits, hyphens (-), and underscores (_) are allowed. NOTE: Identical job names are allowed but not recommended. Default value: N/A.
jar_path | No | String | Explanation: Path of the .jar file or .sql file to be executed. Constraints: N/A. Value range: The value must meet the following requirements: … Default value: N/A.
arguments | No | String | Explanation: Key parameters for program execution, specified by a function in the user's program. MRS is only responsible for loading the parameters. Constraints: N/A. Value range: 0 to 150,000 characters; the special characters (;\|&>'<$) are not allowed. Default value: N/A.
input | No | String | Explanation: Data input path. Files can be stored in HDFS or OBS; the path format varies depending on the file system. Constraints: N/A. Value range: 0 to 1,023 characters; the special characters (;\|&>'<$) are not allowed. Default value: N/A.
output | No | String | Explanation: Data output path. Files can be stored in HDFS or OBS; the path format varies depending on the file system. If the specified path does not exist, the system automatically creates it. Constraints: N/A. Value range: 0 to 1,023 characters; the special characters (;\|&>'<$) are not allowed. Default value: N/A.
job_log | No | String | Explanation: Path for storing job logs that record the job running status. Files can be stored in HDFS or OBS; the path format varies depending on the file system. Constraints: N/A. Value range: 0 to 1,023 characters; the special characters (;\|&>'<$) are not allowed. Default value: N/A.
shutdown_cluster | No | Boolean | Explanation: Whether to delete the cluster after the job execution is complete. Constraints: N/A. Value range: true: delete the cluster; false: keep the cluster. Default value: N/A.
file_action | No | String | Explanation: Action to be performed on a file. Constraints: N/A. Value range: … Default value: N/A.
submit_job_once_cluster_run | Yes | Boolean | Explanation: Whether to submit the job during cluster creation. Set this parameter to true in this API. Constraints: N/A. Value range: true, false. Default value: N/A.
hql | No | String | Explanation: HQL script statement. Constraints: N/A. Value range: N/A. Default value: N/A.
hive_script_path | No | String | Explanation: SQL program path. This parameter is required only by Spark Script and Hive Script jobs. Constraints: N/A. Value range: The value must meet the following requirements: … Default value: N/A.
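For reference, an add_jobs entry that runs the Hadoop wordcount example might look as follows (a minimal fragment adapted from the example requests below; the OBS paths and job name are illustrative):

```
"add_jobs" : [ {
  "job_type" : 1,
  "job_name" : "wordcount-demo",
  "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
  "arguments" : "wordcount",
  "input" : "s3a://bigdata/input/wd_1k/",
  "output" : "s3a://bigdata/output/",
  "job_log" : "s3a://bigdata/log/",
  "shutdown_cluster" : false,
  "file_action" : "",
  "submit_job_once_cluster_run" : true,
  "hql" : "",
  "hive_script_path" : ""
} ]
```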
Table 6 task_node_groups

Parameter | Mandatory | Type | Description
---|---|---|---
node_num | Yes | Integer | Explanation: Number of Task nodes. Constraints: The total number of Core and Task nodes cannot exceed 500. Value range: 0-500. Default value: N/A.
node_size | Yes | String | Explanation: Specifications of the Task node, for example, {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or another flavor displayed on the MRS purchase page. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. Obtain the instance specifications of the corresponding version in the corresponding region from the cluster creation page of the MRS management console. Constraints: N/A. Value range: N/A. Default value: N/A.
data_volume_type | Yes | String | Explanation: Data disk storage type of the Task node. Constraints: N/A. Value range: SATA, SAS, SSD. Default value: N/A.
data_volume_count | Yes | Integer | Explanation: Number of data disks of a Task node. Constraints: N/A. Value range: 0-20. Default value: N/A.
data_volume_size | Yes | Integer | Explanation: Data disk storage space of a Task node. Pass in a number only, without the unit GB. Constraints: N/A. Value range: 100-32000. Default value: N/A.
auto_scaling_policy | No | auto_scaling_policy object | Explanation: Auto scaling policy. For details, see Table 7. Constraints: N/A. Value range: N/A. Default value: N/A.
Table 7 auto_scaling_policy

Parameter | Mandatory | Type | Description
---|---|---|---
auto_scaling_enable | Yes | Boolean | Explanation: Whether to enable the auto scaling policy. Constraints: N/A. Value range: true, false. Default value: N/A.
min_capacity | Yes | Integer | Explanation: Minimum number of nodes reserved in the node group. Constraints: N/A. Value range: 0-500. Default value: N/A.
max_capacity | Yes | Integer | Explanation: Maximum number of nodes in the node group. Constraints: N/A. Value range: 0-500. Default value: N/A.
resources_plans | No | Array of resources_plan objects | Explanation: Resource plan list. For details, see Table 11. If this parameter is left blank, resource plans are disabled. Constraints: When auto_scaling_enable is set to true, either this parameter or rules must be configured. There must be no more than 5 records. Value range: N/A. Default value: N/A.
exec_scripts | No | Array of scale_script objects | Explanation: List of custom scaling automation scripts. For details, see Table 14. If this parameter is left blank, hook scripts are disabled. Constraints: The number of records cannot exceed 10. Value range: N/A. Default value: N/A.
rules | No | Array of rules objects | Explanation: List of auto scaling rules. For details, see Table 12. Constraints: When auto_scaling_enable is set to true, either this parameter or resources_plans must be configured. The number of records cannot exceed 10. Value range: N/A. Default value: N/A.
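A compact auto_scaling_policy that combines one resource plan with one scale-out rule could look like this (a minimal fragment adapted from the example requests below):

```
"auto_scaling_policy" : {
  "auto_scaling_enable" : true,
  "min_capacity" : 1,
  "max_capacity" : 3,
  "resources_plans" : [ { "period_type" : "daily", "start_time" : "9:50", "end_time" : "10:20", "min_capacity" : 2, "max_capacity" : 3 } ],
  "rules" : [ {
    "name" : "default-expand-1",
    "adjustment_type" : "scale_out",
    "cool_down_minutes" : 5,
    "scaling_adjustment" : 1,
    "trigger" : { "metric_name" : "YARNMemoryAvailablePercentage", "metric_value" : "25", "comparison_operator" : "LT", "evaluation_periods" : 10 }
  } ]
}
```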
Table 8 BootstrapScript

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Explanation: Name of a bootstrap action script. Constraints: N/A. Value range: The names of bootstrap action scripts in the same cluster must be unique. The value can contain 1 to 64 characters, including only letters, digits, underscores (_), and hyphens (-), and cannot start with a space. Default value: N/A.
uri | Yes | String | Explanation: Path of the bootstrap action script. Set this parameter to an OBS bucket path or a local VM path. Constraints: N/A. Value range: N/A. Default value: N/A.
parameters | No | String | Explanation: Bootstrap action script parameters. Constraints: N/A. Value range: N/A. Default value: N/A.
nodes | Yes | Array of strings | Explanation: Type of the node where the bootstrap action script is executed. The value can be Master, Core, or Task. Constraints: The node type must be represented in lowercase letters. Value range: N/A. Default value: N/A.
active_master | No | Boolean | Explanation: Whether the bootstrap action script runs only on active Master nodes. Constraints: N/A. Value range: true, false. Default value: false.
before_component_start | No | Boolean | Explanation: Time when the bootstrap action script is executed. Two options are currently available: before component start and after component start. Constraints: N/A. Value range: true: execute before component start; false: execute after component start. Default value: false.
fail_action | Yes | String | Explanation: Whether to continue executing subsequent scripts and creating the cluster after the bootstrap action script fails to be executed. You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the bootstrap action succeeds. Constraints: N/A. Value range: continue, errorout. Default value: errorout.
start_time | No | Long | Explanation: Execution time of one bootstrap action script. Constraints: N/A. Value range: N/A. Default value: N/A.
state | No | String | Explanation: Running status of one bootstrap action script. Constraints: N/A. Value range: for example, IN_PROGRESS (see the example requests). Default value: N/A.
action_stages | No | Array of strings | Explanation: Stages at which the bootstrap action script is executed, for example, BEFORE_COMPONENT_FIRST_START or AFTER_SCALE_OUT (see the example requests). Constraints: N/A. Value range: N/A. Default value: N/A.
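A single bootstrap_scripts entry that runs an installation script on active Master nodes before components start might be written as follows (a minimal fragment adapted from the example requests below; the OBS path is illustrative):

```
"bootstrap_scripts" : [ {
  "name" : "Install zeppelin",
  "uri" : "s3a://XXX/zeppelin_install.sh",
  "parameters" : "",
  "nodes" : [ "master" ],
  "active_master" : true,
  "before_component_start" : true,
  "fail_action" : "errorout"
} ]
```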
Table 9 tag

Parameter | Mandatory | Type | Description
---|---|---|---
key | Yes | String | Explanation: Tag key. Constraints: N/A. Value range: The key can contain letters, digits, spaces, and the special characters _.:=+-@, but cannot start or end with a space or start with _sys_. Tag keys of a cluster must be unique. Default value: N/A.
value | Yes | String | Explanation: Tag value. Constraints: N/A. Value range: The value can contain letters, digits, spaces, and the special characters _.:=+-@, but cannot start or end with a space. Default value: N/A.
Table 10 NodeGroupV11

Parameter | Mandatory | Type | Description
---|---|---|---
group_name | Yes | String | Explanation: Node group name. Constraints: N/A. Value range: master_node_default_group, core_node_analysis_group, core_node_streaming_group, task_node_analysis_group, task_node_streaming_group. Default value: N/A.
node_num | Yes | Integer | Explanation: Number of nodes. Constraints: The total number of Core and Task nodes cannot exceed 500. Value range: 0-500. Default value: N/A.
node_size | Yes | String | Explanation: Specifications of the node, for example, {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} can be c3.4xlarge.2 or another flavor displayed on the MRS purchase page. The host specifications supported by MRS are determined by CPU, memory, and disk space. For details about instance specifications, see ECS Specifications Used by MRS and BMS Specifications Used by MRS. You are advised to obtain the specifications supported by the corresponding version in the corresponding region from the cluster creation page on the MRS console. Constraints: N/A. Value range: N/A. Default value: N/A.
root_volume_size | No | String | Explanation: System disk storage space of a node. Constraints: N/A. Value range: N/A. Default value: N/A.
root_volume_type | No | String | Explanation: System disk storage type of a node. Constraints: N/A. Value range: SATA, SAS, SSD. Default value: N/A.
data_volume_type | No | String | Explanation: Data disk storage type of nodes. Constraints: N/A. Value range: SATA, SAS, SSD. Default value: N/A.
data_volume_count | No | Integer | Explanation: Number of data disks of a node. Constraints: N/A. Value range: 0-20. Default value: N/A.
data_volume_size | No | Integer | Explanation: Data disk storage space of a node, in GB. Constraints: N/A. Value range: 100-32000. Default value: N/A.
auto_scaling_policy | No | auto_scaling_policy object | Explanation: Auto scaling rule information. For details about the parameters, see Table 7. Constraints: This parameter is available only when group_name is set to task_node_analysis_group or task_node_streaming_group. Value range: N/A. Default value: N/A.
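A node_groups array for a basic HA cluster (Master and Core analysis groups only) could look like this (a minimal fragment adapted from the example requests below):

```
"node_groups" : [
  { "group_name" : "master_node_default_group", "node_num" : 2, "node_size" : "s3.xlarge.2.linux.bigdata", "root_volume_size" : 480, "root_volume_type" : "SATA", "data_volume_type" : "SATA", "data_volume_count" : 1, "data_volume_size" : 600 },
  { "group_name" : "core_node_analysis_group", "node_num" : 3, "node_size" : "s3.xlarge.2.linux.bigdata", "root_volume_size" : 480, "root_volume_type" : "SATA", "data_volume_type" : "SATA", "data_volume_count" : 1, "data_volume_size" : 600 }
]
```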
Table 11 resources_plan

Parameter | Mandatory | Type | Description
---|---|---|---
period_type | Yes | String | Explanation: Cycle type of a resource plan. Constraints: Currently, this parameter can be set to daily only. Value range: N/A. Default value: N/A.
start_time | Yes | String | Explanation: Start time of a resource plan, in the format hour:minute, ranging from 00:00 to 23:59. Constraints: N/A. Value range: N/A. Default value: N/A.
end_time | Yes | String | Explanation: End time of a resource plan, in the same format as start_time. Constraints: The value cannot be earlier than start_time, and the interval between start_time and end_time cannot be less than 30 minutes. Value range: N/A. Default value: N/A.
min_capacity | Yes | Integer | Explanation: Minimum number of nodes preserved in a node group in a resource plan. Constraints: N/A. Value range: 0-500. Default value: N/A.
max_capacity | Yes | Integer | Explanation: Maximum number of nodes preserved in a node group in a resource plan. Constraints: N/A. Value range: 0-500. Default value: N/A.
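For instance, two plans that keep more nodes during a morning peak and fewer afterwards could be declared as follows (a minimal fragment taken from the example requests below):

```
"resources_plans" : [
  { "period_type" : "daily", "start_time" : "9:50", "end_time" : "10:20", "min_capacity" : 2, "max_capacity" : 3 },
  { "period_type" : "daily", "start_time" : "10:20", "end_time" : "12:30", "min_capacity" : 0, "max_capacity" : 2 }
]
```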
Table 12 rules

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Explanation: Name of an auto scaling rule. Constraints: N/A. Value range: 1 to 64 characters; only letters, digits, hyphens (-), and underscores (_) are allowed. Rule names must be unique within a node group. Default value: N/A.
description | No | String | Explanation: Description of an auto scaling rule. Constraints: N/A. Value range: A maximum of 1,024 characters. Default value: N/A.
adjustment_type | Yes | String | Explanation: Adjustment type of an auto scaling rule. Constraints: N/A. Value range: scale_out: add nodes; scale_in: remove nodes. Default value: N/A.
cool_down_minutes | Yes | Integer | Explanation: Cluster cooldown period after an auto scaling rule is triggered, during which no further auto scaling operation is performed, in minutes. Constraints: N/A. Value range: 0-10080 (10080 is the number of minutes in a week). Default value: N/A.
scaling_adjustment | Yes | Integer | Explanation: Number of nodes that can be adjusted at a time. Constraints: N/A. Value range: 1-100. Default value: N/A.
trigger | Yes | trigger object | Explanation: Condition for triggering the rule. For details, see Table 13. Constraints: N/A. Value range: N/A. Default value: N/A.
Table 13 trigger

Parameter | Mandatory | Type | Description
---|---|---|---
metric_name | Yes | String | Explanation: Metric name. The triggering condition is evaluated against the value of this metric. For details about metric names, see Configuring Auto Scaling for an MRS Cluster. Constraints: N/A. Value range: A maximum of 64 characters. Default value: N/A.
metric_value | Yes | String | Explanation: Metric threshold that triggers the rule. Constraints: N/A. Value range: Only integers or numbers with two decimal places are allowed. Default value: N/A.
comparison_operator | No | String | Explanation: Metric comparison operator. Constraints: N/A. Value range: LT (less than), GT (greater than), LTE (less than or equal to), GTE (greater than or equal to). Default value: N/A.
evaluation_periods | Yes | Integer | Explanation: Number of consecutive five-minute periods during which the metric threshold is reached. Constraints: N/A. Value range: 1-200. Default value: N/A.
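As a concrete reading of these fields, the following trigger (taken from the example requests below) fires its rule when YARNMemoryAvailablePercentage stays below 25 for ten consecutive five-minute periods:

```
"trigger" : {
  "metric_name" : "YARNMemoryAvailablePercentage",
  "metric_value" : "25",
  "comparison_operator" : "LT",
  "evaluation_periods" : 10
}
```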
Table 14 scale_script

Parameter | Mandatory | Type | Description
---|---|---|---
name | Yes | String | Explanation: Name of a custom scaling automation script. Constraints: N/A. Value range: The names in the same cluster must be unique. The value can contain 1 to 64 characters, including only digits, letters, spaces, hyphens (-), and underscores (_), and cannot start with a space. Default value: N/A.
uri | Yes | String | Explanation: Path of a custom automation script. Set this parameter to an OBS bucket path or a local VM path. Constraints: N/A. Value range: N/A. Default value: N/A.
parameters | No | String | Explanation: Parameters of a custom automation script. Separate multiple parameters with spaces. Predefined parameters such as ${mrs_scale_node_num}, ${mrs_scale_type}, ${mrs_scale_node_hostnames}, and ${mrs_scale_node_ips} can be transferred (see the example requests). Other user-defined parameters are used in the same way as those of common shell scripts and are separated by spaces. Constraints: N/A. Value range: N/A. Default value: N/A.
nodes | Yes | Array of strings | Explanation: Type of the node where the custom automation script is executed. The node type can be Master, Core, or Task. Constraints: N/A. Value range: N/A. Default value: N/A.
active_master | No | Boolean | Explanation: Whether the custom automation script runs only on the active Master node. Constraints: N/A. Value range: true, false. Default value: false.
action_stage | Yes | String | Explanation: Time when the script is executed. Constraints: N/A. Value range: before_scale_out, before_scale_in, after_scale_out, after_scale_in. Default value: N/A.
fail_action | Yes | String | Explanation: Whether to continue executing subsequent scripts and creating the cluster after the custom automation script fails to be executed. You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the script succeeds. Note that the scale-in operation cannot be undone, so fail_action must be set to continue for scripts that are executed after scale-in. Constraints: N/A. Value range: continue, errorout. Default value: N/A.
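A hook script that runs after scale-out on all node types, receiving the predefined host name and IP parameters, might look like this (a minimal fragment adapted from the example requests below; the OBS path is illustrative):

```
"exec_scripts" : [ {
  "name" : "after_scale_out",
  "uri" : "s3a://XXX/storm_rebalance.sh",
  "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
  "nodes" : [ "master", "core", "task" ],
  "active_master" : true,
  "action_stage" : "after_scale_out",
  "fail_action" : "continue"
} ]
```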
Response Parameters
Status code: 200
Parameter | Type | Description
---|---|---
cluster_id | String | Explanation: Cluster ID, which is returned by the system after the cluster is created. Constraints: N/A. Value range: N/A. Default value: N/A.
result | Boolean | Explanation: Operation result. Constraints: N/A. Value range: true: the operation succeeded; false: the operation failed. Default value: N/A.
msg | String | Explanation: System message, which can be empty. Constraints: N/A. Value range: N/A. Default value: N/A.
Example Request
- Use the node_groups parameter group to create a cluster with the HA function enabled. The cluster version is MRS 3.2.0-LTS.1.
```
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

{
  "billing_type" : 12,
  "data_center" : "",
  "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
  "cluster_name" : "mrs_HEbK",
  "cluster_version" : "MRS 3.2.0-LTS.1",
  "safe_mode" : 0,
  "cluster_type" : 0,
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark2x" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" },
    { "component_name" : "Zookeeper" },
    { "component_name" : "Tez" },
    { "component_name" : "Hue" },
    { "component_name" : "Loader" },
    { "component_name" : "Flink" }
  ],
  "vpc" : "vpc-4b1c",
  "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
  "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
  "subnet_name" : "subnet-4b44",
  "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
  "enterprise_project_id" : "0",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "node_groups" : [
    {
      "group_name" : "master_node_default_group",
      "node_num" : 2,
      "node_size" : "s3.xlarge.2.linux.bigdata",
      "root_volume_size" : 480,
      "root_volume_type" : "SATA",
      "data_volume_type" : "SATA",
      "data_volume_count" : 1,
      "data_volume_size" : 600
    },
    {
      "group_name" : "core_node_analysis_group",
      "node_num" : 3,
      "node_size" : "s3.xlarge.2.linux.bigdata",
      "root_volume_size" : 480,
      "root_volume_type" : "SATA",
      "data_volume_type" : "SATA",
      "data_volume_count" : 1,
      "data_volume_size" : 600
    },
    {
      "group_name" : "task_node_analysis_group",
      "node_num" : 2,
      "node_size" : "s3.xlarge.2.linux.bigdata",
      "root_volume_size" : 480,
      "root_volume_type" : "SATA",
      "data_volume_type" : "SATA",
      "data_volume_count" : 0,
      "data_volume_size" : 600,
      "auto_scaling_policy" : {
        "auto_scaling_enable" : true,
        "min_capacity" : 1,
        "max_capacity" : 3,
        "resources_plans" : [
          { "period_type" : "daily", "start_time" : "9:50", "end_time" : "10:20", "min_capacity" : 2, "max_capacity" : 3 },
          { "period_type" : "daily", "start_time" : "10:20", "end_time" : "12:30", "min_capacity" : 0, "max_capacity" : 2 }
        ],
        "exec_scripts" : [
          {
            "name" : "before_scale_out",
            "uri" : "s3a://XXX/zeppelin_install.sh",
            "parameters" : "${mrs_scale_node_num} ${mrs_scale_type} xxx",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : true,
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          },
          {
            "name" : "after_scale_out",
            "uri" : "s3a://XXX/storm_rebalance.sh",
            "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : true,
            "action_stage" : "after_scale_out",
            "fail_action" : "continue"
          }
        ],
        "rules" : [
          {
            "name" : "default-expand-1",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "25",
              "comparison_operator" : "LT",
              "evaluation_periods" : 10
            }
          },
          {
            "name" : "default-shrink-1",
            "adjustment_type" : "scale_in",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "70",
              "comparison_operator" : "GT",
              "evaluation_periods" : 10
            }
          }
        ]
      }
    }
  ],
  "login_mode" : 1,
  "cluster_master_secret" : "",
  "cluster_admin_secret" : "",
  "log_collection" : 1,
  "add_jobs" : [
    {
      "job_type" : 1,
      "job_name" : "tenji111",
      "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
      "arguments" : "wordcount",
      "input" : "s3a://bigdata/input/wd_1k/",
      "output" : "s3a://bigdata/output/",
      "job_log" : "s3a://bigdata/log/",
      "shutdown_cluster" : true,
      "file_action" : "",
      "submit_job_once_cluster_run" : true,
      "hql" : "",
      "hive_script_path" : ""
    }
  ],
  "bootstrap_scripts" : [
    {
      "name" : "Modify os config",
      "uri" : "s3a://XXX/modify_os_config.sh",
      "parameters" : "param1 param2",
      "nodes" : [ "master", "core", "task" ],
      "active_master" : false,
      "before_component_start" : true,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
    },
    {
      "name" : "Install zeppelin",
      "uri" : "s3a://XXX/zeppelin_install.sh",
      "parameters" : "",
      "nodes" : [ "master" ],
      "active_master" : true,
      "before_component_start" : false,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
    }
  ]
}
```
- Create a cluster with the HA function enabled without using the node_groups parameter group. The cluster version is MRS 3.2.0-LTS.1.
```
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

{
  "billing_type" : 12,
  "data_center" : "",
  "master_node_num" : 2,
  "master_node_size" : "s3.2xlarge.2.linux.bigdata",
  "core_node_num" : 3,
  "core_node_size" : "s3.2xlarge.2.linux.bigdata",
  "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
  "cluster_name" : "newcluster",
  "vpc" : "vpc1",
  "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
  "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
  "subnet_name" : "subnet",
  "security_groups_id" : "845bece1-fd22-4b45-7a6e-14338c99ee43",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "cluster_version" : "MRS 3.2.0-LTS.1",
  "cluster_type" : 0,
  "master_data_volume_type" : "SATA",
  "master_data_volume_size" : 600,
  "master_data_volume_count" : 1,
  "core_data_volume_type" : "SATA",
  "core_data_volume_size" : 600,
  "core_data_volume_count" : 2,
  "node_public_cert_name" : "SSHkey-bba1",
  "safe_mode" : 0,
  "log_collection" : 1,
  "task_node_groups" : [
    {
      "node_num" : 2,
      "node_size" : "s3.xlarge.2.linux.bigdata",
      "data_volume_type" : "SATA",
      "data_volume_count" : 1,
      "data_volume_size" : 600,
      "auto_scaling_policy" : {
        "auto_scaling_enable" : true,
        "min_capacity" : 1,
        "max_capacity" : 3,
        "resources_plans" : [
          { "period_type" : "daily", "start_time" : "9:50", "end_time" : "10:20", "min_capacity" : 2, "max_capacity" : 3 },
          { "period_type" : "daily", "start_time" : "10:20", "end_time" : "12:30", "min_capacity" : 0, "max_capacity" : 2 }
        ],
        "exec_scripts" : [
          {
            "name" : "before_scale_out",
            "uri" : "s3a://XXX/zeppelin_install.sh",
            "parameters" : "${mrs_scale_node_num} ${mrs_scale_type} xxx",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : true,
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          },
          {
            "name" : "after_scale_out",
            "uri" : "s3a://XXX/storm_rebalance.sh",
            "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : true,
            "action_stage" : "after_scale_out",
            "fail_action" : "continue"
          }
        ],
        "rules" : [
          {
            "name" : "default-expand-1",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "25",
              "comparison_operator" : "LT",
              "evaluation_periods" : 10
            }
          },
          {
            "name" : "default-shrink-1",
            "adjustment_type" : "scale_in",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "70",
              "comparison_operator" : "GT",
              "evaluation_periods" : 10
            }
          }
        ]
      }
    }
  ],
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" }
  ],
  "add_jobs" : [
    {
      "job_type" : 1,
      "job_name" : "tenji111",
      "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
      "arguments" : "wordcount",
      "input" : "s3a://bigdata/input/wd_1k/",
      "output" : "s3a://bigdata/output/",
      "job_log" : "s3a://bigdata/log/",
      "shutdown_cluster" : true,
      "file_action" : "",
      "submit_job_once_cluster_run" : true,
      "hql" : "",
      "hive_script_path" : ""
    }
  ],
  "bootstrap_scripts" : [
    {
      "name" : "Modify os config",
      "uri" : "s3a://XXX/modify_os_config.sh",
      "parameters" : "param1 param2",
      "nodes" : [ "master", "core", "task" ],
      "active_master" : false,
      "before_component_start" : true,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
    },
    {
      "name" : "Install zeppelin",
      "uri" : "s3a://XXX/zeppelin_install.sh",
      "parameters" : "",
      "nodes" : [ "master" ],
      "active_master" : true,
      "before_component_start" : false,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
    }
  ]
}
```
- Use the node_groups parameter group to create a cluster with the HA function disabled. The cluster version is MRS 3.2.0-LTS.1.
```
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

{
  "billing_type" : 12,
  "data_center" : "",
  "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
  "cluster_name" : "mrs_HEbK",
  "cluster_version" : "MRS 3.2.0-LTS.1",
  "safe_mode" : 0,
  "cluster_type" : 0,
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark2x" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" },
    { "component_name" : "Zookeeper" },
    { "component_name" : "Tez" },
    { "component_name" : "Hue" },
    { "component_name" : "Loader" },
    { "component_name" : "Flink" }
  ],
  "vpc" : "vpc-4b1c",
  "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
  "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
  "subnet_name" : "subnet-4b44",
  "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
  "enterprise_project_id" : "0",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "node_groups" : [
    {
      "group_name" : "master_node_default_group",
      "node_num" : 1,
      "node_size" : "s3.xlarge.2.linux.bigdata",
      "root_volume_size" : 480,
      "root_volume_type" : "SATA",
      "data_volume_type" : "SATA",
      "data_volume_count" : 1,
      "data_volume_size" : 600
    },
    {
      "group_name" : "core_node_analysis_group",
      "node_num" : 1,
      "node_size" : "s3.xlarge.2.linux.bigdata",
      "root_volume_size" : 480,
      "root_volume_type" : "SATA",
      "data_volume_type" : "SATA",
      "data_volume_count" : 1,
      "data_volume_size" : 600
    }
  ],
  "login_mode" : 1,
  "cluster_master_secret" : "",
  "cluster_admin_secret" : "",
  "log_collection" : 1,
  "add_jobs" : [
    {
      "job_type" : 1,
      "job_name" : "tenji111",
      "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
      "arguments" : "wordcount",
      "input" : "s3a://bigdata/input/wd_1k/",
      "output" : "s3a://bigdata/output/",
      "job_log" : "s3a://bigdata/log/",
      "shutdown_cluster" : true,
      "file_action" : "",
      "submit_job_once_cluster_run" : true,
      "hql" : "",
      "hive_script_path" : ""
    }
  ],
  "bootstrap_scripts" : [
    {
      "name" : "Modify os config",
      "uri" : "s3a://XXX/modify_os_config.sh",
      "parameters" : "param1 param2",
      "nodes" : [ "master", "core", "task" ],
      "active_master" : false,
      "before_component_start" : true,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
    },
    {
      "name" : "Install zeppelin",
      "uri" : "s3a://XXX/zeppelin_install.sh",
      "parameters" : "",
      "nodes" : [ "master" ],
      "active_master" : true,
      "before_component_start" : false,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
    }
  ]
}
```
- Create a cluster with the HA function disabled without using the node_groups parameter group. The cluster version is MRS 3.2.0-LTS.1.
```
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

{
  "billing_type" : 12,
  "data_center" : "",
  "master_node_num" : 1,
  "master_node_size" : "s3.2xlarge.2.linux.bigdata",
  "core_node_num" : 1,
  "core_node_size" : "s3.2xlarge.2.linux.bigdata",
  "available_zone_id" : "0e7a368b6c54493e94ad32666b47e23e",
  "cluster_name" : "newcluster",
  "vpc" : "vpc1",
  "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
  "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
  "subnet_name" : "subnet",
  "security_groups_id" : "",
  "enterprise_project_id" : "0",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "cluster_version" : "MRS 3.2.0-LTS.1",
  "cluster_type" : 0,
  "master_data_volume_type" : "SATA",
  "master_data_volume_size" : 600,
  "master_data_volume_count" : 1,
  "core_data_volume_type" : "SATA",
  "core_data_volume_size" : 600,
  "core_data_volume_count" : 1,
  "login_mode" : 1,
  "node_public_cert_name" : "SSHkey-bba1",
  "safe_mode" : 0,
  "cluster_admin_secret" : "******",
  "log_collection" : 1,
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark2x" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" },
    { "component_name" : "Zookeeper" },
    { "component_name" : "Tez" },
    { "component_name" : "Hue" },
    { "component_name" : "Loader" },
    { "component_name" : "Flink" }
  ],
  "add_jobs" : [
    {
      "job_type" : 1,
      "job_name" : "tenji111",
      "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar",
      "arguments" : "wordcount",
      "input" : "s3a://bigdata/input/wd_1k/",
      "output" : "s3a://bigdata/output/",
      "job_log" : "s3a://bigdata/log/",
      "shutdown_cluster" : false,
      "file_action" : "",
      "submit_job_once_cluster_run" : true,
      "hql" : "",
      "hive_script_path" : ""
    }
  ],
  "bootstrap_scripts" : [
    {
      "name" : "Install zeppelin",
      "uri" : "s3a://XXX/zeppelin_install.sh",
      "parameters" : "",
      "nodes" : [ "master" ],
      "active_master" : false,
      "before_component_start" : false,
      "start_time" : 1667892101,
      "state" : "IN_PROGRESS",
      "fail_action" : "continue",
      "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
    }
  ]
}
```
Example Response
Status code: 200
The cluster is created.
{ "cluster_id" : "da1592c2-bb7e-468d-9ac9-83246e95447a", "result" : true, "msg" : "" }
Error Codes
See Error Codes.