Updated: 2024-01-23 GMT+08:00

Creating a Cluster and Submitting a Job

Function

This API is used to create an MRS cluster and submit a job in the cluster. This API is incompatible with Sahara.

(You are advised to preferentially use the Create Cluster V2 API and the Create Cluster and Submit Job V2 API to create a cluster or to create a cluster and submit a job.)

A maximum of 10 clusters can be created concurrently. Before using this API, obtain the following resources:

  • Create or query a VPC and subnet through the VPC service.

  • Create or query a key pair through the ECS service.

  • Obtain region information from Endpoints.

  • See Components Supported by MRS to obtain the MRS versions and the components supported by each version.

Constraints

  • A cluster can be logged in to using either a password or a key pair; you must choose one of the two (see the sketch after this list).

    • Password mode: configure the root password for accessing cluster nodes, that is, cluster_master_secret.

    • Key pair mode: configure the key pair name, that is, node_public_cert_name.

  • Disk parameters can be expressed either by volume_type and volume_size, or by the multi-disk parameters (master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, and core_data_volume_count). Configure exactly one of the two groups.
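
For illustration, a minimal sketch of the two mutually exclusive login configurations in the request body (the values are placeholders only, not real credentials):

    Password mode (login_mode = 0):
    { "login_mode" : 0, "cluster_master_secret" : "YourP@ssw0rd" }

    Key pair mode (login_mode = 1):
    { "login_mode" : 1, "node_public_cert_name" : "SSHkey-bba1" }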

Calling Method

For details, see Calling APIs.

URI

POST /v1.1/{project_id}/run-job-flow
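
For example, with a hypothetical project ID substituted into the path, the request line looks like this:

    POST https://{endpoint}/v1.1/2a473b9c9e4f4a3c9e15c5b34a7f9d21/run-job-flow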

Table 1 Path parameter

project_id (String): Project ID. For details about how to obtain it, see Obtaining a Project ID.

Request Parameters

Table 2 Request body parameters

cluster_version (String): Cluster version. Example: MRS 3.1.0.

cluster_name (String): Cluster name, which must be unique. It can contain only letters, digits, hyphens (-), and underscores (_), and must be 1 to 64 characters long.

master_node_num (Integer): Number of Master nodes. Set this parameter to 2 if cluster HA is enabled, or to 1 if it is disabled. MRS 3.x does not currently support setting this parameter to 1.

core_node_num (Integer): Number of Core nodes. Value range: 1 to 500. The default maximum is 500; if you need more than 500 Core nodes, apply for a higher quota.

billing_type (Integer): Billing mode of the cluster. 12: pay-per-use. Only pay-per-use clusters can be created by calling this API.

data_center (String): Region of the cluster. For details, see Endpoints.

vpc (String): Name of the VPC where the subnet is located. Obtain the VPC name from the VPC management console:

  1. Log in to the management console.

  2. Click "Virtual Private Cloud" and choose "Virtual Private Cloud" from the navigation pane on the left.

The VPC name is displayed in the list on the "Virtual Private Cloud" page.

master_node_size (String): Instance specification of Master nodes, for example, c3.4xlarge.2.linux.bigdata. The host specifications supported by MRS are determined by CPU, memory, and disk together. For details, see "ECS Specifications Used by MRS" and "BMS Specifications Used by MRS". You are advised to obtain the specifications supported by the target region and version from the cluster creation page of the MRS console.

core_node_size (String): Instance specification of Core nodes, for example, c3.4xlarge.2.linux.bigdata. For details, see "ECS Specifications Used by MRS" and "BMS Specifications Used by MRS". You are advised to obtain the specifications supported by the target region and version from the cluster creation page of the MRS console.

component_list (Array of ComponentAmbV11 objects): List of service components to install.

available_zone_id (String): AZ ID. The following lists only some AZ IDs; call the API for querying AZ information to obtain the IDs of AZs in other regions.

  • CN North-Beijing1 AZ1 (cn-north-1a): ae04cf9d61544df3806a3feeb401b204

  • CN North-Beijing1 AZ2 (cn-north-1b): d573142f24894ef3bd3664de068b44b0

  • CN East-Shanghai2 AZ1 (cn-east-2a): 72d50cedc49846b9b42c21495f38d81c

  • CN East-Shanghai2 AZ2 (cn-east-2b): 38b0f7a602344246bcb0da47b5d548e7

  • CN East-Shanghai2 AZ3 (cn-east-2c): 5547fd6bf8f84bb5a7f9db062ad3d015

  • CN South-Guangzhou AZ1 (cn-south-1a): 34f5ff4865cf4ed6b270f15382ebdec5

  • CN South-Guangzhou AZ2 (cn-south-1b): 043c7e39ecb347a08dc8fcb6c35a274e

  • CN South-Guangzhou AZ3 (cn-south-1c): af1687643e8c4ec1b34b688e4e3b8901

  • CN North-Beijing4 AZ1 (cn-north-4a): effdcbc7d4d64a02aa1fa26b42f56533

  • CN North-Beijing4 AZ2 (cn-north-4b): a0865121f83b41cbafce65930a22a6e8

  • CN North-Beijing4 AZ3 (cn-north-4c): 2dcb154ac2724a6d92e9bcc859657c1e

vpc_id (String): ID of the VPC where the subnet is located. Obtain the VPC ID from the VPC management console:

  1. Log in to the management console.

  2. Click "Virtual Private Cloud" and choose "Virtual Private Cloud" from the navigation pane on the left.

The VPC ID is displayed in the list on the "Virtual Private Cloud" page.

subnet_id (String): Subnet ID. Obtain the subnet ID from the VPC management console:

  1. Log in to the management console.

  2. Click "Virtual Private Cloud" and choose "Virtual Private Cloud" from the navigation pane on the left.

  3. Click the number in the "Subnets" column of the target VPC to view the subnets.

  4. Click the subnet name and obtain the "Network ID".

At least one of "subnet_id" and "subnet_name" must be specified. If both are configured but do not match the same subnet, cluster creation fails, so fill in the parameters carefully. "subnet_id" is recommended.

subnet_name (String): Subnet name. Obtain the subnet name from the VPC management console:

  1. Log in to the management console.

  2. Click "Virtual Private Cloud" and choose "Virtual Private Cloud" from the navigation pane on the left.

  3. Click the number in the "Subnets" column of the target VPC to view the subnets and obtain the subnet name.

At least one of "subnet_id" and "subnet_name" must be specified. If both are configured but do not match the same subnet, cluster creation fails, so fill in the parameters carefully. If only "subnet_name" is specified and the VPC contains more than one subnet with that name, the first subnet with that name on the VPC platform is used when the cluster is created. "subnet_id" is recommended.

security_groups_id (String): ID of the cluster security group.

  • If this ID is left empty, MRS automatically creates a security group whose name starts with mrs_{cluster_name}.

  • If this ID is not empty, a fixed security group is used to create the cluster. The ID must belong to a security group of the current tenant, and the security group must contain an inbound rule that allows all protocols on all ports, with the source address set to the specified IP addresses of the management-plane nodes.

add_jobs (Array of AddJobsReqV11 objects): Jobs can be submitted when the cluster is created. Currently, only one job can be added.

volume_size (Integer): Data disk storage space of Master and Core nodes. To increase data storage capacity, you can add disks when creating a cluster. Select a disk size based on the following scenarios:

  • Decoupled storage and compute: data is stored in OBS. The cluster costs less, computing performance is lower, and the cluster can be deleted at any time. Recommended when data computing is infrequent.

  • Coupled storage and compute: data is stored in HDFS. The cluster costs more, computing performance is high, and the cluster needs to exist for a long time. Recommended when data computing is frequent.

Value range: 100 GB to 32000 GB. Pass only the number, without the unit GB. This parameter is not recommended; for details, see the description of volume_type.

volume_type (String): Disk storage type of Master and Core nodes. Currently, SATA, SAS, SSD, and GPSSD are supported. Disk parameters can be expressed either by volume_type and volume_size or by the multi-disk parameters. If volume_type and volume_size coexist with the multi-disk parameters, the system reads volume_type and volume_size first. You are advised to use the multi-disk parameters.

  • SATA: common I/O

  • SAS: high I/O

  • SSD: ultra-high I/O

  • GPSSD: general-purpose SSD

master_data_volume_type (String): A multi-disk parameter indicating the data disk storage type of Master nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

master_data_volume_size (Integer): A multi-disk parameter indicating the data disk storage space of Master nodes. To increase data storage capacity, you can add disks when creating a cluster. Value range: 100 GB to 32000 GB. Pass only the number, without the unit GB.

master_data_volume_count (Integer): A multi-disk parameter indicating the number of data disks of Master nodes. The value can only be 1.

core_data_volume_type (String): A multi-disk parameter indicating the data disk storage type of Core nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

core_data_volume_size (Integer): A multi-disk parameter indicating the data disk storage space of Core nodes. To increase data storage capacity, you can add disks when creating a cluster. Value range: 100 GB to 32000 GB. Pass only the number, without the unit GB.

core_data_volume_count (Integer): A multi-disk parameter indicating the number of data disks of Core nodes. Value range: 1 to 10.

task_node_groups (Array of TaskNodeGroup objects): List of Task node groups.

bootstrap_scripts (Array of BootstrapScript objects): Bootstrap action scripts.

node_public_cert_name (String): Key pair name. You can use a key pair to log in to cluster nodes. When "login_mode" is set to 1, the request body contains the node_public_cert_name field.

cluster_admin_secret (String): Password of the MRS Manager administrator.

  • Must be 8 to 26 characters long.

  • Cannot be the username or the username spelled backwards.

  • Must contain all four of the following character types:

    • At least one lowercase letter

    • At least one uppercase letter

    • At least one digit

    • At least one special character: !@$%^-_=+[{}]:,./?

cluster_master_secret (String): root password for accessing cluster nodes. When "login_mode" is set to 0, the request body contains the cluster_master_secret field. Password constraints:

  • A string of 8 to 26 characters.

  • Must contain at least four types of characters: uppercase letters, lowercase letters, digits, and special characters (!@$%^-_=+[{}]:,./?). Spaces are not allowed.

  • Cannot be the username or the username spelled backwards.

safe_mode (Integer): Running mode of the MRS cluster.

  • 0: normal cluster. Kerberos authentication is disabled, and users can use all functions provided by the cluster.

  • 1: security cluster. Kerberos authentication is enabled. Common users have no permission to use the "File Management" and "Job Management" functions of the MRS cluster and cannot view Hadoop or Spark job records or cluster resource usage. To use more cluster functions, ask the MRS Manager administrator to assign permissions.

cluster_type (Integer): Cluster type. Default value: 0 (analysis cluster). Note: Hybrid clusters cannot currently be created through this API. Enumeration values:

  • 0: analysis cluster

  • 1: streaming cluster

log_collection (Integer): Whether to collect logs when cluster creation fails. Default value: 1, which means an OBS bucket is created and used only to collect logs of failed cluster creation. Enumeration values:

  • 0: do not collect

  • 1: collect

enterprise_project_id (String): Enterprise project ID. When you create a cluster, the cluster is bound to the enterprise project with this ID. Default value: 0, which indicates the default enterprise project. To obtain the ID, see the "id" field in the "enterprise_project field data structure" table of the "Querying the Enterprise Project List" response in the Enterprise Management API Reference.

tags (Array of Tag objects): Cluster tags. A cluster can have a maximum of 20 tags, and tag keys must be unique. Tag keys and values cannot contain "=", "*", "<", ">", "\", ",", "|", or "/".

login_mode (Integer): Cluster login mode. Default value: 1.

  • When "login_mode" is set to 0, the request body contains the cluster_master_secret field.

  • When "login_mode" is set to 1, the request body contains the node_public_cert_name field.

Enumeration values:

  • 0: password

  • 1: key pair

node_groups (Array of NodeGroupV11 objects): List of node groups. Note: Configure either this parameter or the following group of parameters: master_node_num, master_node_size, core_node_num, core_node_size, master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, core_data_volume_count, volume_type, volume_size, task_node_groups.

Table 3 ComponentAmbV11

component_name (String): Component name.

Table 4 AddJobsReqV11

job_type (Integer): Job type code.

  • 1: MapReduce

  • 2: Spark

  • 3: Hive Script

  • 4: HiveSQL (not supported currently)

  • 5: DistCp, for importing and exporting data (not supported currently)

  • 6: Spark Script

  • 7: Spark SQL, for submitting SQL statements (not supported currently)

job_name (String): Job name. It can contain only letters, digits, hyphens (-), and underscores (_), and must be 1 to 64 characters long. Note: Different jobs may have the same name, but this is not recommended.

jar_path (String): Path of the JAR or SQL file of the program to execute. The path must meet the following requirements:

  • Contains a maximum of 1023 characters and cannot contain the special characters ;|&>,<'$. It cannot be empty or consist only of spaces.

  • The file can be stored in HDFS or OBS, and the path format differs accordingly:

    • OBS: the path starts with "s3a://". Files or programs encrypted by KMS are not supported.

    • HDFS: the path starts with "/".

  • A Spark Script file must end with ".sql", and a MapReduce or Spark JAR file must end with ".jar". The extensions .sql and .jar are case-insensitive.

arguments (String): Key parameters for program execution. These parameters are specified by a function in the user's program; MRS only passes them through. Contains a maximum of 150000 characters and cannot contain the special characters ;|&>'<$. Can be empty.

input (String): Data input path. The file can be stored in HDFS or OBS, and the path format differs accordingly:

  • OBS: the path starts with "s3a://". Files or programs encrypted by KMS are not supported.

  • HDFS: the path starts with "/".

Contains a maximum of 1023 characters and cannot contain the special characters ;|&>'<$. Can be empty.

output (String): Data output path. The file can be stored in HDFS or OBS, and the path format differs accordingly:

  • OBS: the path starts with "s3a://".

  • HDFS: the path starts with "/".

If the path does not exist, the system automatically creates it. Contains a maximum of 1023 characters and cannot contain the special characters ;|&>'<$. Can be empty.

job_log (String): Path for storing job logs, which record the job running status. The file can be stored in HDFS or OBS, and the path format differs accordingly:

  • OBS: the path starts with "s3a://".

  • HDFS: the path starts with "/".

Contains a maximum of 1023 characters and cannot contain the special characters ;|&>'<$. Can be empty.

hive_script_path (String): Path of the SQL program. This parameter is required only for Spark Script and Hive Script jobs. The path must meet the following requirements:

  • Contains a maximum of 1023 characters and cannot contain the special characters ;|&><'$. It cannot be empty or consist only of spaces.

  • The file can be stored in HDFS or OBS, and the path format differs accordingly:

    • OBS: the path starts with "s3a://". Files or programs encrypted by KMS are not supported.

    • HDFS: the path starts with "/".

  • Must end with ".sql"; .sql is case-insensitive.

hql (String): HQL script statement.

shutdown_cluster (Boolean): Whether to delete the cluster after the job completes.

  • true: yes

  • false: no

submit_job_once_cluster_run (Boolean):

  • true: submit the job during cluster creation

  • false: submit the job separately

Set this parameter to true here.

file_action (String): Data import or export.

  • import

  • export
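
Putting these fields together, a minimal add_jobs entry for a MapReduce job might look as follows (the bucket and JAR names are placeholders for illustration only):

    {
      "job_type" : 1,
      "job_name" : "wordcount_demo",
      "jar_path" : "s3a://mybucket/program/wordcount.jar",
      "arguments" : "wordcount",
      "submit_job_once_cluster_run" : true,
      "shutdown_cluster" : false
    }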

Table 5 TaskNodeGroup

node_num (Integer): Number of Task nodes. Value range: 0 to 500. The total number of Core and Task nodes cannot exceed 500.

node_size (String): Instance specification of Task nodes, for example, c3.4xlarge.2.linux.bigdata. For details, see "ECS Specifications Used by MRS" and "BMS Specifications Used by MRS". You are advised to obtain the specifications supported by the target region and version from the cluster creation page of the MRS console.

data_volume_type (String): Data disk storage type of Task nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

  • SATA: common I/O

  • SAS: high I/O

  • SSD: ultra-high I/O

  • GPSSD: general-purpose SSD

data_volume_count (Integer): Number of data disks of Task nodes. Value range: 0 to 10.

data_volume_size (Integer): Data disk storage space of Task nodes. Value range: 100 GB to 32000 GB. Pass only the number, without the unit GB.

auto_scaling_policy (AutoScalingPolicy object): Auto scaling rule information.

Table 6 BootstrapScript

name (String): Name of the bootstrap action script. Names must be unique within a cluster. The name can contain only digits, letters, spaces, hyphens (-), and underscores (_), cannot start with a space, and must be 1 to 64 characters long.

uri (String): Path of the bootstrap action script. Set this to an OBS bucket path or a local VM path.

  • OBS bucket path: enter the script path directly, for example, the path of a public sample script provided by MRS. Example: s3a://bootstrap/presto/presto-install.sh. When dualroles is installed, the presto-install.sh script parameter is dualroles; when worker is installed, the parameter is worker. Based on the Presto usage convention, you are advised to install dualroles on active Master nodes and worker on Core nodes.

  • Local VM path: enter a correct script path. The path must start with '/' and end with .sh.

parameters (String): Parameters of the bootstrap action script.

nodes (Array of strings): Types of nodes on which the bootstrap action script runs: master, core, and task. Note: node types must be in lowercase.

active_master (Boolean): Whether the bootstrap action script runs only on the active Master node. Default value: false, meaning the script can run on all Master nodes.

fail_action (String): Whether to continue executing subsequent scripts and creating the cluster after the bootstrap action script fails. Default value: errorout, meaning the operation is stopped. Note: you are advised to set this to "continue" during commissioning so that the cluster can continue to be installed and started regardless of whether the bootstrap action succeeds. Enumeration values:

  • continue: continue executing subsequent scripts.

  • errorout: stop the operation.

before_component_start (Boolean): Time when the bootstrap action script is executed. Currently, "before component start" and "after component start" are supported. Default value: false, meaning the script is executed after component start.

start_time (Long): Execution time of one bootstrap action script.

state (String): Running state of one bootstrap action script.

  • PENDING

  • IN_PROGRESS

  • SUCCESS

  • FAILURE

action_stages (Array of strings): Stages at which the bootstrap action script is executed.

  • BEFORE_COMPONENT_FIRST_START: before components are first started

  • AFTER_COMPONENT_FIRST_START: after components are first started

  • BEFORE_SCALE_IN: before scale-in

  • AFTER_SCALE_IN: after scale-in

  • BEFORE_SCALE_OUT: before scale-out

  • AFTER_SCALE_OUT: after scale-out

Table 7 Tag

key (String): Tag key.

  • Contains a maximum of 128 characters and cannot be an empty string.

  • Cannot contain non-printable ASCII characters (0-31) or the characters "=", "*", "<", ">", "\", ",", "|", "/", and cannot start or end with a space.

  • Keys must be unique within a resource.

value (String): Tag value.

  • Contains a maximum of 255 characters and can be an empty string.

  • Cannot contain non-printable ASCII characters (0-31) or the characters "=", "*", "<", ">", "\", ",", "|", "/", and cannot start or end with a space.

Table 8 NodeGroupV11

group_name (String): Node group name.

  • master_node_default_group

  • core_node_analysis_group

  • core_node_streaming_group

  • task_node_analysis_group

  • task_node_streaming_group

node_num (Integer): Number of nodes. Value range: 0 to 500. The total number of Core and Task nodes cannot exceed 500.

node_size (String): Instance specification of the nodes, for example, c3.4xlarge.2.linux.bigdata. The host specifications supported by MRS are determined by CPU, memory, and disk together. For details, see "ECS Specifications Used by MRS" and "BMS Specifications Used by MRS". You are advised to obtain the specifications supported by the target region and version from the cluster creation page of the MRS console.

root_volume_size (String): System disk storage space of the nodes.

root_volume_type (String): System disk storage type of the nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

  • SATA: common I/O

  • SAS: high I/O

  • SSD: ultra-high I/O

  • GPSSD: general-purpose SSD

data_volume_type (String): Data disk storage type of the nodes. Currently, SATA, SAS, SSD, and GPSSD are supported.

  • SATA: common I/O

  • SAS: high I/O

  • SSD: ultra-high I/O

  • GPSSD: general-purpose SSD

data_volume_count (Integer): Number of data disks of the nodes. Value range: 0 to 10.

data_volume_size (Integer): Data disk storage space of the nodes. Value range: 100 GB to 32000 GB.

auto_scaling_policy (AutoScalingPolicy object): Auto scaling rule information. This parameter is valid only when "group_name" is set to "task_node_analysis_group" or "task_node_streaming_group".

Table 9 AutoScalingPolicy

auto_scaling_enable (Boolean): Whether the auto scaling rule is enabled.

min_capacity (Integer): Minimum number of nodes reserved in the node group. Value range: 0 to 500.

max_capacity (Integer): Maximum number of nodes in the node group. Value range: 0 to 500.

resources_plans (Array of ResourcesPlan objects): Resource plan list. If this parameter is left empty, resource plans are disabled. When auto scaling is enabled, at least one of resource plans and auto scaling rules must be configured.

rules (Array of Rule objects): List of auto scaling rules. When auto scaling is enabled, at least one of resource plans and auto scaling rules must be configured.

exec_scripts (Array of ScaleScript objects): List of custom automation scripts for auto scaling. If this parameter is left empty, automation scripts are disabled.

Table 10 ResourcesPlan

period_type (String): Period type of the resource plan. Currently, only the following type is allowed: daily

start_time (String): Start time of the resource plan, in "hour:minute" format, between 0:00 and 23:59.

end_time (String): End time of the resource plan, in the same format as "start_time". It cannot be earlier than the time indicated by start_time, and the interval between the two must be at least 30 minutes.

min_capacity (Integer): Minimum number of nodes reserved in the node group within the resource plan. Value range: 0 to 500.

max_capacity (Integer): Maximum number of nodes reserved in the node group within the resource plan. Value range: 0 to 500.
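
For example (illustrative values only), a resource plan that keeps between 2 and 5 nodes in the node group from 8:00 to 10:00 every day, satisfying the minimum 30-minute interval, could look like this:

    {
      "period_type" : "daily",
      "start_time" : "8:00",
      "end_time" : "10:00",
      "min_capacity" : 2,
      "max_capacity" : 5
    }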

Table 11 Rule

name (String): Name of the auto scaling rule. It can contain only letters, digits, hyphens (-), and underscores (_), and must be 1 to 64 characters long. Names must be unique within a node group.

description (String): Description of the auto scaling rule. Contains a maximum of 1024 characters.

adjustment_type (String): Adjustment type of the auto scaling rule. Only the following types are allowed. Enumeration values:

  • scale_out: scale out

  • scale_in: scale in

cool_down_minutes (Integer): Duration, in minutes, during which the cluster stays in the cooldown state (no auto scaling operations are performed) after the rule is triggered. Value range: 0 to 10080, where 10080 is the number of minutes in a week.

scaling_adjustment (Integer): Number of nodes adjusted in a single operation. Value range: 1 to 100.

trigger (Trigger object): Condition that triggers the rule.

Table 12 Trigger

metric_name (String): Metric name. The trigger condition is evaluated against the value of this metric. Contains a maximum of 64 characters. For details about metric names, see "Configuring Auto Scaling for an MRS Cluster".

metric_value (String): Metric threshold that triggers the condition. Only integers or numbers with two decimal places are allowed.

comparison_operator (String): Logical operator for comparing the metric against the threshold:

  • LT: less than

  • GT: greater than

  • LTOE: less than or equal to

  • GTOE: greater than or equal to

evaluation_periods (Integer): Number of consecutive periods (one period is 5 minutes) during which the metric must meet the threshold. Value range: 1 to 288.

Table 13 ScaleScript

name (String): Name of the custom automation script. Names must be unique within a cluster. The name can contain only digits, letters, spaces, hyphens (-), and underscores (_), cannot start with a space, and must be 1 to 64 characters long.

uri (String): Path of the custom automation script. Set this to an OBS bucket path or a local VM path.

  • OBS bucket path: enter the script path directly. Example: s3a://XXX/scale.sh

  • Local VM path: enter a correct script path. The path must start with '/' and end with .sh.

parameters (String): Parameters of the custom automation script. Separate multiple parameters with spaces. The following predefined system parameters can be passed in:

  • ${mrs_scale_node_num}: number of nodes being added or removed

  • ${mrs_scale_type}: scaling type; scale_out for scale-out and scale_in for scale-in

  • ${mrs_scale_node_hostnames}: hostnames of the nodes being added or removed

  • ${mrs_scale_node_ips}: IP addresses of the nodes being added or removed

  • ${mrs_scale_rule_name}: name of the rule that triggered the scaling

Other user-defined parameters are used in the same way as in ordinary shell scripts: separate multiple parameters with spaces.

nodes (Array of strings): Names of the node groups on which the custom automation script runs (for non-custom clusters, node types can also be used: Master, Core, and Task).

active_master (Boolean): Whether the custom automation script runs only on the active Master node. Default value: false, meaning the script can run on all Master nodes.

fail_action (String): Whether to continue executing subsequent scripts and creating the cluster after the custom automation script fails. Note:

  • You are advised to set this to "continue" during commissioning so that the cluster can continue to be installed and started regardless of whether the script succeeds.

  • Because a successful scale-in cannot be rolled back, "fail_action" must be set to "continue" for scripts executed after scale-in.

Enumeration values:

  • continue: continue executing subsequent scripts.

  • errorout: stop the operation.

action_stage (String): Time when the script is executed. Enumeration values:

  • before_scale_out: before scale-out

  • before_scale_in: before scale-in

  • after_scale_out: after scale-out

  • after_scale_in: after scale-in
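
As an illustration (the bucket and script names are placeholders), an exec_scripts entry that passes the predefined variables to a script after scale-out could look like this:

    {
      "name" : "sync-hosts-after-scale-out",
      "uri" : "s3a://mybucket/sync_hosts.sh",
      "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
      "nodes" : [ "task_node_analysis_group" ],
      "active_master" : false,
      "fail_action" : "continue",
      "action_stage" : "after_scale_out"
    }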

Response Parameters

Status code: 200

Table 14 Response body parameters

result (Boolean): Operation result.

  • true: the operation succeeded

  • false: the operation failed

msg (String): System message, which can be empty.

cluster_id (String): Cluster ID returned by the system after the cluster is created.

Example Requests

  • Use the node_groups parameter group to create a cluster with cluster HA enabled, using cluster version MRS 3.1.0.

    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
      "cluster_name" : "mrs_HEbK",
      "cluster_version" : "MRS 3.1.0",
      "safe_mode" : 0,
      "cluster_type" : 0,
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      }, {
        "component_name" : "Presto"
      }, {
        "component_name" : "Tez"
      }, {
        "component_name" : "Hue"
      }, {
        "component_name" : "Loader"
      }, {
        "component_name" : "Flink"
      } ],
      "vpc" : "vpc-4b1c",
      "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
      "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
      "subnet_name" : "subnet-4b44",
      "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
      "enterprise_project_id" : "0",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      }, {
        "group_name" : "task_node_analysis_group",
        "node_num" : 2,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 0,
        "data_volume_size" : 600,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 1,
          "max_capacity" : "3",
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "9:50",
            "end_time" : "10:20",
            "min_capacity" : 2,
            "max_capacity" : 3
          }, {
            "period_type" : "daily",
            "start_time" : "10:20",
            "end_time" : "12:30",
            "min_capacity" : 0,
            "max_capacity" : 2
          } ],
          "exec_scripts" : [ {
            "name" : "before_scale_out",
            "uri" : "s3a://XXX/zeppelin_install.sh",
            "parameters" : "${mrs_scale_node_num} ${mrs_scale_type} xxx",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          }, {
            "name" : "after_scale_out",
            "uri" : "s3a://XXX/storm_rebalance.sh",
            "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "after_scale_out",
            "fail_action" : "continue"
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "25",
              "comparison_operator" : "LT",
              "evaluation_periods" : 10
            }
          }, {
            "name" : "default-shrink-1",
            "adjustment_type" : "scale_in",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "70",
              "comparison_operator" : "GT",
              "evaluation_periods" : 10
            }
          } ]
        }
      } ],
      "login_mode" : 1,
      "cluster_master_secret" : "",
      "cluster_admin_secret" : "",
      "log_collection" : 1,
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
        "arguments" : "wordcount",
        "input" : "s3a://bigdata/input/wd_1k/",
        "output" : "s3a://bigdata/ouput/",
        "job_log" : "s3a://bigdata/log/",
        "shutdown_cluster" : true,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Modify os config",
        "uri" : "s3a://XXX/modify_os_config.sh",
        "parameters" : "param1 param2",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : "false",
        "before_component_start" : "true",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
      }, {
        "name" : "Install zepplin",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "true",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }
  • Create a cluster with cluster HA enabled, using cluster version MRS 3.1.0, without using the node_groups parameter group.

    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "master_node_num" : 2,
      "master_node_size" : "s3.2xlarge.2.linux.bigdata",
      "core_node_num" : 3,
      "core_node_size" : "s1.xlarge.linux.bigdata",
      "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
      "cluster_name" : "newcluster",
      "vpc" : "vpc1",
      "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
      "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
      "subnet_name" : "subnet",
      "security_groups_id" : "845bece1-fd22-4b45-7a6e-14338c99ee43",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "cluster_version" : "MRS 3.1.0",
      "cluster_type" : 0,
      "master_data_volume_type" : "SATA",
      "master_data_volume_size" : 600,
      "master_data_volume_count" : 1,
      "core_data_volume_type" : "SATA",
      "core_data_volume_size" : 600,
      "core_data_volume_count" : 2,
      "node_public_cert_name" : "SSHkey-bba1",
      "safe_mode" : 0,
      "log_collection" : 1,
      "task_node_groups" : [ {
        "node_num" : 2,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 1,
          "max_capacity" : "3",
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "9: 50",
            "end_time" : "10: 20",
            "min_capacity" : 2,
            "max_capacity" : 3
          }, {
            "period_type" : "daily",
            "start_time" : "10: 20",
            "end_time" : "12: 30",
            "min_capacity" : 0,
            "max_capacity" : 2
          } ],
          "exec_scripts" : [ {
            "name" : "before_scale_out",
            "uri" : "s3a: //XXX/zeppelin_install.sh",
            "parameters" : "${mrs_scale_node_num}${mrs_scale_type}xxx",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          }, {
            "name" : "after_scale_out",
            "uri" : "s3a: //XXX/storm_rebalance.sh",
            "parameters" : "${mrs_scale_node_hostnames}${mrs_scale_node_ips}",
            "nodes" : [ "master", "core", "task" ],
            "active_master" : "true",
            "action_stage" : "after_scale_out",
            "fail_action" : "continue"
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "25",
              "comparison_operator" : "LT",
              "evaluation_periods" : 10
            }
          }, {
            "name" : "default-shrink-1",
            "adjustment_type" : "scale_in",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : 1,
            "trigger" : {
              "metric_name" : "YARNMemoryAvailablePercentage",
              "metric_value" : "70",
              "comparison_operator" : "GT",
              "evaluation_periods" : 10
            }
          } ]
        }
      } ],
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      } ],
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a: //bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
        "arguments" : "wordcount",
        "input" : "s3a: //bigdata/input/wd_1k/",
        "output" : "s3a: //bigdata/ouput/",
        "job_log" : "s3a: //bigdata/log/",
        "shutdown_cluster" : true,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Modifyosconfig",
        "uri" : "s3a: //XXX/modify_os_config.sh",
        "parameters" : "param1param2",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : "false",
        "before_component_start" : "true",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
      }, {
        "name" : "Installzepplin",
        "uri" : "s3a: //XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "true",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }
  • Use the node_groups parameter group to create a minimum-specification cluster with cluster HA disabled, using cluster version MRS 3.1.0.

    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
      "cluster_name" : "mrs_HEbK",
      "cluster_version" : "MRS 3.1.0",
      "safe_mode" : 0,
      "cluster_type" : 0,
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      }, {
        "component_name" : "Presto"
      }, {
        "component_name" : "Tez"
      }, {
        "component_name" : "Hue"
      }, {
        "component_name" : "Loader"
      }, {
        "component_name" : "Flink"
      } ],
      "vpc" : "vpc-4b1c",
      "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
      "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
      "subnet_name" : "subnet-4b44",
      "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
      "enterprise_project_id" : "0",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 1,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 1,
        "node_size" : "s3.xlarge.2.linux.bigdata",
        "root_volume_size" : 480,
        "root_volume_type" : "SATA",
        "data_volume_type" : "SATA",
        "data_volume_count" : 1,
        "data_volume_size" : 600
      } ],
      "login_mode" : 1,
      "cluster_master_secret" : "",
      "cluster_admin_secret" : "",
      "log_collection" : 1,
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
        "arguments" : "wordcount",
        "input" : "s3a://bigdata/input/wd_1k/",
        "output" : "s3a://bigdata/ouput/",
        "job_log" : "s3a://bigdata/log/",
        "shutdown_cluster" : true,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Modify os config",
        "uri" : "s3a://XXX/modify_os_config.sh",
        "parameters" : "param1 param2",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : "false",
        "before_component_start" : "true",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
      }, {
        "name" : "Install zepplin",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "true",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }
  • Create a minimum-specification cluster with cluster HA disabled, using cluster version MRS 3.1.0, without using the node_groups parameter group.

    POST https://{endpoint}/v1.1/{project_id}/run-job-flow
    
    {
      "billing_type" : 12,
      "data_center" : "",
      "master_node_num" : 1,
      "master_node_size" : "s3.2xlarge.2.linux.bigdata",
      "core_node_num" : 1,
      "core_node_size" : "s1.xlarge.linux.bigdata",
      "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
      "cluster_name" : "newcluster",
      "vpc" : "vpc1",
      "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
      "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
      "subnet_name" : "subnet",
      "security_groups_id" : "",
      "enterprise_project_id" : "0",
      "tags" : [ {
        "key" : "key1",
        "value" : "value1"
      }, {
        "key" : "key2",
        "value" : "value2"
      } ],
      "cluster_version" : "MRS 3.1.0",
      "cluster_type" : 0,
      "master_data_volume_type" : "SATA",
      "master_data_volume_size" : 600,
      "master_data_volume_count" : 1,
      "core_data_volume_type" : "SATA",
      "core_data_volume_size" : 600,
      "core_data_volume_count" : 1,
      "login_mode" : 1,
      "node_public_cert_name" : "SSHkey-bba1",
      "safe_mode" : 0,
      "cluster_admin_secret" : "******",
      "log_collection" : 1,
      "component_list" : [ {
        "component_name" : "Hadoop"
      }, {
        "component_name" : "Spark"
      }, {
        "component_name" : "HBase"
      }, {
        "component_name" : "Hive"
      }, {
        "component_name" : "Presto"
      }, {
        "component_name" : "Tez"
      }, {
        "component_name" : "Hue"
      }, {
        "component_name" : "Loader"
      }, {
        "component_name" : "Flink"
      } ],
      "add_jobs" : [ {
        "job_type" : 1,
        "job_name" : "tenji111",
        "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar",
        "arguments" : "wordcount",
        "input" : "s3a://bigdata/input/wd_1k/",
        "output" : "s3a://bigdata/ouput/",
        "job_log" : "s3a://bigdata/log/",
        "shutdown_cluster" : false,
        "file_action" : "",
        "submit_job_once_cluster_run" : true,
        "hql" : "",
        "hive_script_path" : ""
      } ],
      "bootstrap_scripts" : [ {
        "name" : "Install zepplin",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "",
        "nodes" : [ "master" ],
        "active_master" : "false",
        "before_component_start" : "false",
        "start_time" : "1667892101",
        "state" : "IN_PROGRESS",
        "fail_action" : "continue",
        "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
      } ]
    }

Example Response

Status code: 200

The cluster is created successfully.

{
  "cluster_id" : "da1592c2-bb7e-468d-9ac9-83246e95447a",
  "result" : true,
  "msg" : ""
}

SDK Sample Code

The following shows the SDK sample code.

  • Use the node_groups parameter group to create a cluster with cluster HA enabled, using cluster version MRS 3.1.0.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v1.*;
    import com.huaweicloud.sdk.mrs.v1.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
    
            ICredential auth = new BasicCredentials()
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV11 body = new CreateClusterReqV11();
            List<String> listExecScriptsNodes = new ArrayList<>();
            listExecScriptsNodes.add("master");
            listExecScriptsNodes.add("core");
            listExecScriptsNodes.add("task");
            List<String> listExecScriptsNodes1 = new ArrayList<>();
            listExecScriptsNodes1.add("master");
            listExecScriptsNodes1.add("core");
            listExecScriptsNodes1.add("task");
            List<ScaleScript> listAutoScalingPolicyExecScripts = new ArrayList<>();
            listAutoScalingPolicyExecScripts.add(
                new ScaleScript()
                    .withName("before_scale_out")
                    .withUri("s3a://XXX/zeppelin_install.sh")
                    .withParameters("${mrs_scale_node_num} ${mrs_scale_type} xxx")
                    .withNodes(listExecScriptsNodes1)
                    .withActiveMaster(true)
                    .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
                    .withActionStage(ScaleScript.ActionStageEnum.fromValue("before_scale_out"))
            );
            listAutoScalingPolicyExecScripts.add(
                new ScaleScript()
                    .withName("after_scale_out")
                    .withUri("s3a://XXX/storm_rebalance.sh")
                    .withParameters("${mrs_scale_node_hostnames} ${mrs_scale_node_ips}")
                    .withNodes(listExecScriptsNodes)
                    .withActiveMaster(true)
                    .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
                    .withActionStage(ScaleScript.ActionStageEnum.fromValue("after_scale_out"))
            );
            Trigger triggerRules = new Trigger();
            triggerRules.withMetricName("YARNMemoryAvailablePercentage")
                .withMetricValue("70")
                .withComparisonOperator("GT")
                .withEvaluationPeriods(10);
            Trigger triggerRules1 = new Trigger();
            triggerRules1.withMetricName("YARNMemoryAvailablePercentage")
                .withMetricValue("25")
                .withComparisonOperator("LT")
                .withEvaluationPeriods(10);
            List<Rule> listAutoScalingPolicyRules = new ArrayList<>();
            listAutoScalingPolicyRules.add(
                new Rule()
                    .withName("default-expand-1")
                    .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_out"))
                    .withCoolDownMinutes(5)
                    .withScalingAdjustment(1)
                    .withTrigger(triggerRules1)
            );
            listAutoScalingPolicyRules.add(
                new Rule()
                    .withName("default-shrink-1")
                    .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_in"))
                    .withCoolDownMinutes(5)
                    .withScalingAdjustment(1)
                    .withTrigger(triggerRules)
            );
            List<ResourcesPlan> listAutoScalingPolicyResourcesPlans = new ArrayList<>();
            listAutoScalingPolicyResourcesPlans.add(
                new ResourcesPlan()
                    .withPeriodType("daily")
                    .withStartTime("9:50")
                    .withEndTime("10:20")
                    .withMinCapacity(2)
                    .withMaxCapacity(3)
            );
            listAutoScalingPolicyResourcesPlans.add(
                new ResourcesPlan()
                    .withPeriodType("daily")
                    .withStartTime("10:20")
                    .withEndTime("12:30")
                    .withMinCapacity(0)
                    .withMaxCapacity(2)
            );
            AutoScalingPolicy autoScalingPolicyNodeGroups = new AutoScalingPolicy();
            autoScalingPolicyNodeGroups.withAutoScalingEnable(true)
                .withMinCapacity(1)
                .withMaxCapacity(3)
                .withResourcesPlans(listAutoScalingPolicyResourcesPlans)
                .withRules(listAutoScalingPolicyRules)
                .withExecScripts(listAutoScalingPolicyExecScripts);
            List<NodeGroupV11> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV11()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(2)
                    .withNodeSize("s3.xlarge.2.linux.bigdata")
                    .withRootVolumeSize("480")
                    .withRootVolumeType("SATA")
                    .withDataVolumeType("SATA")
                    .withDataVolumeCount(1)
                    .withDataVolumeSize(600)
            );
            listbodyNodeGroups.add(
                new NodeGroupV11()
                    .withGroupName("core_node_analysis_group")
                    .withNodeNum(3)
                    .withNodeSize("s3.xlarge.2.linux.bigdata")
                    .withRootVolumeSize("480")
                    .withRootVolumeType("SATA")
                    .withDataVolumeType("SATA")
                    .withDataVolumeCount(1)
                    .withDataVolumeSize(600)
            );
            listbodyNodeGroups.add(
                new NodeGroupV11()
                    .withGroupName("task_node_analysis_group")
                    .withNodeNum(2)
                    .withNodeSize("s3.xlarge.2.linux.bigdata")
                    .withRootVolumeSize("480")
                    .withRootVolumeType("SATA")
                    .withDataVolumeType("SATA")
                    .withDataVolumeCount(0)
                    .withDataVolumeSize(600)
                    .withAutoScalingPolicy(autoScalingPolicyNodeGroups)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("key1")
                    .withValue("value1")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("key2")
                    .withValue("value2")
            );
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
            List<String> listBootstrapScriptsNodes = new ArrayList<>();
            listBootstrapScriptsNodes.add("master");
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages1 = new ArrayList<>();
            listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_COMPONENT_FIRST_START"));
            listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_SCALE_IN"));
            List<String> listBootstrapScriptsNodes1 = new ArrayList<>();
            listBootstrapScriptsNodes1.add("master");
            listBootstrapScriptsNodes1.add("core");
            listBootstrapScriptsNodes1.add("task");
            List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Modify os config")
                    .withUri("s3a://XXX/modify_os_config.sh")
                    .withParameters("param1 param2")
                    .withNodes(listBootstrapScriptsNodes1)
                    .withActiveMaster(false)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(true)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages1)
            );
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Install zepplin")
                    .withUri("s3a://XXX/zeppelin_install.sh")
                    .withParameters("")
                    .withNodes(listBootstrapScriptsNodes)
                    .withActiveMaster(true)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(false)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages)
            );
            List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
            listbodyAddJobs.add(
                new AddJobsReqV11()
                    .withJobType(1)
                    .withJobName("tenji111")
                    .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar")
                    .withArguments("wordcount")
                    .withInput("s3a://bigdata/input/wd_1k/")
                    .withOutput("s3a://bigdata/ouput/")
                    .withJobLog("s3a://bigdata/log/")
                    .withHiveScriptPath("")
                    .withHql("")
                    .withShutdownCluster(true)
                    .withSubmitJobOnceClusterRun(true)
                    .withFileAction("")
            );
            List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hadoop")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Spark")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("HBase")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hive")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Presto")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Tez")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hue")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Loader")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Flink")
            );
            body.withNodeGroups(listbodyNodeGroups);
            body.withLoginMode(CreateClusterReqV11.LoginModeEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withEnterpriseProjectId("0");
            body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
            body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
            body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
            body.withClusterMasterSecret("");
            body.withClusterAdminSecret("");
            body.withBootstrapScripts(listbodyBootstrapScripts);
            body.withAddJobs(listbodyAddJobs);
            body.withSecurityGroupsId("4820eace-66ad-4f2c-8d46-cf340e3029dd");
            body.withSubnetName("subnet-4b44");
            body.withSubnetId("67984709-e15e-4e86-9886-d76712d4e00a");
            body.withVpcId("4a365717-67be-4f33-80c5-98e98a813af8");
            body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
            body.withComponentList(listbodyComponentList);
            body.withVpc("vpc-4b1c");
            body.withDataCenter("");
            body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
            body.withClusterName("mrs_HEbK");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Create a cluster with cluster HA enabled, using cluster version MRS 3.1.0, without using the node_groups parameter group.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v1.*;
    import com.huaweicloud.sdk.mrs.v1.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
    
            ICredential auth = new BasicCredentials()
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV11 body = new CreateClusterReqV11();
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("key1")
                    .withValue("value1")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("key2")
                    .withValue("value2")
            );
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
            List<String> listBootstrapScriptsNodes = new ArrayList<>();
            listBootstrapScriptsNodes.add("master");
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages1 = new ArrayList<>();
            listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_COMPONENT_FIRST_START"));
            listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_SCALE_IN"));
            List<String> listBootstrapScriptsNodes1 = new ArrayList<>();
            listBootstrapScriptsNodes1.add("master");
            listBootstrapScriptsNodes1.add("core");
            listBootstrapScriptsNodes1.add("task");
            List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Modifyosconfig")
                    .withUri("s3a: //XXX/modify_os_config.sh")
                    .withParameters("param1param2")
                    .withNodes(listBootstrapScriptsNodes1)
                    .withActiveMaster(false)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(true)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages1)
            );
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Installzepplin")
                    .withUri("s3a: //XXX/zeppelin_install.sh")
                    .withParameters("")
                    .withNodes(listBootstrapScriptsNodes)
                    .withActiveMaster(true)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(false)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages)
            );
            List<String> listExecScriptsNodes = new ArrayList<>();
            listExecScriptsNodes.add("master");
            listExecScriptsNodes.add("core");
            listExecScriptsNodes.add("task");
            List<String> listExecScriptsNodes1 = new ArrayList<>();
            listExecScriptsNodes1.add("master");
            listExecScriptsNodes1.add("core");
            listExecScriptsNodes1.add("task");
            List<ScaleScript> listAutoScalingPolicyExecScripts = new ArrayList<>();
            listAutoScalingPolicyExecScripts.add(
                new ScaleScript()
                    .withName("before_scale_out")
                    .withUri("s3a: //XXX/zeppelin_install.sh")
                    .withParameters("${mrs_scale_node_num}${mrs_scale_type}xxx")
                    .withNodes(listExecScriptsNodes1)
                    .withActiveMaster(true)
                    .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
                    .withActionStage(ScaleScript.ActionStageEnum.fromValue("before_scale_out"))
            );
            listAutoScalingPolicyExecScripts.add(
                new ScaleScript()
                    .withName("after_scale_out")
                    .withUri("s3a: //XXX/storm_rebalance.sh")
                    .withParameters("${mrs_scale_node_hostnames}${mrs_scale_node_ips}")
                    .withNodes(listExecScriptsNodes)
                    .withActiveMaster(true)
                    .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
                    .withActionStage(ScaleScript.ActionStageEnum.fromValue("after_scale_out"))
            );
            Trigger triggerRules = new Trigger();
            triggerRules.withMetricName("YARNMemoryAvailablePercentage")
                .withMetricValue("70")
                .withComparisonOperator("GT")
                .withEvaluationPeriods(10);
            Trigger triggerRules1 = new Trigger();
            triggerRules1.withMetricName("YARNMemoryAvailablePercentage")
                .withMetricValue("25")
                .withComparisonOperator("LT")
                .withEvaluationPeriods(10);
            List<Rule> listAutoScalingPolicyRules = new ArrayList<>();
            listAutoScalingPolicyRules.add(
                new Rule()
                    .withName("default-expand-1")
                    .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_out"))
                    .withCoolDownMinutes(5)
                    .withScalingAdjustment(1)
                    .withTrigger(triggerRules1)
            );
            listAutoScalingPolicyRules.add(
                new Rule()
                    .withName("default-shrink-1")
                    .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_in"))
                    .withCoolDownMinutes(5)
                    .withScalingAdjustment(1)
                    .withTrigger(triggerRules)
            );
            List<ResourcesPlan> listAutoScalingPolicyResourcesPlans = new ArrayList<>();
            listAutoScalingPolicyResourcesPlans.add(
                new ResourcesPlan()
                    .withPeriodType("daily")
                    .withStartTime("9: 50")
                    .withEndTime("10: 20")
                    .withMinCapacity(2)
                    .withMaxCapacity(3)
            );
            listAutoScalingPolicyResourcesPlans.add(
                new ResourcesPlan()
                    .withPeriodType("daily")
                    .withStartTime("10: 20")
                    .withEndTime("12: 30")
                    .withMinCapacity(0)
                    .withMaxCapacity(2)
            );
            AutoScalingPolicy autoScalingPolicyTaskNodeGroups = new AutoScalingPolicy();
            autoScalingPolicyTaskNodeGroups.withAutoScalingEnable(true)
                .withMinCapacity(1)
                .withMaxCapacity(3)
                .withResourcesPlans(listAutoScalingPolicyResourcesPlans)
                .withRules(listAutoScalingPolicyRules)
                .withExecScripts(listAutoScalingPolicyExecScripts);
            List<TaskNodeGroup> listbodyTaskNodeGroups = new ArrayList<>();
            listbodyTaskNodeGroups.add(
                new TaskNodeGroup()
                    .withNodeNum(2)
                    .withNodeSize("s3.xlarge.2.linux.bigdata")
                    .withDataVolumeType(TaskNodeGroup.DataVolumeTypeEnum.fromValue("SATA"))
                    .withDataVolumeCount(1)
                    .withDataVolumeSize(600)
                    .withAutoScalingPolicy(autoScalingPolicyTaskNodeGroups)
            );
            List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
            listbodyAddJobs.add(
                new AddJobsReqV11()
                    .withJobType(1)
                    .withJobName("tenji111")
                    .withJarPath("s3a: //bigdata/program/hadoop-mapreduce-examples-2.7.2.jar")
                    .withArguments("wordcount")
                    .withInput("s3a: //bigdata/input/wd_1k/")
                    .withOutput("s3a: //bigdata/ouput/")
                    .withJobLog("s3a: //bigdata/log/")
                    .withHiveScriptPath("")
                    .withHql("")
                    .withShutdownCluster(true)
                    .withSubmitJobOnceClusterRun(true)
                    .withFileAction("")
            );
            List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hadoop")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Spark")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("HBase")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hive")
            );
            body.withTags(listbodyTags);
            body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
            body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
            body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
            body.withNodePublicCertName("SSHkey-bba1");
            body.withBootstrapScripts(listbodyBootstrapScripts);
            body.withTaskNodeGroups(listbodyTaskNodeGroups);
            body.withCoreDataVolumeCount(2);
            body.withCoreDataVolumeSize(600);
            body.withCoreDataVolumeType(CreateClusterReqV11.CoreDataVolumeTypeEnum.fromValue("SATA"));
            body.withMasterDataVolumeCount(CreateClusterReqV11.MasterDataVolumeCountEnum.NUMBER_1);
            body.withMasterDataVolumeSize(600);
            body.withMasterDataVolumeType(CreateClusterReqV11.MasterDataVolumeTypeEnum.fromValue("SATA"));
            body.withAddJobs(listbodyAddJobs);
            body.withSecurityGroupsId("845bece1-fd22-4b45-7a6e-14338c99ee43");
            body.withSubnetName("subnet");
            body.withSubnetId("815bece0-fd22-4b65-8a6e-15788c99ee43");
            body.withVpcId("5b7db34d-3534-4a6e-ac94-023cd36aaf74");
            body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
            body.withComponentList(listbodyComponentList);
            body.withCoreNodeSize("s1.xlarge.linux.bigdata");
            body.withMasterNodeSize("s3.2xlarge.2.linux.bigdata");
            body.withVpc("vpc1");
            body.withDataCenter("");
            body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
            body.withCoreNodeNum(3);
            body.withMasterNodeNum(2);
            body.withClusterName("newcluster");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Use the node_groups parameter group to create a minimum-specification cluster of version MRS 3.1.0 with the cluster HA function disabled.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v1.*;
    import com.huaweicloud.sdk.mrs.v1.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
    
            ICredential auth = new BasicCredentials()
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV11 body = new CreateClusterReqV11();
            List<NodeGroupV11> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV11()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(1)
                    .withNodeSize("s3.xlarge.2.linux.bigdata")
                    .withRootVolumeSize("480")
                    .withRootVolumeType("SATA")
                    .withDataVolumeType("SATA")
                    .withDataVolumeCount(1)
                    .withDataVolumeSize(600)
            );
            listbodyNodeGroups.add(
                new NodeGroupV11()
                    .withGroupName("core_node_analysis_group")
                    .withNodeNum(1)
                    .withNodeSize("s3.xlarge.2.linux.bigdata")
                    .withRootVolumeSize("480")
                    .withRootVolumeType("SATA")
                    .withDataVolumeType("SATA")
                    .withDataVolumeCount(1)
                    .withDataVolumeSize(600)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("key1")
                    .withValue("value1")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("key2")
                    .withValue("value2")
            );
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
            List<String> listBootstrapScriptsNodes = new ArrayList<>();
            listBootstrapScriptsNodes.add("master");
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages1 = new ArrayList<>();
            listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_COMPONENT_FIRST_START"));
            listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_SCALE_IN"));
            List<String> listBootstrapScriptsNodes1 = new ArrayList<>();
            listBootstrapScriptsNodes1.add("master");
            listBootstrapScriptsNodes1.add("core");
            listBootstrapScriptsNodes1.add("task");
            List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Modify os config")
                    .withUri("s3a://XXX/modify_os_config.sh")
                    .withParameters("param1 param2")
                    .withNodes(listBootstrapScriptsNodes1)
                    .withActiveMaster(false)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(true)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages1)
            );
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Install zepplin")
                    .withUri("s3a://XXX/zeppelin_install.sh")
                    .withParameters("")
                    .withNodes(listBootstrapScriptsNodes)
                    .withActiveMaster(true)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(false)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages)
            );
            List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
            listbodyAddJobs.add(
                new AddJobsReqV11()
                    .withJobType(1)
                    .withJobName("tenji111")
                    .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar")
                    .withArguments("wordcount")
                    .withInput("s3a://bigdata/input/wd_1k/")
                    .withOutput("s3a://bigdata/ouput/")
                    .withJobLog("s3a://bigdata/log/")
                    .withHiveScriptPath("")
                    .withHql("")
                    .withShutdownCluster(true)
                    .withSubmitJobOnceClusterRun(true)
                    .withFileAction("")
            );
            List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hadoop")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Spark")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("HBase")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hive")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Presto")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Tez")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hue")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Loader")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Flink")
            );
            body.withNodeGroups(listbodyNodeGroups);
            body.withLoginMode(CreateClusterReqV11.LoginModeEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withEnterpriseProjectId("0");
            body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
            body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
            body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
            body.withClusterMasterSecret("");
            body.withClusterAdminSecret("");
            body.withBootstrapScripts(listbodyBootstrapScripts);
            body.withAddJobs(listbodyAddJobs);
            body.withSecurityGroupsId("4820eace-66ad-4f2c-8d46-cf340e3029dd");
            body.withSubnetName("subnet-4b44");
            body.withSubnetId("67984709-e15e-4e86-9886-d76712d4e00a");
            body.withVpcId("4a365717-67be-4f33-80c5-98e98a813af8");
            body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
            body.withComponentList(listbodyComponentList);
            body.withVpc("vpc-4b1c");
            body.withDataCenter("");
            body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
            body.withClusterName("mrs_HEbK");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Create a minimum-specification cluster of version MRS 3.1.0 with the cluster HA function disabled, without using the node_groups parameter group.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v1.*;
    import com.huaweicloud.sdk.mrs.v1.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
    
            ICredential auth = new BasicCredentials()
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV11 body = new CreateClusterReqV11();
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("key1")
                    .withValue("value1")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("key2")
                    .withValue("value2")
            );
            List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
            listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
            List<String> listBootstrapScriptsNodes = new ArrayList<>();
            listBootstrapScriptsNodes.add("master");
            List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
            listbodyBootstrapScripts.add(
                new BootstrapScript()
                    .withName("Install zepplin")
                    .withUri("s3a://XXX/zeppelin_install.sh")
                    .withParameters("")
                    .withNodes(listBootstrapScriptsNodes)
                    .withActiveMaster(false)
                    .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
                    .withBeforeComponentStart(false)
                    .withStartTime(1667892101L)
                    .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
                    .withActionStages(listBootstrapScriptsActionStages)
            );
            List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
            listbodyAddJobs.add(
                new AddJobsReqV11()
                    .withJobType(1)
                    .withJobName("tenji111")
                    .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar")
                    .withArguments("wordcount")
                    .withInput("s3a://bigdata/input/wd_1k/")
                    .withOutput("s3a://bigdata/ouput/")
                    .withJobLog("s3a://bigdata/log/")
                    .withHiveScriptPath("")
                    .withHql("")
                    .withShutdownCluster(false)
                    .withSubmitJobOnceClusterRun(true)
                    .withFileAction("")
            );
            List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hadoop")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Spark")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("HBase")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hive")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Presto")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Tez")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Hue")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Loader")
            );
            listbodyComponentList.add(
                new ComponentAmbV11()
                    .withComponentName("Flink")
            );
            body.withLoginMode(CreateClusterReqV11.LoginModeEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withEnterpriseProjectId("0");
            body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
            body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
            body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
            body.withClusterAdminSecret("******");
            body.withNodePublicCertName("SSHkey-bba1");
            body.withBootstrapScripts(listbodyBootstrapScripts);
            body.withCoreDataVolumeCount(1);
            body.withCoreDataVolumeSize(600);
            body.withCoreDataVolumeType(CreateClusterReqV11.CoreDataVolumeTypeEnum.fromValue("SATA"));
            body.withMasterDataVolumeCount(CreateClusterReqV11.MasterDataVolumeCountEnum.NUMBER_1);
            body.withMasterDataVolumeSize(600);
            body.withMasterDataVolumeType(CreateClusterReqV11.MasterDataVolumeTypeEnum.fromValue("SATA"));
            body.withAddJobs(listbodyAddJobs);
            body.withSecurityGroupsId("");
            body.withSubnetName("subnet");
            body.withSubnetId("815bece0-fd22-4b65-8a6e-15788c99ee43");
            body.withVpcId("5b7db34d-3534-4a6e-ac94-023cd36aaf74");
            body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
            body.withComponentList(listbodyComponentList);
            body.withCoreNodeSize("s1.xlarge.linux.bigdata");
            body.withMasterNodeSize("s3.2xlarge.2.linux.bigdata");
            body.withVpc("vpc1");
            body.withDataCenter("");
            body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
            body.withCoreNodeNum(1);
            body.withMasterNodeNum(1);
            body.withClusterName("newcluster");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Use the node_groups parameter group to create a cluster of version MRS 3.1.0 with the cluster HA function enabled; an illustrative sketch of the scaling-rule trigger semantics follows the example.

    # coding: utf-8
    
    import os

    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v1 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.getenv("CLOUD_SDK_AK")
        sk = os.getenv("CLOUD_SDK_SK")
    
        credentials = BasicCredentials(ak, sk)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listNodesExecScripts = [
                "master",
                "core",
                "task"
            ]
            listNodesExecScripts1 = [
                "master",
                "core",
                "task"
            ]
            listExecScriptsAutoScalingPolicy = [
                ScaleScript(
                    name="before_scale_out",
                    uri="s3a://XXX/zeppelin_install.sh",
                    parameters="${mrs_scale_node_num} ${mrs_scale_type} xxx",
                    nodes=listNodesExecScripts1,
                    active_master=True,
                    fail_action="continue",
                    action_stage="before_scale_out"
                ),
                ScaleScript(
                    name="after_scale_out",
                    uri="s3a://XXX/storm_rebalance.sh",
                    parameters="${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
                    nodes=listNodesExecScripts,
                    active_master=True,
                    fail_action="continue",
                    action_stage="after_scale_out"
                )
            ]
            triggerRules = Trigger(
                metric_name="YARNMemoryAvailablePercentage",
                metric_value="70",
                comparison_operator="GT",
                evaluation_periods=10
            )
            triggerRules1 = Trigger(
                metric_name="YARNMemoryAvailablePercentage",
                metric_value="25",
                comparison_operator="LT",
                evaluation_periods=10
            )
            listRulesAutoScalingPolicy = [
                Rule(
                    name="default-expand-1",
                    adjustment_type="scale_out",
                    cool_down_minutes=5,
                    scaling_adjustment=1,
                    trigger=triggerRules1
                ),
                Rule(
                    name="default-shrink-1",
                    adjustment_type="scale_in",
                    cool_down_minutes=5,
                    scaling_adjustment=1,
                    trigger=triggerRules
                )
            ]
            listResourcesPlansAutoScalingPolicy = [
                ResourcesPlan(
                    period_type="daily",
                    start_time="9:50",
                    end_time="10:20",
                    min_capacity=2,
                    max_capacity=3
                ),
                ResourcesPlan(
                    period_type="daily",
                    start_time="10:20",
                    end_time="12:30",
                    min_capacity=0,
                    max_capacity=2
                )
            ]
            autoScalingPolicyNodeGroups = AutoScalingPolicy(
                auto_scaling_enable=True,
                min_capacity=1,
                max_capacity=3,
                resources_plans=listResourcesPlansAutoScalingPolicy,
                rules=listRulesAutoScalingPolicy,
                exec_scripts=listExecScriptsAutoScalingPolicy
            )
            listNodeGroupsbody = [
                NodeGroupV11(
                    group_name="master_node_default_group",
                    node_num=2,
                    node_size="s3.xlarge.2.linux.bigdata",
                    root_volume_size="480",
                    root_volume_type="SATA",
                    data_volume_type="SATA",
                    data_volume_count=1,
                    data_volume_size=600
                ),
                NodeGroupV11(
                    group_name="core_node_analysis_group",
                    node_num=3,
                    node_size="s3.xlarge.2.linux.bigdata",
                    root_volume_size="480",
                    root_volume_type="SATA",
                    data_volume_type="SATA",
                    data_volume_count=1,
                    data_volume_size=600
                ),
                NodeGroupV11(
                    group_name="task_node_analysis_group",
                    node_num=2,
                    node_size="s3.xlarge.2.linux.bigdata",
                    root_volume_size="480",
                    root_volume_type="SATA",
                    data_volume_type="SATA",
                    data_volume_count=0,
                    data_volume_size=600,
                    auto_scaling_policy=autoScalingPolicyNodeGroups
                )
            ]
            listTagsbody = [
                Tag(
                    key="key1",
                    value="value1"
                ),
                Tag(
                    key="key2",
                    value="value2"
                )
            ]
            listActionStagesBootstrapScripts = [
                "AFTER_SCALE_IN",
                "AFTER_SCALE_OUT"
            ]
            listNodesBootstrapScripts = [
                "master"
            ]
            listActionStagesBootstrapScripts1 = [
                "BEFORE_COMPONENT_FIRST_START",
                "BEFORE_SCALE_IN"
            ]
            listNodesBootstrapScripts1 = [
                "master",
                "core",
                "task"
            ]
            listBootstrapScriptsbody = [
                BootstrapScript(
                    name="Modify os config",
                    uri="s3a://XXX/modify_os_config.sh",
                    parameters="param1 param2",
                    nodes=listNodesBootstrapScripts1,
                    active_master=False,
                    fail_action="continue",
                    before_component_start=True,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts1
                ),
                BootstrapScript(
                    name="Install zepplin",
                    uri="s3a://XXX/zeppelin_install.sh",
                    parameters="",
                    nodes=listNodesBootstrapScripts,
                    active_master=True,
                    fail_action="continue",
                    before_component_start=False,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts
                )
            ]
            listAddJobsbody = [
                AddJobsReqV11(
                    job_type=1,
                    job_name="tenji111",
                    jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
                    arguments="wordcount",
                    input="s3a://bigdata/input/wd_1k/",
                    output="s3a://bigdata/ouput/",
                    job_log="s3a://bigdata/log/",
                    hive_script_path="",
                    hql="",
                    shutdown_cluster=True,
                    submit_job_once_cluster_run=True,
                    file_action=""
                )
            ]
            listComponentListbody = [
                ComponentAmbV11(
                    component_name="Hadoop"
                ),
                ComponentAmbV11(
                    component_name="Spark"
                ),
                ComponentAmbV11(
                    component_name="HBase"
                ),
                ComponentAmbV11(
                    component_name="Hive"
                ),
                ComponentAmbV11(
                    component_name="Presto"
                ),
                ComponentAmbV11(
                    component_name="Tez"
                ),
                ComponentAmbV11(
                    component_name="Hue"
                ),
                ComponentAmbV11(
                    component_name="Loader"
                ),
                ComponentAmbV11(
                    component_name="Flink"
                )
            ]
            request.body = CreateClusterReqV11(
                node_groups=listNodeGroupsbody,
                login_mode=1,
                tags=listTagsbody,
                enterprise_project_id="0",
                log_collection=1,
                cluster_type=0,
                safe_mode=0,
                cluster_master_secret="",
                cluster_admin_secret="",
                bootstrap_scripts=listBootstrapScriptsbody,
                add_jobs=listAddJobsbody,
                security_groups_id="4820eace-66ad-4f2c-8d46-cf340e3029dd",
                subnet_name="subnet-4b44",
                subnet_id="67984709-e15e-4e86-9886-d76712d4e00a",
                vpc_id="4a365717-67be-4f33-80c5-98e98a813af8",
                available_zone_id="d573142f24894ef3bd3664de068b44b0",
                component_list=listComponentListbody,
                vpc="vpc-4b1c",
                data_center="",
                billing_type=12,
                cluster_name="mrs_HEbK",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
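
    The two Rule objects above pair a Trigger with an adjustment: default-expand-1 scales out when YARNMemoryAvailablePercentage has stayed below 25, and default-shrink-1 scales in when it has stayed above 70, in each case only after the comparison has held for evaluation_periods consecutive checks. A minimal sketch of that reading (the rule_fires helper below is our illustration only; the actual evaluation runs inside the MRS service, not in client code):

    # Illustrative sketch only: mimics how a Trigger in the rules above is read.
    def rule_fires(metric_value, threshold, operator, consecutive_periods, evaluation_periods):
        """A rule fires once its comparison has held for evaluation_periods consecutive checks."""
        held = {"GT": metric_value > threshold, "LT": metric_value < threshold}[operator]
        return held and consecutive_periods >= evaluation_periods

    # default-expand-1: scale out after the metric has stayed below 25 for 10 checks.
    print(rule_fires(20.0, 25.0, "LT", consecutive_periods=10, evaluation_periods=10))  # True
    # default-shrink-1: nine checks above 70 is not yet enough to trigger scale-in.
    print(rule_fires(80.0, 70.0, "GT", consecutive_periods=9, evaluation_periods=10))   # False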
    
  • Create a cluster of version MRS 3.1.0 with the cluster HA function enabled, without using the node_groups parameter group; a sketch of the resources_plans window logic follows the example.

    # coding: utf-8
    
    import os

    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v1 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.getenv("CLOUD_SDK_AK")
        sk = os.getenv("CLOUD_SDK_SK")
    
        credentials = BasicCredentials(ak, sk)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listTagsbody = [
                Tag(
                    key="key1",
                    value="value1"
                ),
                Tag(
                    key="key2",
                    value="value2"
                )
            ]
            listActionStagesBootstrapScripts = [
                "AFTER_SCALE_IN",
                "AFTER_SCALE_OUT"
            ]
            listNodesBootstrapScripts = [
                "master"
            ]
            listActionStagesBootstrapScripts1 = [
                "BEFORE_COMPONENT_FIRST_START",
                "BEFORE_SCALE_IN"
            ]
            listNodesBootstrapScripts1 = [
                "master",
                "core",
                "task"
            ]
            listBootstrapScriptsbody = [
                BootstrapScript(
                    name="Modifyosconfig",
                    uri="s3a: //XXX/modify_os_config.sh",
                    parameters="param1param2",
                    nodes=listNodesBootstrapScripts1,
                    active_master=False,
                    fail_action="continue",
                    before_component_start=True,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts1
                ),
                BootstrapScript(
                    name="Installzepplin",
                    uri="s3a: //XXX/zeppelin_install.sh",
                    parameters="",
                    nodes=listNodesBootstrapScripts,
                    active_master=True,
                    fail_action="continue",
                    before_component_start=False,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts
                )
            ]
            listNodesExecScripts = [
                "master",
                "core",
                "task"
            ]
            listNodesExecScripts1 = [
                "master",
                "core",
                "task"
            ]
            listExecScriptsAutoScalingPolicy = [
                ScaleScript(
                    name="before_scale_out",
                    uri="s3a: //XXX/zeppelin_install.sh",
                    parameters="${mrs_scale_node_num}${mrs_scale_type}xxx",
                    nodes=listNodesExecScripts1,
                    active_master=True,
                    fail_action="continue",
                    action_stage="before_scale_out"
                ),
                ScaleScript(
                    name="after_scale_out",
                    uri="s3a: //XXX/storm_rebalance.sh",
                    parameters="${mrs_scale_node_hostnames}${mrs_scale_node_ips}",
                    nodes=listNodesExecScripts,
                    active_master=True,
                    fail_action="continue",
                    action_stage="after_scale_out"
                )
            ]
            triggerRules = Trigger(
                metric_name="YARNMemoryAvailablePercentage",
                metric_value="70",
                comparison_operator="GT",
                evaluation_periods=10
            )
            triggerRules1 = Trigger(
                metric_name="YARNMemoryAvailablePercentage",
                metric_value="25",
                comparison_operator="LT",
                evaluation_periods=10
            )
            listRulesAutoScalingPolicy = [
                Rule(
                    name="default-expand-1",
                    adjustment_type="scale_out",
                    cool_down_minutes=5,
                    scaling_adjustment=1,
                    trigger=triggerRules1
                ),
                Rule(
                    name="default-shrink-1",
                    adjustment_type="scale_in",
                    cool_down_minutes=5,
                    scaling_adjustment=1,
                    trigger=triggerRules
                )
            ]
            listResourcesPlansAutoScalingPolicy = [
                ResourcesPlan(
                    period_type="daily",
                    start_time="9: 50",
                    end_time="10: 20",
                    min_capacity=2,
                    max_capacity=3
                ),
                ResourcesPlan(
                    period_type="daily",
                    start_time="10: 20",
                    end_time="12: 30",
                    min_capacity=0,
                    max_capacity=2
                )
            ]
            autoScalingPolicyTaskNodeGroups = AutoScalingPolicy(
                auto_scaling_enable=True,
                min_capacity=1,
                max_capacity=3,
                resources_plans=listResourcesPlansAutoScalingPolicy,
                rules=listRulesAutoScalingPolicy,
                exec_scripts=listExecScriptsAutoScalingPolicy
            )
            listTaskNodeGroupsbody = [
                TaskNodeGroup(
                    node_num=2,
                    node_size="s3.xlarge.2.linux.bigdata",
                    data_volume_type="SATA",
                    data_volume_count=1,
                    data_volume_size=600,
                    auto_scaling_policy=autoScalingPolicyTaskNodeGroups
                )
            ]
            listAddJobsbody = [
                AddJobsReqV11(
                    job_type=1,
                    job_name="tenji111",
                    jar_path="s3a: //bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
                    arguments="wordcount",
                    input="s3a: //bigdata/input/wd_1k/",
                    output="s3a: //bigdata/ouput/",
                    job_log="s3a: //bigdata/log/",
                    hive_script_path="",
                    hql="",
                    shutdown_cluster=True,
                    submit_job_once_cluster_run=True,
                    file_action=""
                )
            ]
            listComponentListbody = [
                ComponentAmbV11(
                    component_name="Hadoop"
                ),
                ComponentAmbV11(
                    component_name="Spark"
                ),
                ComponentAmbV11(
                    component_name="HBase"
                ),
                ComponentAmbV11(
                    component_name="Hive"
                )
            ]
            request.body = CreateClusterReqV11(
                tags=listTagsbody,
                log_collection=1,
                cluster_type=0,
                safe_mode=0,
                node_public_cert_name="SSHkey-bba1",
                bootstrap_scripts=listBootstrapScriptsbody,
                task_node_groups=listTaskNodeGroupsbody,
                core_data_volume_count=2,
                core_data_volume_size=600,
                core_data_volume_type="SATA",
                master_data_volume_count=1,
                master_data_volume_size=600,
                master_data_volume_type="SATA",
                add_jobs=listAddJobsbody,
                security_groups_id="845bece1-fd22-4b45-7a6e-14338c99ee43",
                subnet_name="subnet",
                subnet_id="815bece0-fd22-4b65-8a6e-15788c99ee43",
                vpc_id="5b7db34d-3534-4a6e-ac94-023cd36aaf74",
                available_zone_id="d573142f24894ef3bd3664de068b44b0",
                component_list=listComponentListbody,
                core_node_size="s1.xlarge.linux.bigdata",
                master_node_size="s3.2xlarge.2.linux.bigdata",
                vpc="vpc1",
                data_center="",
                billing_type=12,
                core_node_num=3,
                master_node_num=2,
                cluster_name="newcluster",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
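
    The resources_plans entries above bound the Task node count inside daily time windows (2 to 3 nodes between 9:50 and 10:20, 0 to 2 between 10:20 and 12:30); outside every window, the policy-level min_capacity/max_capacity apply. A sketch of that lookup under those example windows (the capacity_bounds helper is ours, not an SDK call; the scheduling itself is performed by the MRS service):

    from datetime import time

    # The two windows from the resources_plans above: (start, end, min, max).
    PLANS = [
        (time(9, 50), time(10, 20), 2, 3),
        (time(10, 20), time(12, 30), 0, 2),
    ]

    def capacity_bounds(now, default=(1, 3)):
        """Return the (min, max) Task node counts in effect at `now`."""
        for start, end, lo, hi in PLANS:
            if start <= now < end:
                return (lo, hi)
        return default  # policy-level min_capacity/max_capacity

    print(capacity_bounds(time(10, 0)))  # (2, 3)
    print(capacity_bounds(time(13, 0)))  # (1, 3)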
    
  • Use the node_groups parameter group to create a minimum-specification cluster of version MRS 3.1.0 with the cluster HA function disabled; a sketch that condenses the component_list construction follows the example.

    # coding: utf-8
    
    import os

    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v1 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.getenv("CLOUD_SDK_AK")
        sk = os.getenv("CLOUD_SDK_SK")
    
        credentials = BasicCredentials(ak, sk)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listNodeGroupsbody = [
                NodeGroupV11(
                    group_name="master_node_default_group",
                    node_num=1,
                    node_size="s3.xlarge.2.linux.bigdata",
                    root_volume_size="480",
                    root_volume_type="SATA",
                    data_volume_type="SATA",
                    data_volume_count=1,
                    data_volume_size=600
                ),
                NodeGroupV11(
                    group_name="core_node_analysis_group",
                    node_num=1,
                    node_size="s3.xlarge.2.linux.bigdata",
                    root_volume_size="480",
                    root_volume_type="SATA",
                    data_volume_type="SATA",
                    data_volume_count=1,
                    data_volume_size=600
                )
            ]
            listTagsbody = [
                Tag(
                    key="key1",
                    value="value1"
                ),
                Tag(
                    key="key2",
                    value="value2"
                )
            ]
            listActionStagesBootstrapScripts = [
                "AFTER_SCALE_IN",
                "AFTER_SCALE_OUT"
            ]
            listNodesBootstrapScripts = [
                "master"
            ]
            listActionStagesBootstrapScripts1 = [
                "BEFORE_COMPONENT_FIRST_START",
                "BEFORE_SCALE_IN"
            ]
            listNodesBootstrapScripts1 = [
                "master",
                "core",
                "task"
            ]
            listBootstrapScriptsbody = [
                BootstrapScript(
                    name="Modify os config",
                    uri="s3a://XXX/modify_os_config.sh",
                    parameters="param1 param2",
                    nodes=listNodesBootstrapScripts1,
                    active_master=False,
                    fail_action="continue",
                    before_component_start=True,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts1
                ),
                BootstrapScript(
                    name="Install zepplin",
                    uri="s3a://XXX/zeppelin_install.sh",
                    parameters="",
                    nodes=listNodesBootstrapScripts,
                    active_master=True,
                    fail_action="continue",
                    before_component_start=False,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts
                )
            ]
            listAddJobsbody = [
                AddJobsReqV11(
                    job_type=1,
                    job_name="tenji111",
                    jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
                    arguments="wordcount",
                    input="s3a://bigdata/input/wd_1k/",
                    output="s3a://bigdata/ouput/",
                    job_log="s3a://bigdata/log/",
                    hive_script_path="",
                    hql="",
                    shutdown_cluster=True,
                    submit_job_once_cluster_run=True,
                    file_action=""
                )
            ]
            listComponentListbody = [
                ComponentAmbV11(
                    component_name="Hadoop"
                ),
                ComponentAmbV11(
                    component_name="Spark"
                ),
                ComponentAmbV11(
                    component_name="HBase"
                ),
                ComponentAmbV11(
                    component_name="Hive"
                ),
                ComponentAmbV11(
                    component_name="Presto"
                ),
                ComponentAmbV11(
                    component_name="Tez"
                ),
                ComponentAmbV11(
                    component_name="Hue"
                ),
                ComponentAmbV11(
                    component_name="Loader"
                ),
                ComponentAmbV11(
                    component_name="Flink"
                )
            ]
            request.body = CreateClusterReqV11(
                node_groups=listNodeGroupsbody,
                login_mode=1,
                tags=listTagsbody,
                enterprise_project_id="0",
                log_collection=1,
                cluster_type=0,
                safe_mode=0,
                cluster_master_secret="",
                cluster_admin_secret="",
                bootstrap_scripts=listBootstrapScriptsbody,
                add_jobs=listAddJobsbody,
                security_groups_id="4820eace-66ad-4f2c-8d46-cf340e3029dd",
                subnet_name="subnet-4b44",
                subnet_id="67984709-e15e-4e86-9886-d76712d4e00a",
                vpc_id="4a365717-67be-4f33-80c5-98e98a813af8",
                available_zone_id="d573142f24894ef3bd3664de068b44b0",
                component_list=listComponentListbody,
                vpc="vpc-4b1c",
                data_center="",
                billing_type=12,
                cluster_name="mrs_HEbK",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Without the node_groups parameter group, create a minimum-specification cluster with the cluster HA function disabled. The cluster version is MRS 3.1.0.

    # coding: utf-8
    
    import os

    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v1 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.getenv("CLOUD_SDK_AK")
        sk = os.getenv("CLOUD_SDK_SK")
    
        credentials = BasicCredentials(ak, sk)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listTagsbody = [
                Tag(
                    key="key1",
                    value="value1"
                ),
                Tag(
                    key="key2",
                    value="value2"
                )
            ]
            listActionStagesBootstrapScripts = [
                "AFTER_SCALE_IN",
                "AFTER_SCALE_OUT"
            ]
            listNodesBootstrapScripts = [
                "master"
            ]
            listBootstrapScriptsbody = [
                BootstrapScript(
                    name="Install zepplin",
                    uri="s3a://XXX/zeppelin_install.sh",
                    parameters="",
                    nodes=listNodesBootstrapScripts,
                    active_master=False,
                    fail_action="continue",
                    before_component_start=False,
                    start_time=1667892101,
                    state="IN_PROGRESS",
                    action_stages=listActionStagesBootstrapScripts
                )
            ]
            listAddJobsbody = [
                AddJobsReqV11(
                    job_type=1,
                    job_name="tenji111",
                    jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar",
                    arguments="wordcount",
                    input="s3a://bigdata/input/wd_1k/",
                    output="s3a://bigdata/ouput/",
                    job_log="s3a://bigdata/log/",
                    hive_script_path="",
                    hql="",
                    shutdown_cluster=False,
                    submit_job_once_cluster_run=True,
                    file_action=""
                )
            ]
            listComponentListbody = [
                ComponentAmbV11(
                    component_name="Hadoop"
                ),
                ComponentAmbV11(
                    component_name="Spark"
                ),
                ComponentAmbV11(
                    component_name="HBase"
                ),
                ComponentAmbV11(
                    component_name="Hive"
                ),
                ComponentAmbV11(
                    component_name="Presto"
                ),
                ComponentAmbV11(
                    component_name="Tez"
                ),
                ComponentAmbV11(
                    component_name="Hue"
                ),
                ComponentAmbV11(
                    component_name="Loader"
                ),
                ComponentAmbV11(
                    component_name="Flink"
                )
            ]
            request.body = CreateClusterReqV11(
                login_mode=1,
                tags=listTagsbody,
                enterprise_project_id="0",
                log_collection=1,
                cluster_type=0,
                safe_mode=0,
                cluster_admin_secret="******",
                node_public_cert_name="SSHkey-bba1",
                bootstrap_scripts=listBootstrapScriptsbody,
                core_data_volume_count=1,
                core_data_volume_size=600,
                core_data_volume_type="SATA",
                master_data_volume_count=1,
                master_data_volume_size=600,
                master_data_volume_type="SATA",
                add_jobs=listAddJobsbody,
                security_groups_id="",
                subnet_name="subnet",
                subnet_id="815bece0-fd22-4b65-8a6e-15788c99ee43",
                vpc_id="5b7db34d-3534-4a6e-ac94-023cd36aaf74",
                available_zone_id="d573142f24894ef3bd3664de068b44b0",
                component_list=listComponentListbody,
                core_node_size="s1.xlarge.linux.bigdata",
                master_node_size="s3.2xlarge.2.linux.bigdata",
                vpc="vpc1",
                data_center="",
                billing_type=12,
                core_node_num=1,
                master_node_num=1,
                cluster_name="newcluster",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Using the node_groups parameter group, create a cluster with the cluster HA function enabled. The cluster version is MRS 3.1.0.

    package main
    
    import (
    	"fmt"
    	"os"

    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	var listNodesExecScripts = []string{
            "master",
    	    "core",
    	    "task",
        }
    	var listNodesExecScripts1 = []string{
            "master",
    	    "core",
    	    "task",
        }
    	parametersExecScripts:= "${mrs_scale_node_num} ${mrs_scale_type} xxx"
    	activeMasterExecScripts:= true
    	parametersExecScripts1:= "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}"
    	activeMasterExecScripts1:= true
    	var listExecScriptsAutoScalingPolicy = []model.ScaleScript{
            {
                Name: "before_scale_out",
                Uri: "s3a://XXX/zeppelin_install.sh",
                Parameters: &parametersExecScripts,
                Nodes: listNodesExecScripts1,
                ActiveMaster: &activeMasterExecScripts,
                FailAction: model.GetScaleScriptFailActionEnum().CONTINUE,
                ActionStage: model.GetScaleScriptActionStageEnum().BEFORE_SCALE_OUT,
            },
            {
                Name: "after_scale_out",
                Uri: "s3a://XXX/storm_rebalance.sh",
                Parameters: &parametersExecScripts1,
                Nodes: listNodesExecScripts,
                ActiveMaster: &activeMasterExecScripts1,
                FailAction: model.GetScaleScriptFailActionEnum().CONTINUE,
                ActionStage: model.GetScaleScriptActionStageEnum().AFTER_SCALE_OUT,
            },
        }
    	comparisonOperatorTrigger:= "GT"
    	triggerRules := &model.Trigger{
    		MetricName: "YARNMemoryAvailablePercentage",
    		MetricValue: "70",
    		ComparisonOperator: &comparisonOperatorTrigger,
    		EvaluationPeriods: int32(10),
    	}
    	comparisonOperatorTrigger1:= "LT"
    	triggerRules1 := &model.Trigger{
    		MetricName: "YARNMemoryAvailablePercentage",
    		MetricValue: "25",
    		ComparisonOperator: &comparisonOperatorTrigger1,
    		EvaluationPeriods: int32(10),
    	}
    	var listRulesAutoScalingPolicy = []model.Rule{
            {
                Name: "default-expand-1",
                AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_OUT,
                CoolDownMinutes: int32(5),
                ScalingAdjustment: int32(1),
                Trigger: triggerRules1,
            },
            {
                Name: "default-shrink-1",
                AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_IN,
                CoolDownMinutes: int32(5),
                ScalingAdjustment: int32(1),
                Trigger: triggerRules,
            },
        }
    	var listResourcesPlansAutoScalingPolicy = []model.ResourcesPlan{
            {
                PeriodType: "daily",
                StartTime: "9:50",
                EndTime: "10:20",
                MinCapacity: int32(2),
                MaxCapacity: int32(3),
            },
            {
                PeriodType: "daily",
                StartTime: "10:20",
                EndTime: "12:30",
                MinCapacity: int32(0),
                MaxCapacity: int32(2),
            },
        }
    	autoScalingPolicyNodeGroups := &model.AutoScalingPolicy{
    		AutoScalingEnable: true,
    		MinCapacity: int32(1),
    		MaxCapacity: int32(3),
    		ResourcesPlans: &listResourcesPlansAutoScalingPolicy,
    		Rules: &listRulesAutoScalingPolicy,
    		ExecScripts: &listExecScriptsAutoScalingPolicy,
    	}
    	rootVolumeSizeNodeGroups:= "480"
    	rootVolumeTypeNodeGroups:= "SATA"
    	dataVolumeTypeNodeGroups:= "SATA"
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeSizeNodeGroups:= int32(600)
    	rootVolumeSizeNodeGroups1:= "480"
    	rootVolumeTypeNodeGroups1:= "SATA"
    	dataVolumeTypeNodeGroups1:= "SATA"
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeSizeNodeGroups1:= int32(600)
    	rootVolumeSizeNodeGroups2:= "480"
    	rootVolumeTypeNodeGroups2:= "SATA"
    	dataVolumeTypeNodeGroups2:= "SATA"
    	dataVolumeCountNodeGroups2:= int32(0)
    	dataVolumeSizeNodeGroups2:= int32(600)
    	var listNodeGroupsbody = []model.NodeGroupV11{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(2),
                NodeSize: "s3.xlarge.2.linux.bigdata",
                RootVolumeSize: &rootVolumeSizeNodeGroups,
                RootVolumeType: &rootVolumeTypeNodeGroups,
                DataVolumeType: &dataVolumeTypeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups,
                DataVolumeSize: &dataVolumeSizeNodeGroups,
            },
            {
                GroupName: "core_node_analysis_group",
                NodeNum: int32(3),
                NodeSize: "s3.xlarge.2.linux.bigdata",
                RootVolumeSize: &rootVolumeSizeNodeGroups1,
                RootVolumeType: &rootVolumeTypeNodeGroups1,
                DataVolumeType: &dataVolumeTypeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
                DataVolumeSize: &dataVolumeSizeNodeGroups1,
            },
            {
                GroupName: "task_node_analysis_group",
                NodeNum: int32(2),
                NodeSize: "s3.xlarge.2.linux.bigdata",
                RootVolumeSize: &rootVolumeSizeNodeGroups2,
                RootVolumeType: &rootVolumeTypeNodeGroups2,
                DataVolumeType: &dataVolumeTypeNodeGroups2,
                DataVolumeCount: &dataVolumeCountNodeGroups2,
                DataVolumeSize: &dataVolumeSizeNodeGroups2,
                AutoScalingPolicy: autoScalingPolicyNodeGroups,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "key1",
                Value: "value1",
            },
            {
                Key: "key2",
                Value: "value2",
            },
        }
    	var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN,
    	    model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT,
        }
    	var listNodesBootstrapScripts = []string{
            "master",
        }
    	var listActionStagesBootstrapScripts1 = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().BEFORE_COMPONENT_FIRST_START,
    	    model.GetBootstrapScriptActionStagesEnum().BEFORE_SCALE_IN,
        }
    	var listNodesBootstrapScripts1 = []string{
            "master",
    	    "core",
    	    "task",
        }
    	parametersBootstrapScripts:= "param1 param2"
    	activeMasterBootstrapScripts:= false
    	beforeComponentStartBootstrapScripts:= true
    	startTimeBootstrapScripts:= int64(1667892101)
    	stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	parametersBootstrapScripts1:= ""
    	activeMasterBootstrapScripts1:= true
    	beforeComponentStartBootstrapScripts1:= false
    	startTimeBootstrapScripts1:= int64(1667892101)
    	stateBootstrapScripts1:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	var listBootstrapScriptsbody = []model.BootstrapScript{
            {
                Name: "Modify os config",
                Uri: "s3a://XXX/modify_os_config.sh",
                Parameters: &parametersBootstrapScripts,
                Nodes: listNodesBootstrapScripts1,
                ActiveMaster: &activeMasterBootstrapScripts,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts,
                StartTime: &startTimeBootstrapScripts,
                State: &stateBootstrapScripts,
                ActionStages: &listActionStagesBootstrapScripts1,
            },
            {
                Name: "Install zepplin",
                Uri: "s3a://XXX/zeppelin_install.sh",
                Parameters: &parametersBootstrapScripts1,
                Nodes: listNodesBootstrapScripts,
                ActiveMaster: &activeMasterBootstrapScripts1,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts1,
                StartTime: &startTimeBootstrapScripts1,
                State: &stateBootstrapScripts1,
                ActionStages: &listActionStagesBootstrapScripts,
            },
        }
    	jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar"
    	argumentsAddJobs:= "wordcount"
    	inputAddJobs:= "s3a://bigdata/input/wd_1k/"
    	outputAddJobs:= "s3a://bigdata/ouput/"
    	jobLogAddJobs:= "s3a://bigdata/log/"
    	hiveScriptPathAddJobs:= ""
    	hqlAddJobs:= ""
    	shutdownClusterAddJobs:= true
    	fileActionAddJobs:= ""
    	var listAddJobsbody = []model.AddJobsReqV11{
            {
                JobType: int32(1),
                JobName: "tenji111",
                JarPath: &jarPathAddJobs,
                Arguments: &argumentsAddJobs,
                Input: &inputAddJobs,
                Output: &outputAddJobs,
                JobLog: &jobLogAddJobs,
                HiveScriptPath: &hiveScriptPathAddJobs,
                Hql: &hqlAddJobs,
                ShutdownCluster: &shutdownClusterAddJobs,
                SubmitJobOnceClusterRun: true,
                FileAction: &fileActionAddJobs,
            },
        }
    	var listComponentListbody = []model.ComponentAmbV11{
            {
                ComponentName: "Hadoop",
            },
            {
                ComponentName: "Spark",
            },
            {
                ComponentName: "HBase",
            },
            {
                ComponentName: "Hive",
            },
            {
                ComponentName: "Presto",
            },
            {
                ComponentName: "Tez",
            },
            {
                ComponentName: "Hue",
            },
            {
                ComponentName: "Loader",
            },
            {
                ComponentName: "Flink",
            },
        }
    	loginModeCreateClusterReqV11:= model.GetCreateClusterReqV11LoginModeEnum().E_1
    	enterpriseProjectIdCreateClusterReqV11:= "0"
    	logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1
    	clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0
    	clusterMasterSecretCreateClusterReqV11:= ""
    	clusterAdminSecretCreateClusterReqV11:= ""
    	securityGroupsIdCreateClusterReqV11:= "4820eace-66ad-4f2c-8d46-cf340e3029dd"
    	request.Body = &model.CreateClusterReqV11{
    		NodeGroups: &listNodeGroupsbody,
    		LoginMode: &loginModeCreateClusterReqV11,
    		Tags: &listTagsbody,
    		EnterpriseProjectId: &enterpriseProjectIdCreateClusterReqV11,
    		LogCollection: &logCollectionCreateClusterReqV11,
    		ClusterType: &clusterTypeCreateClusterReqV11,
    		SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0,
    		ClusterMasterSecret: &clusterMasterSecretCreateClusterReqV11,
    		ClusterAdminSecret: &clusterAdminSecretCreateClusterReqV11,
    		BootstrapScripts: &listBootstrapScriptsbody,
    		AddJobs: &listAddJobsbody,
    		SecurityGroupsId: &securityGroupsIdCreateClusterReqV11,
    		SubnetName: "subnet-4b44",
    		SubnetId: "67984709-e15e-4e86-9886-d76712d4e00a",
    		VpcId: "4a365717-67be-4f33-80c5-98e98a813af8",
    		AvailableZoneId: "d573142f24894ef3bd3664de068b44b0",
    		ComponentList: listComponentListbody,
    		Vpc: "vpc-4b1c",
    		DataCenter: "",
    		BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12,
    		ClusterName: "mrs_HEbK",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Without the node_groups parameter group, create a cluster with the cluster HA function enabled. The cluster version is MRS 3.1.0.

    package main
    
    import (
    	"fmt"
    	"os"

    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	var listTagsbody = []model.Tag{
            {
                Key: "key1",
                Value: "value1",
            },
            {
                Key: "key2",
                Value: "value2",
            },
        }
    	var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN,
    	    model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT,
        }
    	var listNodesBootstrapScripts = []string{
            "master",
        }
    	var listActionStagesBootstrapScripts1 = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().BEFORE_COMPONENT_FIRST_START,
    	    model.GetBootstrapScriptActionStagesEnum().BEFORE_SCALE_IN,
        }
    	var listNodesBootstrapScripts1 = []string{
            "master",
    	    "core",
    	    "task",
        }
    	parametersBootstrapScripts:= "param1 param2"
    	activeMasterBootstrapScripts:= false
    	beforeComponentStartBootstrapScripts:= true
    	startTimeBootstrapScripts:= int64(1667892101)
    	stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	parametersBootstrapScripts1:= ""
    	activeMasterBootstrapScripts1:= true
    	beforeComponentStartBootstrapScripts1:= false
    	startTimeBootstrapScripts1:= int64(1667892101)
    	stateBootstrapScripts1:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	var listBootstrapScriptsbody = []model.BootstrapScript{
            {
                Name: "Modifyosconfig",
                Uri: "s3a: //XXX/modify_os_config.sh",
                Parameters: &parametersBootstrapScripts,
                Nodes: listNodesBootstrapScripts1,
                ActiveMaster: &activeMasterBootstrapScripts,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts,
                StartTime: &startTimeBootstrapScripts,
                State: &stateBootstrapScripts,
                ActionStages: &listActionStagesBootstrapScripts1,
            },
            {
                Name: "Installzepplin",
                Uri: "s3a: //XXX/zeppelin_install.sh",
                Parameters: &parametersBootstrapScripts1,
                Nodes: listNodesBootstrapScripts,
                ActiveMaster: &activeMasterBootstrapScripts1,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts1,
                StartTime: &startTimeBootstrapScripts1,
                State: &stateBootstrapScripts1,
                ActionStages: &listActionStagesBootstrapScripts,
            },
        }
    	var listNodesExecScripts = []string{
            "master",
    	    "core",
    	    "task",
        }
    	var listNodesExecScripts1 = []string{
            "master",
    	    "core",
    	    "task",
        }
    	parametersExecScripts:= "${mrs_scale_node_num} ${mrs_scale_type} xxx"
    	activeMasterExecScripts:= true
    	parametersExecScripts1:= "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}"
    	activeMasterExecScripts1:= true
    	var listExecScriptsAutoScalingPolicy = []model.ScaleScript{
            {
                Name: "before_scale_out",
                Uri: "s3a: //XXX/zeppelin_install.sh",
                Parameters: &parametersExecScripts,
                Nodes: listNodesExecScripts1,
                ActiveMaster: &activeMasterExecScripts,
                FailAction: model.GetScaleScriptFailActionEnum().CONTINUE,
                ActionStage: model.GetScaleScriptActionStageEnum().BEFORE_SCALE_OUT,
            },
            {
                Name: "after_scale_out",
                Uri: "s3a: //XXX/storm_rebalance.sh",
                Parameters: &parametersExecScripts1,
                Nodes: listNodesExecScripts,
                ActiveMaster: &activeMasterExecScripts1,
                FailAction: model.GetScaleScriptFailActionEnum().CONTINUE,
                ActionStage: model.GetScaleScriptActionStageEnum().AFTER_SCALE_OUT,
            },
        }
    	comparisonOperatorTrigger:= "GT"
    	triggerRules := &model.Trigger{
    		MetricName: "YARNMemoryAvailablePercentage",
    		MetricValue: "70",
    		ComparisonOperator: &comparisonOperatorTrigger,
    		EvaluationPeriods: int32(10),
    	}
    	comparisonOperatorTrigger1:= "LT"
    	triggerRules1 := &model.Trigger{
    		MetricName: "YARNMemoryAvailablePercentage",
    		MetricValue: "25",
    		ComparisonOperator: &comparisonOperatorTrigger1,
    		EvaluationPeriods: int32(10),
    	}
    	var listRulesAutoScalingPolicy = []model.Rule{
            {
                Name: "default-expand-1",
                AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_OUT,
                CoolDownMinutes: int32(5),
                ScalingAdjustment: int32(1),
                Trigger: triggerRules1,
            },
            {
                Name: "default-shrink-1",
                AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_IN,
                CoolDownMinutes: int32(5),
                ScalingAdjustment: int32(1),
                Trigger: triggerRules,
            },
        }
    	var listResourcesPlansAutoScalingPolicy = []model.ResourcesPlan{
            {
                PeriodType: "daily",
                StartTime: "9: 50",
                EndTime: "10: 20",
                MinCapacity: int32(2),
                MaxCapacity: int32(3),
            },
            {
                PeriodType: "daily",
                StartTime: "10: 20",
                EndTime: "12: 30",
                MinCapacity: int32(0),
                MaxCapacity: int32(2),
            },
        }
    	autoScalingPolicyTaskNodeGroups := &model.AutoScalingPolicy{
    		AutoScalingEnable: true,
    		MinCapacity: int32(1),
    		MaxCapacity: int32(3),
    		ResourcesPlans: &listResourcesPlansAutoScalingPolicy,
    		Rules: &listRulesAutoScalingPolicy,
    		ExecScripts: &listExecScriptsAutoScalingPolicy,
    	}
    	var listTaskNodeGroupsbody = []model.TaskNodeGroup{
            {
                NodeNum: int32(2),
                NodeSize: "s3.xlarge.2.linux.bigdata",
                DataVolumeType: model.GetTaskNodeGroupDataVolumeTypeEnum().SATA,
                DataVolumeCount: int32(1),
                DataVolumeSize: int32(600),
                AutoScalingPolicy: autoScalingPolicyTaskNodeGroups,
            },
        }
    	jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar"
    	argumentsAddJobs:= "wordcount"
    	inputAddJobs:= "s3a://bigdata/input/wd_1k/"
    	outputAddJobs:= "s3a://bigdata/ouput/"
    	jobLogAddJobs:= "s3a://bigdata/log/"
    	hiveScriptPathAddJobs:= ""
    	hqlAddJobs:= ""
    	shutdownClusterAddJobs:= true
    	fileActionAddJobs:= ""
    	var listAddJobsbody = []model.AddJobsReqV11{
            {
                JobType: int32(1),
                JobName: "tenji111",
                JarPath: &jarPathAddJobs,
                Arguments: &argumentsAddJobs,
                Input: &inputAddJobs,
                Output: &outputAddJobs,
                JobLog: &jobLogAddJobs,
                HiveScriptPath: &hiveScriptPathAddJobs,
                Hql: &hqlAddJobs,
                ShutdownCluster: &shutdownClusterAddJobs,
                SubmitJobOnceClusterRun: true,
                FileAction: &fileActionAddJobs,
            },
        }
    	var listComponentListbody = []model.ComponentAmbV11{
            {
                ComponentName: "Hadoop",
            },
            {
                ComponentName: "Spark",
            },
            {
                ComponentName: "HBase",
            },
            {
                ComponentName: "Hive",
            },
        }
    	logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1
    	clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0
    	nodePublicCertNameCreateClusterReqV11:= "SSHkey-bba1"
    	coreDataVolumeCountCreateClusterReqV11:= int32(2)
    	coreDataVolumeSizeCreateClusterReqV11:= int32(600)
    	coreDataVolumeTypeCreateClusterReqV11:= model.GetCreateClusterReqV11CoreDataVolumeTypeEnum().SATA
    	masterDataVolumeCountCreateClusterReqV11:= model.GetCreateClusterReqV11MasterDataVolumeCountEnum().E_1
    	masterDataVolumeSizeCreateClusterReqV11:= int32(600)
    	masterDataVolumeTypeCreateClusterReqV11:= model.GetCreateClusterReqV11MasterDataVolumeTypeEnum().SATA
    	securityGroupsIdCreateClusterReqV11:= "845bece1-fd22-4b45-7a6e-14338c99ee43"
    	coreNodeSizeCreateClusterReqV11:= "s1.xlarge.linux.bigdata"
    	masterNodeSizeCreateClusterReqV11:= "s3.2xlarge.2.linux.bigdata"
    	coreNodeNumCreateClusterReqV11:= int32(3)
    	masterNodeNumCreateClusterReqV11:= int32(2)
    	request.Body = &model.CreateClusterReqV11{
    		Tags: &listTagsbody,
    		LogCollection: &logCollectionCreateClusterReqV11,
    		ClusterType: &clusterTypeCreateClusterReqV11,
    		SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0,
    		NodePublicCertName: &nodePublicCertNameCreateClusterReqV11,
    		BootstrapScripts: &listBootstrapScriptsbody,
    		TaskNodeGroups: &listTaskNodeGroupsbody,
    		CoreDataVolumeCount: &coreDataVolumeCountCreateClusterReqV11,
    		CoreDataVolumeSize: &coreDataVolumeSizeCreateClusterReqV11,
    		CoreDataVolumeType: &coreDataVolumeTypeCreateClusterReqV11,
    		MasterDataVolumeCount: &masterDataVolumeCountCreateClusterReqV11,
    		MasterDataVolumeSize: &masterDataVolumeSizeCreateClusterReqV11,
    		MasterDataVolumeType: &masterDataVolumeTypeCreateClusterReqV11,
    		AddJobs: &listAddJobsbody,
    		SecurityGroupsId: &securityGroupsIdCreateClusterReqV11,
    		SubnetName: "subnet",
    		SubnetId: "815bece0-fd22-4b65-8a6e-15788c99ee43",
    		VpcId: "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
    		AvailableZoneId: "d573142f24894ef3bd3664de068b44b0",
    		ComponentList: listComponentListbody,
    		CoreNodeSize: &coreNodeSizeCreateClusterReqV11,
    		MasterNodeSize: &masterNodeSizeCreateClusterReqV11,
    		Vpc: "vpc1",
    		DataCenter: "",
    		BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12,
    		CoreNodeNum: &coreNodeNumCreateClusterReqV11,
    		MasterNodeNum: &masterNodeNumCreateClusterReqV11,
    		ClusterName: "newcluster",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Using the node_groups parameter group, create a minimum-specification cluster with the cluster HA function disabled. The cluster version is MRS 3.1.0.

    package main
    
    import (
    	"fmt"
    	"os"

    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	rootVolumeSizeNodeGroups:= "480"
    	rootVolumeTypeNodeGroups:= "SATA"
    	dataVolumeTypeNodeGroups:= "SATA"
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeSizeNodeGroups:= int32(600)
    	rootVolumeSizeNodeGroups1:= "480"
    	rootVolumeTypeNodeGroups1:= "SATA"
    	dataVolumeTypeNodeGroups1:= "SATA"
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeSizeNodeGroups1:= int32(600)
    	var listNodeGroupsbody = []model.NodeGroupV11{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(1),
                NodeSize: "s3.xlarge.2.linux.bigdata",
                RootVolumeSize: &rootVolumeSizeNodeGroups,
                RootVolumeType: &rootVolumeTypeNodeGroups,
                DataVolumeType: &dataVolumeTypeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups,
                DataVolumeSize: &dataVolumeSizeNodeGroups,
            },
            {
                GroupName: "core_node_analysis_group",
                NodeNum: int32(1),
                NodeSize: "s3.xlarge.2.linux.bigdata",
                RootVolumeSize: &rootVolumeSizeNodeGroups1,
                RootVolumeType: &rootVolumeTypeNodeGroups1,
                DataVolumeType: &dataVolumeTypeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
                DataVolumeSize: &dataVolumeSizeNodeGroups1,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "key1",
                Value: "value1",
            },
            {
                Key: "key2",
                Value: "value2",
            },
        }
    	var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN,
    	    model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT,
        }
    	var listNodesBootstrapScripts = []string{
            "master",
        }
    	var listActionStagesBootstrapScripts1 = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().BEFORE_COMPONENT_FIRST_START,
    	    model.GetBootstrapScriptActionStagesEnum().BEFORE_SCALE_IN,
        }
    	var listNodesBootstrapScripts1 = []string{
            "master",
    	    "core",
    	    "task",
        }
    	parametersBootstrapScripts:= "param1 param2"
    	activeMasterBootstrapScripts:= false
    	beforeComponentStartBootstrapScripts:= true
    	startTimeBootstrapScripts:= int64(1667892101)
    	stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	parametersBootstrapScripts1:= ""
    	activeMasterBootstrapScripts1:= true
    	beforeComponentStartBootstrapScripts1:= false
    	startTimeBootstrapScripts1:= int64(1667892101)
    	stateBootstrapScripts1:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	var listBootstrapScriptsbody = []model.BootstrapScript{
            {
                Name: "Modify os config",
                Uri: "s3a://XXX/modify_os_config.sh",
                Parameters: &parametersBootstrapScripts,
                Nodes: listNodesBootstrapScripts1,
                ActiveMaster: &activeMasterBootstrapScripts,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts,
                StartTime: &startTimeBootstrapScripts,
                State: &stateBootstrapScripts,
                ActionStages: &listActionStagesBootstrapScripts1,
            },
            {
                Name: "Install zepplin",
                Uri: "s3a://XXX/zeppelin_install.sh",
                Parameters: &parametersBootstrapScripts1,
                Nodes: listNodesBootstrapScripts,
                ActiveMaster: &activeMasterBootstrapScripts1,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts1,
                StartTime: &startTimeBootstrapScripts1,
                State: &stateBootstrapScripts1,
                ActionStages: &listActionStagesBootstrapScripts,
            },
        }
    	jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar"
    	argumentsAddJobs:= "wordcount"
    	inputAddJobs:= "s3a://bigdata/input/wd_1k/"
    	outputAddJobs:= "s3a://bigdata/ouput/"
    	jobLogAddJobs:= "s3a://bigdata/log/"
    	hiveScriptPathAddJobs:= ""
    	hqlAddJobs:= ""
    	shutdownClusterAddJobs:= true
    	fileActionAddJobs:= ""
    	var listAddJobsbody = []model.AddJobsReqV11{
            {
                JobType: int32(1),
                JobName: "tenji111",
                JarPath: &jarPathAddJobs,
                Arguments: &argumentsAddJobs,
                Input: &inputAddJobs,
                Output: &outputAddJobs,
                JobLog: &jobLogAddJobs,
                HiveScriptPath: &hiveScriptPathAddJobs,
                Hql: &hqlAddJobs,
                ShutdownCluster: &shutdownClusterAddJobs,
                SubmitJobOnceClusterRun: true,
                FileAction: &fileActionAddJobs,
            },
        }
    	var listComponentListbody = []model.ComponentAmbV11{
            {
                ComponentName: "Hadoop",
            },
            {
                ComponentName: "Spark",
            },
            {
                ComponentName: "HBase",
            },
            {
                ComponentName: "Hive",
            },
            {
                ComponentName: "Presto",
            },
            {
                ComponentName: "Tez",
            },
            {
                ComponentName: "Hue",
            },
            {
                ComponentName: "Loader",
            },
            {
                ComponentName: "Flink",
            },
        }
    	loginModeCreateClusterReqV11:= model.GetCreateClusterReqV11LoginModeEnum().E_1
    	enterpriseProjectIdCreateClusterReqV11:= "0"
    	logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1
    	clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0
    	clusterMasterSecretCreateClusterReqV11:= ""
    	clusterAdminSecretCreateClusterReqV11:= ""
    	securityGroupsIdCreateClusterReqV11:= "4820eace-66ad-4f2c-8d46-cf340e3029dd"
    	request.Body = &model.CreateClusterReqV11{
    		NodeGroups: &listNodeGroupsbody,
    		LoginMode: &loginModeCreateClusterReqV11,
    		Tags: &listTagsbody,
    		EnterpriseProjectId: &enterpriseProjectIdCreateClusterReqV11,
    		LogCollection: &logCollectionCreateClusterReqV11,
    		ClusterType: &clusterTypeCreateClusterReqV11,
    		SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0,
    		ClusterMasterSecret: &clusterMasterSecretCreateClusterReqV11,
    		ClusterAdminSecret: &clusterAdminSecretCreateClusterReqV11,
    		BootstrapScripts: &listBootstrapScriptsbody,
    		AddJobs: &listAddJobsbody,
    		SecurityGroupsId: &securityGroupsIdCreateClusterReqV11,
    		SubnetName: "subnet-4b44",
    		SubnetId: "67984709-e15e-4e86-9886-d76712d4e00a",
    		VpcId: "4a365717-67be-4f33-80c5-98e98a813af8",
    		AvailableZoneId: "d573142f24894ef3bd3664de068b44b0",
    		ComponentList: listComponentListbody,
    		Vpc: "vpc-4b1c",
    		DataCenter: "",
    		BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12,
    		ClusterName: "mrs_HEbK",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Without the node_groups parameter group, create a minimum-specification cluster with the cluster HA function disabled. The cluster version is MRS 3.1.0.

    package main
    
    import (
    	"fmt"
    	"os"

    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	var listTagsbody = []model.Tag{
            {
                Key: "key1",
                Value: "value1",
            },
            {
                Key: "key2",
                Value: "value2",
            },
        }
    	var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{
            model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN,
    	    model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT,
        }
    	var listNodesBootstrapScripts = []string{
            "master",
        }
    	parametersBootstrapScripts:= ""
    	activeMasterBootstrapScripts:= false
    	beforeComponentStartBootstrapScripts:= false
    	startTimeBootstrapScripts:= int64(1667892101)
    	stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS
    	var listBootstrapScriptsbody = []model.BootstrapScript{
            {
                Name: "Install zepplin",
                Uri: "s3a://XXX/zeppelin_install.sh",
                Parameters: &parametersBootstrapScripts,
                Nodes: listNodesBootstrapScripts,
                ActiveMaster: &activeMasterBootstrapScripts,
                FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
                BeforeComponentStart: &beforeComponentStartBootstrapScripts,
                StartTime: &startTimeBootstrapScripts,
                State: &stateBootstrapScripts,
                ActionStages: &listActionStagesBootstrapScripts,
            },
        }
    	jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar"
    	argumentsAddJobs:= "wordcount"
    	inputAddJobs:= "s3a://bigdata/input/wd_1k/"
    	outputAddJobs:= "s3a://bigdata/ouput/"
    	jobLogAddJobs:= "s3a://bigdata/log/"
    	hiveScriptPathAddJobs:= ""
    	hqlAddJobs:= ""
    	shutdownClusterAddJobs:= false
    	fileActionAddJobs:= ""
    	var listAddJobsbody = []model.AddJobsReqV11{
            {
                JobType: int32(1),
                JobName: "tenji111",
                JarPath: &jarPathAddJobs,
                Arguments: &argumentsAddJobs,
                Input: &inputAddJobs,
                Output: &outputAddJobs,
                JobLog: &jobLogAddJobs,
                HiveScriptPath: &hiveScriptPathAddJobs,
                Hql: &hqlAddJobs,
                ShutdownCluster: &shutdownClusterAddJobs,
                SubmitJobOnceClusterRun: true,
                FileAction: &fileActionAddJobs,
            },
        }
    	var listComponentListbody = []model.ComponentAmbV11{
            {
                ComponentName: "Hadoop",
            },
            {
                ComponentName: "Spark",
            },
            {
                ComponentName: "HBase",
            },
            {
                ComponentName: "Hive",
            },
            {
                ComponentName: "Presto",
            },
            {
                ComponentName: "Tez",
            },
            {
                ComponentName: "Hue",
            },
            {
                ComponentName: "Loader",
            },
            {
                ComponentName: "Flink",
            },
        }
    	loginModeCreateClusterReqV11:= model.GetCreateClusterReqV11LoginModeEnum().E_1
    	enterpriseProjectIdCreateClusterReqV11:= "0"
    	logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1
    	clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0
    	clusterAdminSecretCreateClusterReqV11:= "******"
    	nodePublicCertNameCreateClusterReqV11:= "SSHkey-bba1"
    	coreDataVolumeCountCreateClusterReqV11:= int32(1)
    	coreDataVolumeSizeCreateClusterReqV11:= int32(600)
    	coreDataVolumeTypeCreateClusterReqV11:= model.GetCreateClusterReqV11CoreDataVolumeTypeEnum().SATA
    	masterDataVolumeCountCreateClusterReqV11:= model.GetCreateClusterReqV11MasterDataVolumeCountEnum().E_1
    	masterDataVolumeSizeCreateClusterReqV11:= int32(600)
    	masterDataVolumeTypeCreateClusterReqV11:= model.GetCreateClusterReqV11MasterDataVolumeTypeEnum().SATA
    	securityGroupsIdCreateClusterReqV11:= ""
    	coreNodeSizeCreateClusterReqV11:= "s1.xlarge.linux.bigdata"
    	masterNodeSizeCreateClusterReqV11:= "s3.2xlarge.2.linux.bigdata"
    	coreNodeNumCreateClusterReqV11:= int32(1)
    	masterNodeNumCreateClusterReqV11:= int32(1)
    	request.Body = &model.CreateClusterReqV11{
    		LoginMode: &loginModeCreateClusterReqV11,
    		Tags: &listTagsbody,
    		EnterpriseProjectId: &enterpriseProjectIdCreateClusterReqV11,
    		LogCollection: &logCollectionCreateClusterReqV11,
    		ClusterType: &clusterTypeCreateClusterReqV11,
    		SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0,
    		ClusterAdminSecret: &clusterAdminSecretCreateClusterReqV11,
    		NodePublicCertName: &nodePublicCertNameCreateClusterReqV11,
    		BootstrapScripts: &listBootstrapScriptsbody,
    		CoreDataVolumeCount: &coreDataVolumeCountCreateClusterReqV11,
    		CoreDataVolumeSize: &coreDataVolumeSizeCreateClusterReqV11,
    		CoreDataVolumeType: &coreDataVolumeTypeCreateClusterReqV11,
    		MasterDataVolumeCount: &masterDataVolumeCountCreateClusterReqV11,
    		MasterDataVolumeSize: &masterDataVolumeSizeCreateClusterReqV11,
    		MasterDataVolumeType: &masterDataVolumeTypeCreateClusterReqV11,
    		AddJobs: &listAddJobsbody,
    		SecurityGroupsId: &securityGroupsIdCreateClusterReqV11,
    		SubnetName: "subnet",
    		SubnetId: "815bece0-fd22-4b65-8a6e-15788c99ee43",
    		VpcId: "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
    		AvailableZoneId: "d573142f24894ef3bd3664de068b44b0",
    		ComponentList: listComponentListbody,
    		CoreNodeSize: &coreNodeSizeCreateClusterReqV11,
    		MasterNodeSize: &masterNodeSizeCreateClusterReqV11,
    		Vpc: "vpc1",
    		DataCenter: "",
    		BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12,
    		CoreNodeNum: &coreNodeNumCreateClusterReqV11,
    		MasterNodeNum: &masterNodeNumCreateClusterReqV11,
    		ClusterName: "newcluster",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    

For SDK code examples in more programming languages, see the Sample Code tab of API Explorer, which can automatically generate the corresponding SDK code examples.
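
A successful call returns the ID of the new cluster in the response body. The sketch below shows one way that ID might be used to look the cluster up afterwards. It is a minimal sketch, not part of this API's reference: it assumes the Go SDK exposes a ClusterId field on the create response and a ShowClusterDetails method on the same client, so verify both names against your SDK version before relying on them.

    package main

    import (
    	"fmt"
    	"os"

    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region"
    )

    func main() {
    	// Build the client exactly as in the examples above.
    	auth := basic.NewCredentialsBuilder().
    		WithAk(os.Getenv("CLOUD_SDK_AK")).
    		WithSk(os.Getenv("CLOUD_SDK_SK")).
    		Build()

    	client := mrs.NewMrsClient(
    		mrs.MrsClientBuilder().
    			WithRegion(region.ValueOf("<YOUR REGION>")).
    			WithCredential(auth).
    			Build())

    	// In a real program this value would come from response.ClusterId after
    	// client.CreateCluster(request) succeeds (assumed field name).
    	clusterId := "<CLUSTER ID RETURNED BY CreateCluster>"

    	// ShowClusterDetails and its request type are assumed here; check your
    	// SDK version for the exact names.
    	request := &model.ShowClusterDetailsRequest{ClusterId: clusterId}
    	response, err := client.ShowClusterDetails(request)
    	if err == nil {
    		fmt.Printf("%+v\n", response) // the details include the cluster state
    	} else {
    		fmt.Println(err)
    	}
    }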

Status Codes

Status Code

Description

200

The cluster is created successfully.

Error Codes

See Error Codes.
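
To see how these error codes surface in the Go examples above, the sketch below unwraps the error returned by a client call such as client.CreateCluster(request). It is a minimal sketch that assumes service-side failures arrive as *sdkerr.ServiceResponseError from the SDK's core package (verify the type and its fields against your SDK version); the four fields correspond to the status code, request ID, error code, and error message printed in the Python examples.

    package main

    import (
    	"errors"
    	"fmt"

    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/sdkerr"
    )

    // inspectErr prints the service-side failure details if err carries them,
    // and falls back to printing the raw error otherwise.
    func inspectErr(err error) {
    	var respErr *sdkerr.ServiceResponseError // assumed error type
    	if errors.As(err, &respErr) {
    		fmt.Println(respErr.StatusCode)   // HTTP status code, e.g. 400
    		fmt.Println(respErr.RequestId)    // request ID, useful for support tickets
    		fmt.Println(respErr.ErrorCode)    // service error code from the reference above
    		fmt.Println(respErr.ErrorMessage) // human-readable description
    	} else if err != nil {
    		fmt.Println(err) // network or client-side failure
    	}
    }

    func main() {
    	// Pass the error returned by client.CreateCluster(request) here.
    	inspectErr(nil)
    }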
