Creating a Cluster and Running a Job

Function
This API creates an MRS cluster and submits a job to it. The API is incompatible with Sahara.
(You are advised to use the V2 APIs, Creating a Cluster and Creating a Cluster and Submitting a Job, to perform these operations instead.)
Up to 10 clusters can be created concurrently.
Before using this API, obtain the following resource information:
- Create or query a VPC and subnet through the VPC service.
- Create or query a key pair through the ECS service.
- Obtain the region information from Endpoints.
- See Components Supported by MRS for the MRS versions and the components each version supports.
Constraints
- A cluster can be logged in to with either a password or a key pair; exactly one of the two must be configured, as sketched below.
  - Password mode: set cluster_master_secret, the root password for accessing cluster nodes.
  - Key pair mode: set node_public_cert_name, the name of the key pair.
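For illustration, a sketch of how the two modes map onto request fields. Both fragments are partial request bodies, the values are placeholders, and the // annotations are not valid JSON in a real request:

```json
// Key pair login (login_mode = 1, the default)
{ "login_mode" : 1, "node_public_cert_name" : "SSHkey-xxxx" }

// Password login (login_mode = 0)
{ "login_mode" : 0, "cluster_master_secret" : "<root-password>" }
```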
- Disk parameters can be expressed either by volume_type and volume_size, or by the multi-disk parameters (master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, and core_data_volume_count). Configure exactly one of the two groups, as sketched below.
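A sketch of the two mutually exclusive disk configurations as partial request bodies (types, sizes, and counts are illustrative; configure only one group per request; the // annotations are not valid JSON):

```json
// Option 1: one disk description shared by Master and Core nodes
{ "volume_type" : "SATA", "volume_size" : 600 }

// Option 2: multi-disk parameters, configured per node role (recommended)
{
  "master_data_volume_type" : "SATA",
  "master_data_volume_size" : 600,
  "master_data_volume_count" : 1,
  "core_data_volume_type" : "SATA",
  "core_data_volume_size" : 600,
  "core_data_volume_count" : 2
}
```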
Calling Method
See Calling APIs.

URI
POST /v1.1/{project_id}/run-job-flow
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID. For how to obtain it, see Obtaining a Project ID. Value range: 1 to 64 characters consisting of letters and digits. |
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| cluster_version | Yes | String | Cluster version, for example MRS 3.1.0. |
| cluster_name | Yes | String | Cluster name; it must be unique. 1 to 64 characters consisting of letters, digits, underscores (_), and hyphens (-). |
| master_node_num | No | Integer | Number of Master nodes. Set it to 2 to enable cluster HA or to 1 to disable it. MRS 3.x does not support setting this parameter to 1. |
| core_node_num | No | Integer | Number of Core nodes. Value range: 1 to 500. The default maximum is 500; if you need more Core nodes, apply for a larger quota. |
| billing_type | Yes | Integer | Billing mode of the cluster. Value: 12 (pay-per-use). Only pay-per-use clusters can be created through this API. |
| data_center | Yes | String | Region of the cluster. For details, see Endpoints. |
| vpc | Yes | String | Name of the VPC where the subnet is located. Obtain it from the VPC list on the Virtual Private Cloud console. |
| master_node_size | No | String | Instance flavor of Master nodes, for example {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} is a flavor visible on the MRS purchase page, such as c3.4xlarge.2. The supported host flavors are determined by CPU, memory, and disk together. For details, see the ECS and BMS specifications used by MRS. You are advised to obtain the flavors supported by the target region and version from the cluster creation page on the MRS console. |
| core_node_size | No | String | Instance flavor of Core nodes, in the same {ECS_FLAVOR_NAME}.linux.bigdata format as master_node_size. For details, see the ECS and BMS specifications used by MRS. You are advised to obtain the flavors supported by the target region and version from the cluster creation page on the MRS console. |
| component_list | Yes | Array of ComponentAmbV11 objects | List of service components to install. |
| available_zone_id | Yes | String | AZ ID. Only some AZ IDs are published here; for other regions, obtain the AZ IDs from the AZ query API. |
| vpc_id | Yes | String | ID of the VPC where the subnet is located. Obtain it from the VPC list on the Virtual Private Cloud console. |
| subnet_id | Yes | String | Subnet ID, obtained from the VPC console. Constraints: at least one of subnet_id and subnet_name must be set. If both are set but they do not match the same subnet, cluster creation fails; fill in these parameters carefully. subnet_id is recommended. |
| subnet_name | Yes | String | Subnet name, obtained from the VPC console. Constraints: at least one of subnet_id and subnet_name must be set. If both are set but they do not match the same subnet, cluster creation fails. If only subnet_name is set and the VPC contains several subnets with that name, the first subnet of that name on the VPC platform is used. subnet_id is recommended. |
| security_groups_id | No | String | ID of the security group of the cluster. |
| add_jobs | No | Array of AddJobsReqV11 objects | Jobs submitted together with cluster creation. The current version supports only one job; the array must not contain more than one entry. |
| volume_size | No | Integer | Data disk size of Master and Core nodes, in GB. You can add disks at creation time to expand storage; choose a size that fits your workload. Value range: 100 to 32000. This parameter is not recommended; see the description of volume_type. |
| volume_type | No | String | Disk type of Master and Core nodes. Value range: SATA, SAS, SSD, and GPSSD. Disks can be described either by volume_type and volume_size or by the multi-disk parameters; if both groups are present, volume_type and volume_size take precedence. The multi-disk parameters are recommended. |
| master_data_volume_type | No | String | Multi-disk parameter: data disk type of Master nodes. Value range: SATA, SAS, SSD, and GPSSD. |
| master_data_volume_size | No | Integer | Multi-disk parameter: data disk size of Master nodes, in GB. Pass a number only, without the unit. Value range: 100 to 32000. |
| master_data_volume_count | No | Integer | Multi-disk parameter: number of data disks on Master nodes. The only allowed value is 1. Default: 1. |
| core_data_volume_type | No | String | Multi-disk parameter: data disk type of Core nodes. Value range: SATA, SAS, SSD, and GPSSD. |
| core_data_volume_size | No | Integer | Multi-disk parameter: data disk size of Core nodes, in GB. Pass a number only, without the unit. Value range: 100 to 32000. |
| core_data_volume_count | No | Integer | Multi-disk parameter: number of data disks on Core nodes. Value range: 1 to 20. |
| task_node_groups | No | Array of TaskNodeGroup objects | Task node group list. The array must not contain more than one entry. |
| bootstrap_scripts | No | Array of BootstrapScript objects | Bootstrap action scripts. |
| node_public_cert_name | No | String | Name of the key pair used to log in to cluster nodes. Required when login_mode is set to 1. |
| cluster_admin_secret | No | String | Password of the MRS Manager administrator. The password must meet the MRS password complexity rules. |
| cluster_master_secret | No | String | root password for accessing cluster nodes. Required when login_mode is set to 0. The password must meet the MRS password complexity rules. |
| safe_mode | Yes | Integer | Running mode of the MRS cluster. Value range: 0 (normal cluster) and 1 (security cluster). |
| cluster_type | No | Integer | Cluster type. Value range: 0 (analysis cluster) and 1 (streaming cluster). Hybrid clusters cannot be created through this API. Default: 0. |
| log_collection | No | Integer | Whether to collect logs when cluster creation fails. Value range: 0 (do not collect) and 1 (collect; an OBS bucket is created solely for storing the logs of the failed creation). Default: 1. |
| enterprise_project_id | No | String | Enterprise project ID to bind to the cluster at creation. Default: 0, which means the default enterprise project. To obtain the ID, see the id field in the enterprise_project data structure of the "Querying the Enterprise Project List" response in the Enterprise Management API Reference. |
| tags | No | Array of Tag objects | Cluster tags. A cluster can have at most 20 tags, and tag keys must be unique. A tag key or value can contain letters of any language, digits, spaces, and the characters _ . : = + - @; it must not start or end with a space and must not start with _sys_. |
| login_mode | No | Integer | Cluster login mode. Value range: 0 (password) and 1 (key pair). When set to 0, the request body must contain cluster_master_secret; when set to 1, it must contain node_public_cert_name. Default: 1. |
| node_groups | No | Array of NodeGroupV11 objects | Node group list. Configure either this parameter or the following group of parameters, not both: master_node_num, master_node_size, core_node_num, core_node_size, master_data_volume_type, master_data_volume_size, master_data_volume_count, core_data_volume_type, core_data_volume_size, core_data_volume_count, volume_type, volume_size, and task_node_groups. |
ComponentAmbV11

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| component_name | Yes | String | Component name, for example Hadoop or Spark. 1 to 64 characters consisting of letters, digits, underscores (_), and hyphens (-). |
AddJobsReqV11

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| job_type | Yes | Integer | Job type code. Value range: 1 (MapReduce), 2 (Spark), 3 (Hive Script), 4 (HiveQL, currently not supported), 5 (DistCp), 6 (Spark Script), and 7 (Spark SQL, currently not supported). |
| job_name | Yes | String | Job name. 1 to 64 characters consisting of letters, digits, underscores (_), and hyphens (-). Different jobs may share a name, but this is not recommended. |
| jar_path | No | String | Path of the program JAR or SQL file to execute. The file can be stored in HDFS or OBS; the path format differs between the two file systems. |
| arguments | No | String | Key arguments of the program. They are specified by functions inside the user program; MRS only passes them through. At most 150,000 characters; must not contain the special characters ;&#124;&>'<$; may be empty. |
| input | No | String | Data input path. The file can be stored in HDFS or OBS; the path format differs between the two file systems. At most 1023 characters; must not contain ;&#124;&>'<$; may be empty. |
| output | No | String | Data output path. The file can be stored in HDFS or OBS; the path format differs between the two file systems. If the path does not exist, the system creates it automatically. At most 1023 characters; must not contain ;&#124;&>'<$; may be empty. |
| job_log | No | String | Path for storing job logs, which record the job running status. The file can be stored in HDFS or OBS; the path format differs between the two file systems. At most 1023 characters; must not contain ;&#124;&>'<$; may be empty. |
| hive_script_path | No | String | Path of the SQL program. Required only by Spark Script and Hive Script jobs. |
| hql | No | String | HQL script statement. |
| shutdown_cluster | No | Boolean | Whether to delete the cluster after the job completes. true: delete; false: keep. |
| submit_job_once_cluster_run | Yes | Boolean | Whether to submit the job during cluster creation. Set it to true here. |
| file_action | No | String | Data import/export action. Value range: import, export. |
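For reference, a minimal add_jobs entry distilled from the full request examples below; it submits the sample MapReduce JAR and keeps the cluster after the job finishes (the bucket and paths are placeholders):

```json
"add_jobs" : [ {
  "job_type" : 1,
  "job_name" : "wordcount_demo",
  "jar_path" : "s3a://<bucket>/program/hadoop-mapreduce-examples-XXX.jar",
  "arguments" : "wordcount",
  "input" : "s3a://<bucket>/input/",
  "output" : "s3a://<bucket>/output/",
  "job_log" : "s3a://<bucket>/log/",
  "shutdown_cluster" : false,
  "submit_job_once_cluster_run" : true,
  "file_action" : ""
} ]
```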
TaskNodeGroup

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| node_num | Yes | Integer | Number of Task nodes. Value range: 0 to 500. The total of Core and Task nodes must not exceed 500. |
| node_size | Yes | String | Instance flavor of Task nodes, for example {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} is a flavor visible on the MRS purchase page, such as c3.4xlarge.2. For details, see the ECS and BMS specifications used by MRS. You are advised to obtain the flavors supported by the target region and version from the cluster creation page on the MRS console. |
| data_volume_type | Yes | String | Data disk type of Task nodes. Value range: SATA, SAS, SSD, and so on. |
| data_volume_count | Yes | Integer | Number of data disks on Task nodes. Value range: 0 to 20. |
| data_volume_size | Yes | Integer | Data disk size of Task nodes, in GB. Pass a number only, without the unit. Value range: 100 to 32000. |
| auto_scaling_policy | No | AutoScalingPolicy object | Auto scaling rule information. |
BootstrapScript

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Name of the bootstrap action script. It must be unique within a cluster and must not start with a space. 1 to 64 characters consisting of letters, digits, underscores (_), and hyphens (-). |
| uri | Yes | String | Path of the bootstrap action script: an OBS bucket path or a local path on the VM. |
| parameters | No | String | Parameters of the bootstrap action script. |
| nodes | Yes | Array of strings | Types of the nodes on which the script runs: master, core, and task. The node types must be in lowercase. |
| active_master | No | Boolean | Whether the script runs only on the active Master node. Default: false. |
| fail_action | Yes | String | Whether to continue executing subsequent scripts and creating the cluster after the script fails. Value range: continue, errorout. You are advised to set it to continue during debugging so that the cluster can continue to be installed and started regardless of whether the script succeeds. Default: continue. |
| before_component_start | No | Boolean | When the script runs. Value range: true (before component start) and false (after component start). Default: false. |
| start_time | No | Long | Execution time of a single bootstrap action script. |
| state | No | String | Running state of a single bootstrap action script, for example IN_PROGRESS (see the request examples below). |
| action_stages | No | Array of strings | Stages at which the script runs, for example BEFORE_COMPONENT_FIRST_START, BEFORE_SCALE_IN, AFTER_SCALE_IN, and AFTER_SCALE_OUT (see the request examples below). |
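A sketch of a single bootstrap script entry, based on the request examples below, that runs on all node types before components start (the URI and parameters are placeholders):

```json
"bootstrap_scripts" : [ {
  "name" : "Modify os config",
  "uri" : "s3a://<bucket>/modify_os_config.sh",
  "parameters" : "param1 param2",
  "nodes" : [ "master", "core", "task" ],
  "active_master" : false,
  "before_component_start" : true,
  "fail_action" : "continue",
  "action_stages" : [ "BEFORE_COMPONENT_FIRST_START" ]
} ]
```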
Tag

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| key | Yes | String | Tag key. See the character rules under the tags parameter above. |
| value | Yes | String | Tag value. See the character rules under the tags parameter above. |
NodeGroupV11

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| group_name | Yes | String | Node group name. Value range: master_node_default_group, core_node_analysis_group, core_node_streaming_group, task_node_analysis_group, and task_node_streaming_group. |
| node_num | Yes | Integer | Number of nodes in the group. Value range: 0 to 500. The total of Core and Task nodes must not exceed 500. |
| node_size | Yes | String | Instance flavor of the nodes, for example {ECS_FLAVOR_NAME}.linux.bigdata, where {ECS_FLAVOR_NAME} is a flavor visible on the MRS purchase page, such as c3.4xlarge.2. The supported host flavors are determined by CPU, memory, and disk together. For details, see the ECS and BMS specifications used by MRS. You are advised to obtain the flavors supported by the target region and version from the cluster creation page on the MRS console. |
| root_volume_size | No | String | System disk size of the nodes. |
| root_volume_type | No | String | System disk type of the nodes. Value range: SATA, SAS, SSD, and so on. |
| data_volume_type | No | String | Data disk type of the nodes. Value range: SATA, SAS, SSD, and so on. |
| data_volume_count | No | Integer | Number of data disks on the nodes. Value range: 0 to 20. |
| data_volume_size | No | Integer | Data disk size of the nodes, in GB. Value range: 100 to 32000. |
| auto_scaling_policy | No | AutoScalingPolicy object | Auto scaling rule information. Valid only when group_name is task_node_analysis_group or task_node_streaming_group. |
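A sketch of the node_groups alternative to the flat master_*/core_* parameters, using the group names and an illustrative flavor from the request examples below:

```json
"node_groups" : [ {
  "group_name" : "master_node_default_group",
  "node_num" : 2,
  "node_size" : "s3.xlarge.2.linux.bigdata",
  "root_volume_size" : 480,
  "root_volume_type" : "SATA",
  "data_volume_type" : "SATA",
  "data_volume_count" : 1,
  "data_volume_size" : 600
}, {
  "group_name" : "core_node_analysis_group",
  "node_num" : 3,
  "node_size" : "s3.xlarge.2.linux.bigdata",
  "root_volume_size" : 480,
  "root_volume_type" : "SATA",
  "data_volume_type" : "SATA",
  "data_volume_count" : 1,
  "data_volume_size" : 600
} ]
```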
AutoScalingPolicy

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| auto_scaling_enable | Yes | Boolean | Whether the auto scaling rule is enabled. true/false. |
| min_capacity | Yes | Integer | Minimum number of nodes kept in the node group. Value range: 0 to 500. |
| max_capacity | Yes | Integer | Maximum number of nodes in the node group. Value range: 0 to 500. |
| resources_plans | No | Array of ResourcesPlan objects | Resource plan list; if empty, resource plans are disabled. When auto scaling is enabled, at least one of resources_plans and rules must be configured. At most 5 entries. |
| rules | No | Array of Rule objects | Auto scaling rule list. When auto scaling is enabled, at least one of resources_plans and rules must be configured. At most 10 entries. |
| exec_scripts | No | Array of ScaleScript objects | Custom automation script list for auto scaling; if empty, automation scripts are disabled. At most 10 entries. |
ResourcesPlan

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| period_type | Yes | String | Period type of the resource plan. Currently only daily is allowed. |
| start_time | Yes | String | Start time of the resource plan, in hour:minute format, between 0:00 and 23:59. |
| end_time | Yes | String | End time of the resource plan, in the same format as start_time. It must not be earlier than start_time, and the interval between the two must be at least 30 minutes. |
| min_capacity | Yes | Integer | Minimum number of nodes kept in the node group within the resource plan. Value range: 0 to 500. |
| max_capacity | Yes | Integer | Maximum number of nodes kept in the node group within the resource plan. Value range: 0 to 500. |
Rule

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Name of the auto scaling rule. 1 to 64 characters consisting of letters, digits, underscores (_), and hyphens (-). It must be unique within a node group. |
| description | No | String | Description of the auto scaling rule. At most 1024 characters. |
| adjustment_type | Yes | String | Adjustment type of the auto scaling rule. Value range: scale_out (add nodes) and scale_in (remove nodes). |
| cool_down_minutes | Yes | Integer | Cooldown duration, in minutes, during which the cluster performs no further auto scaling after the rule is triggered. Value range: 0 to 10080 (the number of minutes in a week). |
| scaling_adjustment | Yes | Integer | Number of nodes adjusted in a single operation. Value range: 1 to 100. |
| trigger | Yes | Trigger object | Condition that triggers the rule. |
Trigger

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| metric_name | Yes | String | Metric name; the trigger condition is evaluated against the value of this metric. For the value range, see the auto scaling metric list. |
| metric_value | Yes | String | Metric threshold that triggers the condition. Only integers or numbers with at most two decimal places are allowed. |
| comparison_operator | No | String | Comparison operator applied to the metric. Value range: LT (less than), GT (greater than), LTE (less than or equal to), and GTE (greater than or equal to). |
| evaluation_periods | Yes | Integer | Number of consecutive periods (5 minutes each) for which the threshold must be met. Value range: 1 to 200. |
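How AutoScalingPolicy, ResourcesPlan, Rule, and Trigger nest; a sketch assembled from the request examples below (thresholds, times, and capacities are illustrative):

```json
"auto_scaling_policy" : {
  "auto_scaling_enable" : true,
  "min_capacity" : 1,
  "max_capacity" : 3,
  "resources_plans" : [ {
    "period_type" : "daily",
    "start_time" : "9:50",
    "end_time" : "10:20",
    "min_capacity" : 2,
    "max_capacity" : 3
  } ],
  "rules" : [ {
    "name" : "default-expand-1",
    "adjustment_type" : "scale_out",
    "cool_down_minutes" : 5,
    "scaling_adjustment" : 1,
    "trigger" : {
      "metric_name" : "YARNMemoryAvailablePercentage",
      "metric_value" : "25",
      "comparison_operator" : "LT",
      "evaluation_periods" : 10
    }
  } ]
}
```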
ScaleScript

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Name of the custom automation script. It must be unique within a cluster. 1 to 64 characters consisting of letters, digits, underscores (_), and hyphens (-). |
| uri | Yes | String | Path of the custom automation script: an OBS bucket path or a local path on the VM. |
| parameters | No | String | Parameters of the custom automation script, separated by spaces. Predefined system variables such as ${mrs_scale_node_num}, ${mrs_scale_type}, ${mrs_scale_node_hostnames}, and ${mrs_scale_node_ips} (shown in the request examples below) can be passed in; other user-defined parameters are used in the same way as in an ordinary shell script, separated by spaces. |
| nodes | Yes | Array of strings | Names of the node groups on which the script runs. For non-custom clusters, the node types Master, Core, and Task can also be used. |
| active_master | No | Boolean | Whether the script runs only on the active Master node. Default: false. |
| fail_action | Yes | String | Whether to continue executing subsequent scripts and creating the cluster after the script fails. Value range: continue, errorout. You are advised to set it to continue during debugging so that the cluster can continue to be installed and started regardless of whether the script succeeds. Because a scale-in cannot be rolled back, fail_action must be set to continue for scripts executed after a scale-in. Default: continue. |
| action_stage | Yes | String | When the script runs. Value range: before_scale_out, before_scale_in, after_scale_out, and after_scale_in. |
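A sketch of an exec_scripts entry that passes the predefined scale-out variables to a custom script after a scale-out, taken from the request examples below (the script URI is a placeholder):

```json
"exec_scripts" : [ {
  "name" : "after_scale_out",
  "uri" : "s3a://<bucket>/storm_rebalance.sh",
  "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
  "nodes" : [ "master", "core", "task" ],
  "active_master" : true,
  "action_stage" : "after_scale_out",
  "fail_action" : "continue"
} ]
```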
Response Parameters
Status code: 200

| Parameter | Type | Description |
|---|---|---|
| result | Boolean | Operation result. true: the operation succeeded; false: the operation failed. |
| msg | String | System message; may be empty. |
| cluster_id | String | Cluster ID returned by the system after the cluster is created. |
Example Requests
- Create a cluster with cluster HA enabled, using the node_groups parameter group. Cluster version: MRS 3.1.0.
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

```json
{
  "billing_type" : 12,
  "data_center" : "",
  "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
  "cluster_name" : "mrs_HEbK",
  "cluster_version" : "MRS 3.1.0",
  "safe_mode" : 0,
  "cluster_type" : 0,
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" },
    { "component_name" : "Presto" },
    { "component_name" : "Tez" },
    { "component_name" : "Hue" },
    { "component_name" : "Loader" },
    { "component_name" : "Flink" }
  ],
  "vpc" : "vpc-4b1c",
  "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
  "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
  "subnet_name" : "subnet-4b44",
  "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
  "enterprise_project_id" : "0",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "node_groups" : [ {
    "group_name" : "master_node_default_group",
    "node_num" : 2,
    "node_size" : "s3.xlarge.2.linux.bigdata",
    "root_volume_size" : 480,
    "root_volume_type" : "SATA",
    "data_volume_type" : "SATA",
    "data_volume_count" : 1,
    "data_volume_size" : 600
  }, {
    "group_name" : "core_node_analysis_group",
    "node_num" : 3,
    "node_size" : "s3.xlarge.2.linux.bigdata",
    "root_volume_size" : 480,
    "root_volume_type" : "SATA",
    "data_volume_type" : "SATA",
    "data_volume_count" : 1,
    "data_volume_size" : 600
  }, {
    "group_name" : "task_node_analysis_group",
    "node_num" : 2,
    "node_size" : "s3.xlarge.2.linux.bigdata",
    "root_volume_size" : 480,
    "root_volume_type" : "SATA",
    "data_volume_type" : "SATA",
    "data_volume_count" : 0,
    "data_volume_size" : 600,
    "auto_scaling_policy" : {
      "auto_scaling_enable" : true,
      "min_capacity" : 1,
      "max_capacity" : 3,
      "resources_plans" : [
        { "period_type" : "daily", "start_time" : "9:50", "end_time" : "10:20", "min_capacity" : 2, "max_capacity" : 3 },
        { "period_type" : "daily", "start_time" : "10:20", "end_time" : "12:30", "min_capacity" : 0, "max_capacity" : 2 }
      ],
      "exec_scripts" : [ {
        "name" : "before_scale_out",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "${mrs_scale_node_num} ${mrs_scale_type} xxx",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : true,
        "action_stage" : "before_scale_out",
        "fail_action" : "continue"
      }, {
        "name" : "after_scale_out",
        "uri" : "s3a://XXX/storm_rebalance.sh",
        "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : true,
        "action_stage" : "after_scale_out",
        "fail_action" : "continue"
      } ],
      "rules" : [ {
        "name" : "default-expand-1",
        "adjustment_type" : "scale_out",
        "cool_down_minutes" : 5,
        "scaling_adjustment" : 1,
        "trigger" : { "metric_name" : "YARNMemoryAvailablePercentage", "metric_value" : "25", "comparison_operator" : "LT", "evaluation_periods" : 10 }
      }, {
        "name" : "default-shrink-1",
        "adjustment_type" : "scale_in",
        "cool_down_minutes" : 5,
        "scaling_adjustment" : 1,
        "trigger" : { "metric_name" : "YARNMemoryAvailablePercentage", "metric_value" : "70", "comparison_operator" : "GT", "evaluation_periods" : 10 }
      } ]
    }
  } ],
  "login_mode" : 1,
  "cluster_master_secret" : "",
  "cluster_admin_secret" : "",
  "log_collection" : 1,
  "add_jobs" : [ {
    "job_type" : 1,
    "job_name" : "tenji111",
    "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
    "arguments" : "wordcount",
    "input" : "s3a://bigdata/input/wd_1k/",
    "output" : "s3a://bigdata/ouput/",
    "job_log" : "s3a://bigdata/log/",
    "shutdown_cluster" : true,
    "file_action" : "",
    "submit_job_once_cluster_run" : true,
    "hql" : "",
    "hive_script_path" : ""
  } ],
  "bootstrap_scripts" : [ {
    "name" : "Modify os config",
    "uri" : "s3a://XXX/modify_os_config.sh",
    "parameters" : "param1 param2",
    "nodes" : [ "master", "core", "task" ],
    "active_master" : false,
    "before_component_start" : true,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
  }, {
    "name" : "Install zepplin",
    "uri" : "s3a://XXX/zeppelin_install.sh",
    "parameters" : "",
    "nodes" : [ "master" ],
    "active_master" : true,
    "before_component_start" : false,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
  } ]
}
```
- Create a cluster with cluster HA enabled, without the node_groups parameter group. Cluster version: MRS 3.1.0.
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

```json
{
  "billing_type" : 12,
  "data_center" : "",
  "master_node_num" : 2,
  "master_node_size" : "s3.2xlarge.2.linux.bigdata",
  "core_node_num" : 3,
  "core_node_size" : "s1.xlarge.linux.bigdata",
  "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
  "cluster_name" : "newcluster",
  "vpc" : "vpc1",
  "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
  "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
  "subnet_name" : "subnet",
  "security_groups_id" : "845bece1-fd22-4b45-7a6e-14338c99ee43",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "cluster_version" : "MRS 3.1.0",
  "cluster_type" : 0,
  "master_data_volume_type" : "SATA",
  "master_data_volume_size" : 600,
  "master_data_volume_count" : 1,
  "core_data_volume_type" : "SATA",
  "core_data_volume_size" : 600,
  "core_data_volume_count" : 2,
  "node_public_cert_name" : "SSHkey-bba1",
  "safe_mode" : 0,
  "log_collection" : 1,
  "task_node_groups" : [ {
    "node_num" : 2,
    "node_size" : "s3.xlarge.2.linux.bigdata",
    "data_volume_type" : "SATA",
    "data_volume_count" : 1,
    "data_volume_size" : 600,
    "auto_scaling_policy" : {
      "auto_scaling_enable" : true,
      "min_capacity" : 1,
      "max_capacity" : 3,
      "resources_plans" : [
        { "period_type" : "daily", "start_time" : "9:50", "end_time" : "10:20", "min_capacity" : 2, "max_capacity" : 3 },
        { "period_type" : "daily", "start_time" : "10:20", "end_time" : "12:30", "min_capacity" : 0, "max_capacity" : 2 }
      ],
      "exec_scripts" : [ {
        "name" : "before_scale_out",
        "uri" : "s3a://XXX/zeppelin_install.sh",
        "parameters" : "${mrs_scale_node_num} ${mrs_scale_type} xxx",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : true,
        "action_stage" : "before_scale_out",
        "fail_action" : "continue"
      }, {
        "name" : "after_scale_out",
        "uri" : "s3a://XXX/storm_rebalance.sh",
        "parameters" : "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}",
        "nodes" : [ "master", "core", "task" ],
        "active_master" : true,
        "action_stage" : "after_scale_out",
        "fail_action" : "continue"
      } ],
      "rules" : [ {
        "name" : "default-expand-1",
        "adjustment_type" : "scale_out",
        "cool_down_minutes" : 5,
        "scaling_adjustment" : 1,
        "trigger" : { "metric_name" : "YARNMemoryAvailablePercentage", "metric_value" : "25", "comparison_operator" : "LT", "evaluation_periods" : 10 }
      }, {
        "name" : "default-shrink-1",
        "adjustment_type" : "scale_in",
        "cool_down_minutes" : 5,
        "scaling_adjustment" : 1,
        "trigger" : { "metric_name" : "YARNMemoryAvailablePercentage", "metric_value" : "70", "comparison_operator" : "GT", "evaluation_periods" : 10 }
      } ]
    }
  } ],
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" }
  ],
  "add_jobs" : [ {
    "job_type" : 1,
    "job_name" : "tenji111",
    "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
    "arguments" : "wordcount",
    "input" : "s3a://bigdata/input/wd_1k/",
    "output" : "s3a://bigdata/ouput/",
    "job_log" : "s3a://bigdata/log/",
    "shutdown_cluster" : true,
    "file_action" : "",
    "submit_job_once_cluster_run" : true,
    "hql" : "",
    "hive_script_path" : ""
  } ],
  "bootstrap_scripts" : [ {
    "name" : "Modify os config",
    "uri" : "s3a://XXX/modify_os_config.sh",
    "parameters" : "param1 param2",
    "nodes" : [ "master", "core", "task" ],
    "active_master" : false,
    "before_component_start" : true,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
  }, {
    "name" : "Install zepplin",
    "uri" : "s3a://XXX/zeppelin_install.sh",
    "parameters" : "",
    "nodes" : [ "master" ],
    "active_master" : true,
    "before_component_start" : false,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
  } ]
}
```
- Create a minimum-specification cluster with cluster HA disabled, using the node_groups parameter group. Cluster version: MRS 3.1.0.
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

```json
{
  "billing_type" : 12,
  "data_center" : "",
  "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
  "cluster_name" : "mrs_HEbK",
  "cluster_version" : "MRS 3.1.0",
  "safe_mode" : 0,
  "cluster_type" : 0,
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" },
    { "component_name" : "Presto" },
    { "component_name" : "Tez" },
    { "component_name" : "Hue" },
    { "component_name" : "Loader" },
    { "component_name" : "Flink" }
  ],
  "vpc" : "vpc-4b1c",
  "vpc_id" : "4a365717-67be-4f33-80c5-98e98a813af8",
  "subnet_id" : "67984709-e15e-4e86-9886-d76712d4e00a",
  "subnet_name" : "subnet-4b44",
  "security_groups_id" : "4820eace-66ad-4f2c-8d46-cf340e3029dd",
  "enterprise_project_id" : "0",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "node_groups" : [ {
    "group_name" : "master_node_default_group",
    "node_num" : 1,
    "node_size" : "s3.xlarge.2.linux.bigdata",
    "root_volume_size" : 480,
    "root_volume_type" : "SATA",
    "data_volume_type" : "SATA",
    "data_volume_count" : 1,
    "data_volume_size" : 600
  }, {
    "group_name" : "core_node_analysis_group",
    "node_num" : 1,
    "node_size" : "s3.xlarge.2.linux.bigdata",
    "root_volume_size" : 480,
    "root_volume_type" : "SATA",
    "data_volume_type" : "SATA",
    "data_volume_count" : 1,
    "data_volume_size" : 600
  } ],
  "login_mode" : 1,
  "cluster_master_secret" : "",
  "cluster_admin_secret" : "",
  "log_collection" : 1,
  "add_jobs" : [ {
    "job_type" : 1,
    "job_name" : "tenji111",
    "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar",
    "arguments" : "wordcount",
    "input" : "s3a://bigdata/input/wd_1k/",
    "output" : "s3a://bigdata/ouput/",
    "job_log" : "s3a://bigdata/log/",
    "shutdown_cluster" : true,
    "file_action" : "",
    "submit_job_once_cluster_run" : true,
    "hql" : "",
    "hive_script_path" : ""
  } ],
  "bootstrap_scripts" : [ {
    "name" : "Modify os config",
    "uri" : "s3a://XXX/modify_os_config.sh",
    "parameters" : "param1 param2",
    "nodes" : [ "master", "core", "task" ],
    "active_master" : false,
    "before_component_start" : true,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ]
  }, {
    "name" : "Install zepplin",
    "uri" : "s3a://XXX/zeppelin_install.sh",
    "parameters" : "",
    "nodes" : [ "master" ],
    "active_master" : true,
    "before_component_start" : false,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
  } ]
}
```
- Create a minimum-specification cluster with cluster HA disabled, without the node_groups parameter group. Cluster version: MRS 3.1.0.
POST https://{endpoint}/v1.1/{project_id}/run-job-flow

```json
{
  "billing_type" : 12,
  "data_center" : "",
  "master_node_num" : 1,
  "master_node_size" : "s3.2xlarge.2.linux.bigdata",
  "core_node_num" : 1,
  "core_node_size" : "s1.xlarge.linux.bigdata",
  "available_zone_id" : "d573142f24894ef3bd3664de068b44b0",
  "cluster_name" : "newcluster",
  "vpc" : "vpc1",
  "vpc_id" : "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
  "subnet_id" : "815bece0-fd22-4b65-8a6e-15788c99ee43",
  "subnet_name" : "subnet",
  "security_groups_id" : "",
  "enterprise_project_id" : "0",
  "tags" : [
    { "key" : "key1", "value" : "value1" },
    { "key" : "key2", "value" : "value2" }
  ],
  "cluster_version" : "MRS 3.1.0",
  "cluster_type" : 0,
  "master_data_volume_type" : "SATA",
  "master_data_volume_size" : 600,
  "master_data_volume_count" : 1,
  "core_data_volume_type" : "SATA",
  "core_data_volume_size" : 600,
  "core_data_volume_count" : 1,
  "login_mode" : 1,
  "node_public_cert_name" : "SSHkey-bba1",
  "safe_mode" : 0,
  "cluster_admin_secret" : "******",
  "log_collection" : 1,
  "component_list" : [
    { "component_name" : "Hadoop" },
    { "component_name" : "Spark" },
    { "component_name" : "HBase" },
    { "component_name" : "Hive" },
    { "component_name" : "Presto" },
    { "component_name" : "Tez" },
    { "component_name" : "Hue" },
    { "component_name" : "Loader" },
    { "component_name" : "Flink" }
  ],
  "add_jobs" : [ {
    "job_type" : 1,
    "job_name" : "tenji111",
    "jar_path" : "s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar",
    "arguments" : "wordcount",
    "input" : "s3a://bigdata/input/wd_1k/",
    "output" : "s3a://bigdata/ouput/",
    "job_log" : "s3a://bigdata/log/",
    "shutdown_cluster" : false,
    "file_action" : "",
    "submit_job_once_cluster_run" : true,
    "hql" : "",
    "hive_script_path" : ""
  } ],
  "bootstrap_scripts" : [ {
    "name" : "Install zepplin",
    "uri" : "s3a://XXX/zeppelin_install.sh",
    "parameters" : "",
    "nodes" : [ "master" ],
    "active_master" : false,
    "before_component_start" : false,
    "start_time" : 1667892101,
    "state" : "IN_PROGRESS",
    "fail_action" : "continue",
    "action_stages" : [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ]
  } ]
}
```
Example Response
Status code: 200
The cluster is created successfully.

```json
{
  "cluster_id" : "da1592c2-bb7e-468d-9ac9-83246e95447a",
  "result" : true,
  "msg" : ""
}
```
SDK Sample Code
The following are the SDK sample code snippets.
- Create a cluster with cluster HA enabled, using the node_groups parameter group. Cluster version: MRS 3.1.0.
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
import com.huaweicloud.sdk.mrs.v1.*;
import com.huaweicloud.sdk.mrs.v1.model.*;

import java.util.List;
import java.util.ArrayList;

public class CreateClusterSolution {

    public static void main(String[] args) {
        // Hard-coding or storing the AK and SK in plaintext is a serious security risk.
        // Store them in ciphertext in configuration files or environment variables and decrypt them when used.
        // In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK;
        // set both variables in the local environment before running the example.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
            .withProjectId(projectId)
            .withAk(ak)
            .withSk(sk);

        MrsClient client = MrsClient.newBuilder()
            .withCredential(auth)
            .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
            .build();

        CreateClusterRequest request = new CreateClusterRequest();
        CreateClusterReqV11 body = new CreateClusterReqV11();

        // Node types on which the auto scaling scripts run.
        List<String> listExecScriptsNodes = new ArrayList<>();
        listExecScriptsNodes.add("master");
        listExecScriptsNodes.add("core");
        listExecScriptsNodes.add("task");
        List<String> listExecScriptsNodes1 = new ArrayList<>();
        listExecScriptsNodes1.add("master");
        listExecScriptsNodes1.add("core");
        listExecScriptsNodes1.add("task");

        // Custom automation scripts executed before and after scale-out.
        List<ScaleScript> listAutoScalingPolicyExecScripts = new ArrayList<>();
        listAutoScalingPolicyExecScripts.add(new ScaleScript()
            .withName("before_scale_out")
            .withUri("s3a://XXX/zeppelin_install.sh")
            .withParameters("${mrs_scale_node_num} ${mrs_scale_type} xxx")
            .withNodes(listExecScriptsNodes1)
            .withActiveMaster(true)
            .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
            .withActionStage(ScaleScript.ActionStageEnum.fromValue("before_scale_out")));
        listAutoScalingPolicyExecScripts.add(new ScaleScript()
            .withName("after_scale_out")
            .withUri("s3a://XXX/storm_rebalance.sh")
            .withParameters("${mrs_scale_node_hostnames} ${mrs_scale_node_ips}")
            .withNodes(listExecScriptsNodes)
            .withActiveMaster(true)
            .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
            .withActionStage(ScaleScript.ActionStageEnum.fromValue("after_scale_out")));

        // Triggers and rules for scale-out and scale-in.
        Trigger triggerRules = new Trigger();
        triggerRules.withMetricName("YARNMemoryAvailablePercentage")
            .withMetricValue("70")
            .withComparisonOperator("GT")
            .withEvaluationPeriods(10);
        Trigger triggerRules1 = new Trigger();
        triggerRules1.withMetricName("YARNMemoryAvailablePercentage")
            .withMetricValue("25")
            .withComparisonOperator("LT")
            .withEvaluationPeriods(10);
        List<Rule> listAutoScalingPolicyRules = new ArrayList<>();
        listAutoScalingPolicyRules.add(new Rule()
            .withName("default-expand-1")
            .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_out"))
            .withCoolDownMinutes(5)
            .withScalingAdjustment(1)
            .withTrigger(triggerRules1));
        listAutoScalingPolicyRules.add(new Rule()
            .withName("default-shrink-1")
            .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_in"))
            .withCoolDownMinutes(5)
            .withScalingAdjustment(1)
            .withTrigger(triggerRules));

        // Time-based resource plans.
        List<ResourcesPlan> listAutoScalingPolicyResourcesPlans = new ArrayList<>();
        listAutoScalingPolicyResourcesPlans.add(new ResourcesPlan()
            .withPeriodType("daily")
            .withStartTime("9:50")
            .withEndTime("10:20")
            .withMinCapacity(2)
            .withMaxCapacity(3));
        listAutoScalingPolicyResourcesPlans.add(new ResourcesPlan()
            .withPeriodType("daily")
            .withStartTime("10:20")
            .withEndTime("12:30")
            .withMinCapacity(0)
            .withMaxCapacity(2));

        AutoScalingPolicy autoScalingPolicyNodeGroups = new AutoScalingPolicy();
        autoScalingPolicyNodeGroups.withAutoScalingEnable(true)
            .withMinCapacity(1)
            .withMaxCapacity(3)
            .withResourcesPlans(listAutoScalingPolicyResourcesPlans)
            .withRules(listAutoScalingPolicyRules)
            .withExecScripts(listAutoScalingPolicyExecScripts);

        // Node groups: 2 Master nodes (HA enabled), 3 Core nodes, and an auto-scaled Task group.
        List<NodeGroupV11> listbodyNodeGroups = new ArrayList<>();
        listbodyNodeGroups.add(new NodeGroupV11()
            .withGroupName("master_node_default_group")
            .withNodeNum(2)
            .withNodeSize("s3.xlarge.2.linux.bigdata")
            .withRootVolumeSize("480")
            .withRootVolumeType("SATA")
            .withDataVolumeType("SATA")
            .withDataVolumeCount(1)
            .withDataVolumeSize(600));
        listbodyNodeGroups.add(new NodeGroupV11()
            .withGroupName("core_node_analysis_group")
            .withNodeNum(3)
            .withNodeSize("s3.xlarge.2.linux.bigdata")
            .withRootVolumeSize("480")
            .withRootVolumeType("SATA")
            .withDataVolumeType("SATA")
            .withDataVolumeCount(1)
            .withDataVolumeSize(600));
        listbodyNodeGroups.add(new NodeGroupV11()
            .withGroupName("task_node_analysis_group")
            .withNodeNum(2)
            .withNodeSize("s3.xlarge.2.linux.bigdata")
            .withRootVolumeSize("480")
            .withRootVolumeType("SATA")
            .withDataVolumeType("SATA")
            .withDataVolumeCount(0)
            .withDataVolumeSize(600)
            .withAutoScalingPolicy(autoScalingPolicyNodeGroups));

        List<Tag> listbodyTags = new ArrayList<>();
        listbodyTags.add(new Tag().withKey("key1").withValue("value1"));
        listbodyTags.add(new Tag().withKey("key2").withValue("value2"));

        // Bootstrap action scripts and the stages at which they run.
        List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
        listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
        listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
        List<String> listBootstrapScriptsNodes = new ArrayList<>();
        listBootstrapScriptsNodes.add("master");
        List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages1 = new ArrayList<>();
        listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_COMPONENT_FIRST_START"));
        listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_SCALE_IN"));
        List<String> listBootstrapScriptsNodes1 = new ArrayList<>();
        listBootstrapScriptsNodes1.add("master");
        listBootstrapScriptsNodes1.add("core");
        listBootstrapScriptsNodes1.add("task");
        List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
        listbodyBootstrapScripts.add(new BootstrapScript()
            .withName("Modify os config")
            .withUri("s3a://XXX/modify_os_config.sh")
            .withParameters("param1 param2")
            .withNodes(listBootstrapScriptsNodes1)
            .withActiveMaster(false)
            .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
            .withBeforeComponentStart(true)
            .withStartTime(1667892101L)
            .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
            .withActionStages(listBootstrapScriptsActionStages1));
        listbodyBootstrapScripts.add(new BootstrapScript()
            .withName("Install zepplin")
            .withUri("s3a://XXX/zeppelin_install.sh")
            .withParameters("")
            .withNodes(listBootstrapScriptsNodes)
            .withActiveMaster(true)
            .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
            .withBeforeComponentStart(false)
            .withStartTime(1667892101L)
            .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
            .withActionStages(listBootstrapScriptsActionStages));

        // Job submitted together with cluster creation.
        List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
        listbodyAddJobs.add(new AddJobsReqV11()
            .withJobType(1)
            .withJobName("tenji111")
            .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar")
            .withArguments("wordcount")
            .withInput("s3a://bigdata/input/wd_1k/")
            .withOutput("s3a://bigdata/ouput/")
            .withJobLog("s3a://bigdata/log/")
            .withHiveScriptPath("")
            .withHql("")
            .withShutdownCluster(true)
            .withSubmitJobOnceClusterRun(true)
            .withFileAction(""));

        List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hadoop"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Spark"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("HBase"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hive"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Presto"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Tez"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hue"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Loader"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Flink"));

        body.withNodeGroups(listbodyNodeGroups);
        body.withLoginMode(CreateClusterReqV11.LoginModeEnum.NUMBER_1);
        body.withTags(listbodyTags);
        body.withEnterpriseProjectId("0");
        body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
        body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
        body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
        body.withClusterMasterSecret("");
        body.withClusterAdminSecret("");
        body.withBootstrapScripts(listbodyBootstrapScripts);
        body.withAddJobs(listbodyAddJobs);
        body.withSecurityGroupsId("4820eace-66ad-4f2c-8d46-cf340e3029dd");
        body.withSubnetName("subnet-4b44");
        body.withSubnetId("67984709-e15e-4e86-9886-d76712d4e00a");
        body.withVpcId("4a365717-67be-4f33-80c5-98e98a813af8");
        body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
        body.withComponentList(listbodyComponentList);
        body.withVpc("vpc-4b1c");
        body.withDataCenter("");
        body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
        body.withClusterName("mrs_HEbK");
        body.withClusterVersion("MRS 3.1.0");
        request.withBody(body);
        try {
            CreateClusterResponse response = client.createCluster(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
- Create a cluster with cluster HA enabled, without the node_groups parameter group. Cluster version: MRS 3.1.0.
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
import com.huaweicloud.sdk.mrs.v1.*;
import com.huaweicloud.sdk.mrs.v1.model.*;

import java.util.List;
import java.util.ArrayList;

public class CreateClusterSolution {

    public static void main(String[] args) {
        // Hard-coding or storing the AK and SK in plaintext is a serious security risk.
        // Store them in ciphertext in configuration files or environment variables and decrypt them when used.
        // In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK;
        // set both variables in the local environment before running the example.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
            .withProjectId(projectId)
            .withAk(ak)
            .withSk(sk);

        MrsClient client = MrsClient.newBuilder()
            .withCredential(auth)
            .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
            .build();

        CreateClusterRequest request = new CreateClusterRequest();
        CreateClusterReqV11 body = new CreateClusterReqV11();

        List<Tag> listbodyTags = new ArrayList<>();
        listbodyTags.add(new Tag().withKey("key1").withValue("value1"));
        listbodyTags.add(new Tag().withKey("key2").withValue("value2"));

        // Bootstrap action scripts and the stages at which they run.
        List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
        listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
        listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
        List<String> listBootstrapScriptsNodes = new ArrayList<>();
        listBootstrapScriptsNodes.add("master");
        List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages1 = new ArrayList<>();
        listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_COMPONENT_FIRST_START"));
        listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_SCALE_IN"));
        List<String> listBootstrapScriptsNodes1 = new ArrayList<>();
        listBootstrapScriptsNodes1.add("master");
        listBootstrapScriptsNodes1.add("core");
        listBootstrapScriptsNodes1.add("task");
        List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
        listbodyBootstrapScripts.add(new BootstrapScript()
            .withName("Modify os config")
            .withUri("s3a://XXX/modify_os_config.sh")
            .withParameters("param1 param2")
            .withNodes(listBootstrapScriptsNodes1)
            .withActiveMaster(false)
            .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
            .withBeforeComponentStart(true)
            .withStartTime(1667892101L)
            .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
            .withActionStages(listBootstrapScriptsActionStages1));
        listbodyBootstrapScripts.add(new BootstrapScript()
            .withName("Install zepplin")
            .withUri("s3a://XXX/zeppelin_install.sh")
            .withParameters("")
            .withNodes(listBootstrapScriptsNodes)
            .withActiveMaster(true)
            .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
            .withBeforeComponentStart(false)
            .withStartTime(1667892101L)
            .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
            .withActionStages(listBootstrapScriptsActionStages));

        // Node types on which the auto scaling scripts run.
        List<String> listExecScriptsNodes = new ArrayList<>();
        listExecScriptsNodes.add("master");
        listExecScriptsNodes.add("core");
        listExecScriptsNodes.add("task");
        List<String> listExecScriptsNodes1 = new ArrayList<>();
        listExecScriptsNodes1.add("master");
        listExecScriptsNodes1.add("core");
        listExecScriptsNodes1.add("task");

        // Custom automation scripts executed before and after scale-out.
        List<ScaleScript> listAutoScalingPolicyExecScripts = new ArrayList<>();
        listAutoScalingPolicyExecScripts.add(new ScaleScript()
            .withName("before_scale_out")
            .withUri("s3a://XXX/zeppelin_install.sh")
            .withParameters("${mrs_scale_node_num} ${mrs_scale_type} xxx")
            .withNodes(listExecScriptsNodes1)
            .withActiveMaster(true)
            .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
            .withActionStage(ScaleScript.ActionStageEnum.fromValue("before_scale_out")));
        listAutoScalingPolicyExecScripts.add(new ScaleScript()
            .withName("after_scale_out")
            .withUri("s3a://XXX/storm_rebalance.sh")
            .withParameters("${mrs_scale_node_hostnames} ${mrs_scale_node_ips}")
            .withNodes(listExecScriptsNodes)
            .withActiveMaster(true)
            .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
            .withActionStage(ScaleScript.ActionStageEnum.fromValue("after_scale_out")));

        // Triggers and rules for scale-out and scale-in.
        Trigger triggerRules = new Trigger();
        triggerRules.withMetricName("YARNMemoryAvailablePercentage")
            .withMetricValue("70")
            .withComparisonOperator("GT")
            .withEvaluationPeriods(10);
        Trigger triggerRules1 = new Trigger();
        triggerRules1.withMetricName("YARNMemoryAvailablePercentage")
            .withMetricValue("25")
            .withComparisonOperator("LT")
            .withEvaluationPeriods(10);
        List<Rule> listAutoScalingPolicyRules = new ArrayList<>();
        listAutoScalingPolicyRules.add(new Rule()
            .withName("default-expand-1")
            .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_out"))
            .withCoolDownMinutes(5)
            .withScalingAdjustment(1)
            .withTrigger(triggerRules1));
        listAutoScalingPolicyRules.add(new Rule()
            .withName("default-shrink-1")
            .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_in"))
            .withCoolDownMinutes(5)
            .withScalingAdjustment(1)
            .withTrigger(triggerRules));

        // Time-based resource plans.
        List<ResourcesPlan> listAutoScalingPolicyResourcesPlans = new ArrayList<>();
        listAutoScalingPolicyResourcesPlans.add(new ResourcesPlan()
            .withPeriodType("daily")
            .withStartTime("9:50")
            .withEndTime("10:20")
            .withMinCapacity(2)
            .withMaxCapacity(3));
        listAutoScalingPolicyResourcesPlans.add(new ResourcesPlan()
            .withPeriodType("daily")
            .withStartTime("10:20")
            .withEndTime("12:30")
            .withMinCapacity(0)
            .withMaxCapacity(2));

        AutoScalingPolicy autoScalingPolicyTaskNodeGroups = new AutoScalingPolicy();
        autoScalingPolicyTaskNodeGroups.withAutoScalingEnable(true)
            .withMinCapacity(1)
            .withMaxCapacity(3)
            .withResourcesPlans(listAutoScalingPolicyResourcesPlans)
            .withRules(listAutoScalingPolicyRules)
            .withExecScripts(listAutoScalingPolicyExecScripts);

        // Auto-scaled Task node group.
        List<TaskNodeGroup> listbodyTaskNodeGroups = new ArrayList<>();
        listbodyTaskNodeGroups.add(new TaskNodeGroup()
            .withNodeNum(2)
            .withNodeSize("s3.xlarge.2.linux.bigdata")
            .withDataVolumeType(TaskNodeGroup.DataVolumeTypeEnum.fromValue("SATA"))
            .withDataVolumeCount(1)
            .withDataVolumeSize(600)
            .withAutoScalingPolicy(autoScalingPolicyTaskNodeGroups));

        // Job submitted together with cluster creation.
        List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
        listbodyAddJobs.add(new AddJobsReqV11()
            .withJobType(1)
            .withJobName("tenji111")
            .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar")
            .withArguments("wordcount")
            .withInput("s3a://bigdata/input/wd_1k/")
            .withOutput("s3a://bigdata/ouput/")
            .withJobLog("s3a://bigdata/log/")
            .withHiveScriptPath("")
            .withHql("")
            .withShutdownCluster(true)
            .withSubmitJobOnceClusterRun(true)
            .withFileAction(""));

        List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hadoop"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Spark"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("HBase"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hive"));

        body.withTags(listbodyTags);
        body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
        body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
        body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
        body.withNodePublicCertName("SSHkey-bba1");
        body.withBootstrapScripts(listbodyBootstrapScripts);
        body.withTaskNodeGroups(listbodyTaskNodeGroups);
        body.withCoreDataVolumeCount(2);
        body.withCoreDataVolumeSize(600);
        body.withCoreDataVolumeType(CreateClusterReqV11.CoreDataVolumeTypeEnum.fromValue("SATA"));
        body.withMasterDataVolumeCount(CreateClusterReqV11.MasterDataVolumeCountEnum.NUMBER_1);
        body.withMasterDataVolumeSize(600);
        body.withMasterDataVolumeType(CreateClusterReqV11.MasterDataVolumeTypeEnum.fromValue("SATA"));
        body.withAddJobs(listbodyAddJobs);
        body.withSecurityGroupsId("845bece1-fd22-4b45-7a6e-14338c99ee43");
        body.withSubnetName("subnet");
        body.withSubnetId("815bece0-fd22-4b65-8a6e-15788c99ee43");
        body.withVpcId("5b7db34d-3534-4a6e-ac94-023cd36aaf74");
        body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
        body.withComponentList(listbodyComponentList);
        body.withCoreNodeSize("s1.xlarge.linux.bigdata");
        body.withMasterNodeSize("s3.2xlarge.2.linux.bigdata");
        body.withVpc("vpc1");
        body.withDataCenter("");
        body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
        body.withCoreNodeNum(3);
        body.withMasterNodeNum(2);
        body.withClusterName("newcluster");
        body.withClusterVersion("MRS 3.1.0");
        request.withBody(body);
        try {
            CreateClusterResponse response = client.createCluster(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
- Create a minimum-specification cluster with cluster HA disabled, using the node_groups parameter group. Cluster version: MRS 3.1.0.
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.mrs.v1.region.MrsRegion;
import com.huaweicloud.sdk.mrs.v1.*;
import com.huaweicloud.sdk.mrs.v1.model.*;

import java.util.List;
import java.util.ArrayList;

public class CreateClusterSolution {

    public static void main(String[] args) {
        // Hard-coding or storing the AK and SK in plaintext is a serious security risk.
        // Store them in ciphertext in configuration files or environment variables and decrypt them when used.
        // In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK;
        // set both variables in the local environment before running the example.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
            .withProjectId(projectId)
            .withAk(ak)
            .withSk(sk);

        MrsClient client = MrsClient.newBuilder()
            .withCredential(auth)
            .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
            .build();

        CreateClusterRequest request = new CreateClusterRequest();
        CreateClusterReqV11 body = new CreateClusterReqV11();

        // Minimum-specification node groups: 1 Master node (HA disabled) and 1 Core node.
        List<NodeGroupV11> listbodyNodeGroups = new ArrayList<>();
        listbodyNodeGroups.add(new NodeGroupV11()
            .withGroupName("master_node_default_group")
            .withNodeNum(1)
            .withNodeSize("s3.xlarge.2.linux.bigdata")
            .withRootVolumeSize("480")
            .withRootVolumeType("SATA")
            .withDataVolumeType("SATA")
            .withDataVolumeCount(1)
            .withDataVolumeSize(600));
        listbodyNodeGroups.add(new NodeGroupV11()
            .withGroupName("core_node_analysis_group")
            .withNodeNum(1)
            .withNodeSize("s3.xlarge.2.linux.bigdata")
            .withRootVolumeSize("480")
            .withRootVolumeType("SATA")
            .withDataVolumeType("SATA")
            .withDataVolumeCount(1)
            .withDataVolumeSize(600));

        List<Tag> listbodyTags = new ArrayList<>();
        listbodyTags.add(new Tag().withKey("key1").withValue("value1"));
        listbodyTags.add(new Tag().withKey("key2").withValue("value2"));

        // Bootstrap action scripts and the stages at which they run.
        List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>();
        listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN"));
        listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT"));
        List<String> listBootstrapScriptsNodes = new ArrayList<>();
        listBootstrapScriptsNodes.add("master");
        List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages1 = new ArrayList<>();
        listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_COMPONENT_FIRST_START"));
        listBootstrapScriptsActionStages1.add(BootstrapScript.ActionStagesEnum.fromValue("BEFORE_SCALE_IN"));
        List<String> listBootstrapScriptsNodes1 = new ArrayList<>();
        listBootstrapScriptsNodes1.add("master");
        listBootstrapScriptsNodes1.add("core");
        listBootstrapScriptsNodes1.add("task");
        List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>();
        listbodyBootstrapScripts.add(new BootstrapScript()
            .withName("Modify os config")
            .withUri("s3a://XXX/modify_os_config.sh")
            .withParameters("param1 param2")
            .withNodes(listBootstrapScriptsNodes1)
            .withActiveMaster(false)
            .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
            .withBeforeComponentStart(true)
            .withStartTime(1667892101L)
            .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
            .withActionStages(listBootstrapScriptsActionStages1));
        listbodyBootstrapScripts.add(new BootstrapScript()
            .withName("Install zepplin")
            .withUri("s3a://XXX/zeppelin_install.sh")
            .withParameters("")
            .withNodes(listBootstrapScriptsNodes)
            .withActiveMaster(true)
            .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue"))
            .withBeforeComponentStart(false)
            .withStartTime(1667892101L)
            .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS"))
            .withActionStages(listBootstrapScriptsActionStages));

        // Job submitted together with cluster creation.
        List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>();
        listbodyAddJobs.add(new AddJobsReqV11()
            .withJobType(1)
            .withJobName("tenji111")
            .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar")
            .withArguments("wordcount")
            .withInput("s3a://bigdata/input/wd_1k/")
            .withOutput("s3a://bigdata/ouput/")
            .withJobLog("s3a://bigdata/log/")
            .withHiveScriptPath("")
            .withHql("")
            .withShutdownCluster(true)
            .withSubmitJobOnceClusterRun(true)
            .withFileAction(""));

        List<ComponentAmbV11> listbodyComponentList = new ArrayList<>();
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hadoop"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Spark"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("HBase"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hive"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Presto"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Tez"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Hue"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Loader"));
        listbodyComponentList.add(new ComponentAmbV11().withComponentName("Flink"));

        body.withNodeGroups(listbodyNodeGroups);
        body.withLoginMode(CreateClusterReqV11.LoginModeEnum.NUMBER_1);
        body.withTags(listbodyTags);
        body.withEnterpriseProjectId("0");
        body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1);
        body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0);
        body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0);
        body.withClusterMasterSecret("");
        body.withClusterAdminSecret("");
        body.withBootstrapScripts(listbodyBootstrapScripts);
        body.withAddJobs(listbodyAddJobs);
        body.withSecurityGroupsId("4820eace-66ad-4f2c-8d46-cf340e3029dd");
        body.withSubnetName("subnet-4b44");
        body.withSubnetId("67984709-e15e-4e86-9886-d76712d4e00a");
        body.withVpcId("4a365717-67be-4f33-80c5-98e98a813af8");
        body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0");
        body.withComponentList(listbodyComponentList);
        body.withVpc("vpc-4b1c");
        body.withDataCenter("");
        body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12);
        body.withClusterName("mrs_HEbK");
        body.withClusterVersion("MRS 3.1.0");
        request.withBody(body);
        try {
            CreateClusterResponse response = client.createCluster(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
- Create a minimum-specification cluster with cluster HA disabled, without the node_groups parameter group. Cluster version: MRS 3.1.0.
package com.huaweicloud.sdk.test; import com.huaweicloud.sdk.core.auth.ICredential; import com.huaweicloud.sdk.core.auth.BasicCredentials; import com.huaweicloud.sdk.core.exception.ConnectionException; import com.huaweicloud.sdk.core.exception.RequestTimeoutException; import com.huaweicloud.sdk.core.exception.ServiceResponseException; import com.huaweicloud.sdk.mrs.v1.region.MrsRegion; import com.huaweicloud.sdk.mrs.v1.*; import com.huaweicloud.sdk.mrs.v1.model.*; import java.util.List; import java.util.ArrayList; public class CreateClusterSolution { public static void main(String[] args) { // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment String ak = System.getenv("CLOUD_SDK_AK"); String sk = System.getenv("CLOUD_SDK_SK"); String projectId = "{project_id}"; ICredential auth = new BasicCredentials() .withProjectId(projectId) .withAk(ak) .withSk(sk); MrsClient client = MrsClient.newBuilder() .withCredential(auth) .withRegion(MrsRegion.valueOf("<YOUR REGION>")) .build(); CreateClusterRequest request = new CreateClusterRequest(); CreateClusterReqV11 body = new CreateClusterReqV11(); List<Tag> listbodyTags = new ArrayList<>(); listbodyTags.add( new Tag() .withKey("key1") .withValue("value1") ); listbodyTags.add( new Tag() .withKey("key2") .withValue("value2") ); List<BootstrapScript.ActionStagesEnum> listBootstrapScriptsActionStages = new ArrayList<>(); listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_IN")); listBootstrapScriptsActionStages.add(BootstrapScript.ActionStagesEnum.fromValue("AFTER_SCALE_OUT")); List<String> listBootstrapScriptsNodes = new ArrayList<>(); listBootstrapScriptsNodes.add("master"); List<BootstrapScript> listbodyBootstrapScripts = new ArrayList<>(); listbodyBootstrapScripts.add( new BootstrapScript() .withName("Install zepplin") .withUri("s3a://XXX/zeppelin_install.sh") .withParameters("") .withNodes(listBootstrapScriptsNodes) .withActiveMaster(false) .withFailAction(BootstrapScript.FailActionEnum.fromValue("continue")) .withBeforeComponentStart(false) .withStartTime(1667892101L) .withState(BootstrapScript.StateEnum.fromValue("IN_PROGRESS")) .withActionStages(listBootstrapScriptsActionStages) ); List<AddJobsReqV11> listbodyAddJobs = new ArrayList<>(); listbodyAddJobs.add( new AddJobsReqV11() .withJobType(1) .withJobName("tenji111") .withJarPath("s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar") .withArguments("wordcount") .withInput("s3a://bigdata/input/wd_1k/") .withOutput("s3a://bigdata/ouput/") .withJobLog("s3a://bigdata/log/") .withHiveScriptPath("") .withHql("") .withShutdownCluster(false) .withSubmitJobOnceClusterRun(true) .withFileAction("") ); List<ComponentAmbV11> listbodyComponentList = new ArrayList<>(); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Hadoop") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Spark") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("HBase") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Hive") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Presto") ); 
listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Tez") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Hue") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Loader") ); listbodyComponentList.add( new ComponentAmbV11() .withComponentName("Flink") ); body.withLoginMode(CreateClusterReqV11.LoginModeEnum.NUMBER_1); body.withTags(listbodyTags); body.withEnterpriseProjectId("0"); body.withLogCollection(CreateClusterReqV11.LogCollectionEnum.NUMBER_1); body.withClusterType(CreateClusterReqV11.ClusterTypeEnum.NUMBER_0); body.withSafeMode(CreateClusterReqV11.SafeModeEnum.NUMBER_0); body.withClusterAdminSecret("******"); body.withNodePublicCertName("SSHkey-bba1"); body.withBootstrapScripts(listbodyBootstrapScripts); body.withCoreDataVolumeCount(1); body.withCoreDataVolumeSize(600); body.withCoreDataVolumeType(CreateClusterReqV11.CoreDataVolumeTypeEnum.fromValue("SATA")); body.withMasterDataVolumeCount(CreateClusterReqV11.MasterDataVolumeCountEnum.NUMBER_1); body.withMasterDataVolumeSize(600); body.withMasterDataVolumeType(CreateClusterReqV11.MasterDataVolumeTypeEnum.fromValue("SATA")); body.withAddJobs(listbodyAddJobs); body.withSecurityGroupsId(""); body.withSubnetName("subnet"); body.withSubnetId("815bece0-fd22-4b65-8a6e-15788c99ee43"); body.withVpcId("5b7db34d-3534-4a6e-ac94-023cd36aaf74"); body.withAvailableZoneId("d573142f24894ef3bd3664de068b44b0"); body.withComponentList(listbodyComponentList); body.withCoreNodeSize("s1.xlarge.linux.bigdata"); body.withMasterNodeSize("s3.2xlarge.2.linux.bigdata"); body.withVpc("vpc1"); body.withDataCenter(""); body.withBillingType(CreateClusterReqV11.BillingTypeEnum.NUMBER_12); body.withCoreNodeNum(1); body.withMasterNodeNum(1); body.withClusterName("newcluster"); body.withClusterVersion("MRS 3.1.0"); request.withBody(body); try { CreateClusterResponse response = client.createCluster(request); System.out.println(response.toString()); } catch (ConnectionException e) { e.printStackTrace(); } catch (RequestTimeoutException e) { e.printStackTrace(); } catch (ServiceResponseException e) { e.printStackTrace(); System.out.println(e.getHttpStatusCode()); System.out.println(e.getRequestId()); System.out.println(e.getErrorCode()); System.out.println(e.getErrorMsg()); } } }
-
Use the node_groups parameter group to create a cluster with the cluster HA function enabled; the cluster version is MRS 3.1.0 (Python example).
# coding: utf-8 import os from huaweicloudsdkcore.auth.credentials import BasicCredentials from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion from huaweicloudsdkcore.exceptions import exceptions from huaweicloudsdkmrs.v1 import * if __name__ == "__main__": # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak = os.environ["CLOUD_SDK_AK"] sk = os.environ["CLOUD_SDK_SK"] projectId = "{project_id}" credentials = BasicCredentials(ak, sk, projectId) client = MrsClient.new_builder() \ .with_credentials(credentials) \ .with_region(MrsRegion.value_of("<YOUR REGION>")) \ .build() try: request = CreateClusterRequest() listNodesExecScripts = [ "master", "core", "task" ] listNodesExecScripts1 = [ "master", "core", "task" ] listExecScriptsAutoScalingPolicy = [ ScaleScript( name="before_scale_out", uri="s3a://XXX/zeppelin_install.sh", parameters="${mrs_scale_node_num} ${mrs_scale_type} xxx", nodes=listNodesExecScripts1, active_master=True, fail_action="continue", action_stage="before_scale_out" ), ScaleScript( name="after_scale_out", uri="s3a://XXX/storm_rebalance.sh", parameters="${mrs_scale_node_hostnames} ${mrs_scale_node_ips}", nodes=listNodesExecScripts, active_master=True, fail_action="continue", action_stage="after_scale_out" ) ] triggerRules = Trigger( metric_name="YARNMemoryAvailablePercentage", metric_value="70", comparison_operator="GT", evaluation_periods=10 ) triggerRules1 = Trigger( metric_name="YARNMemoryAvailablePercentage", metric_value="25", comparison_operator="LT", evaluation_periods=10 ) listRulesAutoScalingPolicy = [ Rule( name="default-expand-1", adjustment_type="scale_out", cool_down_minutes=5, scaling_adjustment=1, trigger=triggerRules1 ), Rule( name="default-shrink-1", adjustment_type="scale_in", cool_down_minutes=5, scaling_adjustment=1, trigger=triggerRules ) ] listResourcesPlansAutoScalingPolicy = [ ResourcesPlan( period_type="daily", start_time="9:50", end_time="10:20", min_capacity=2, max_capacity=3 ), ResourcesPlan( period_type="daily", start_time="10:20", end_time="12:30", min_capacity=0, max_capacity=2 ) ] autoScalingPolicyNodeGroups = AutoScalingPolicy( auto_scaling_enable=True, min_capacity=1, max_capacity=3, resources_plans=listResourcesPlansAutoScalingPolicy, rules=listRulesAutoScalingPolicy, exec_scripts=listExecScriptsAutoScalingPolicy ) listNodeGroupsbody = [ NodeGroupV11( group_name="master_node_default_group", node_num=2, node_size="s3.xlarge.2.linux.bigdata", root_volume_size="480", root_volume_type="SATA", data_volume_type="SATA", data_volume_count=1, data_volume_size=600 ), NodeGroupV11( group_name="core_node_analysis_group", node_num=3, node_size="s3.xlarge.2.linux.bigdata", root_volume_size="480", root_volume_type="SATA", data_volume_type="SATA", data_volume_count=1, data_volume_size=600 ), NodeGroupV11( group_name="task_node_analysis_group", node_num=2, node_size="s3.xlarge.2.linux.bigdata", root_volume_size="480", root_volume_type="SATA", data_volume_type="SATA", data_volume_count=0, data_volume_size=600, auto_scaling_policy=autoScalingPolicyNodeGroups ) ] listTagsbody = [ Tag( key="key1", value="value1" ), Tag( key="key2", value="value2" ) ] 
listActionStagesBootstrapScripts = [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ] listNodesBootstrapScripts = [ "master" ] listActionStagesBootstrapScripts1 = [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ] listNodesBootstrapScripts1 = [ "master", "core", "task" ] listBootstrapScriptsbody = [ BootstrapScript( name="Modify os config", uri="s3a://XXX/modify_os_config.sh", parameters="param1 param2", nodes=listNodesBootstrapScripts1, active_master=False, fail_action="continue", before_component_start=True, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts1 ), BootstrapScript( name="Install zepplin", uri="s3a://XXX/zeppelin_install.sh", parameters="", nodes=listNodesBootstrapScripts, active_master=True, fail_action="continue", before_component_start=False, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts ) ] listAddJobsbody = [ AddJobsReqV11( job_type=1, job_name="tenji111", jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar", arguments="wordcount", input="s3a://bigdata/input/wd_1k/", output="s3a://bigdata/ouput/", job_log="s3a://bigdata/log/", hive_script_path="", hql="", shutdown_cluster=True, submit_job_once_cluster_run=True, file_action="" ) ] listComponentListbody = [ ComponentAmbV11( component_name="Hadoop" ), ComponentAmbV11( component_name="Spark" ), ComponentAmbV11( component_name="HBase" ), ComponentAmbV11( component_name="Hive" ), ComponentAmbV11( component_name="Presto" ), ComponentAmbV11( component_name="Tez" ), ComponentAmbV11( component_name="Hue" ), ComponentAmbV11( component_name="Loader" ), ComponentAmbV11( component_name="Flink" ) ] request.body = CreateClusterReqV11( node_groups=listNodeGroupsbody, login_mode=1, tags=listTagsbody, enterprise_project_id="0", log_collection=1, cluster_type=0, safe_mode=0, cluster_master_secret="", cluster_admin_secret="", bootstrap_scripts=listBootstrapScriptsbody, add_jobs=listAddJobsbody, security_groups_id="4820eace-66ad-4f2c-8d46-cf340e3029dd", subnet_name="subnet-4b44", subnet_id="67984709-e15e-4e86-9886-d76712d4e00a", vpc_id="4a365717-67be-4f33-80c5-98e98a813af8", available_zone_id="d573142f24894ef3bd3664de068b44b0", component_list=listComponentListbody, vpc="vpc-4b1c", data_center="", billing_type=12, cluster_name="mrs_HEbK", cluster_version="MRS 3.1.0" ) response = client.create_cluster(request) print(response) except exceptions.ClientRequestException as e: print(e.status_code) print(e.request_id) print(e.error_code) print(e.error_msg)
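In this example, the auto_scaling_policy attached to task_node_analysis_group keeps the task node group between 1 and 3 nodes: rule default-expand-1 adds one node when YARNMemoryAvailablePercentage stays below 25 for 10 consecutive evaluation periods, rule default-shrink-1 removes one node when the metric stays above 70, and each scaling action is followed by a 5-minute cool-down. The two resources_plans entries additionally pin the group's capacity to 2-3 nodes between 9:50 and 10:20 and to 0-2 nodes between 10:20 and 12:30 every day.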
-
Create a cluster with the cluster HA function enabled, without using the node_groups parameter group; the cluster version is MRS 3.1.0 (Python example).
# coding: utf-8 import os from huaweicloudsdkcore.auth.credentials import BasicCredentials from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion from huaweicloudsdkcore.exceptions import exceptions from huaweicloudsdkmrs.v1 import * if __name__ == "__main__": # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak = os.environ["CLOUD_SDK_AK"] sk = os.environ["CLOUD_SDK_SK"] projectId = "{project_id}" credentials = BasicCredentials(ak, sk, projectId) client = MrsClient.new_builder() \ .with_credentials(credentials) \ .with_region(MrsRegion.value_of("<YOUR REGION>")) \ .build() try: request = CreateClusterRequest() listTagsbody = [ Tag( key="key1", value="value1" ), Tag( key="key2", value="value2" ) ] listActionStagesBootstrapScripts = [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ] listNodesBootstrapScripts = [ "master" ] listActionStagesBootstrapScripts1 = [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ] listNodesBootstrapScripts1 = [ "master", "core", "task" ] listBootstrapScriptsbody = [ BootstrapScript( name="Modify os config", uri="s3a://XXX/modify_os_config.sh", parameters="param1 param2", nodes=listNodesBootstrapScripts1, active_master=False, fail_action="continue", before_component_start=True, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts1 ), BootstrapScript( name="Install zepplin", uri="s3a://XXX/zeppelin_install.sh", parameters="", nodes=listNodesBootstrapScripts, active_master=True, fail_action="continue", before_component_start=False, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts ) ] listNodesExecScripts = [ "master", "core", "task" ] listNodesExecScripts1 = [ "master", "core", "task" ] listExecScriptsAutoScalingPolicy = [ ScaleScript( name="before_scale_out", uri="s3a://XXX/zeppelin_install.sh", parameters="${mrs_scale_node_num} ${mrs_scale_type} xxx", nodes=listNodesExecScripts1, active_master=True, fail_action="continue", action_stage="before_scale_out" ), ScaleScript( name="after_scale_out", uri="s3a://XXX/storm_rebalance.sh", parameters="${mrs_scale_node_hostnames} ${mrs_scale_node_ips}", nodes=listNodesExecScripts, active_master=True, fail_action="continue", action_stage="after_scale_out" ) ] triggerRules = Trigger( metric_name="YARNMemoryAvailablePercentage", metric_value="70", comparison_operator="GT", evaluation_periods=10 ) triggerRules1 = Trigger( metric_name="YARNMemoryAvailablePercentage", metric_value="25", comparison_operator="LT", evaluation_periods=10 ) listRulesAutoScalingPolicy = [ Rule( name="default-expand-1", adjustment_type="scale_out", cool_down_minutes=5, scaling_adjustment=1, trigger=triggerRules1 ), Rule( name="default-shrink-1", adjustment_type="scale_in", cool_down_minutes=5, scaling_adjustment=1, trigger=triggerRules ) ] listResourcesPlansAutoScalingPolicy = [ ResourcesPlan( period_type="daily", start_time="9:50", end_time="10:20", min_capacity=2, max_capacity=3 ), ResourcesPlan( period_type="daily", start_time="10:20", end_time="12:30", min_capacity=0, max_capacity=2 ) ] autoScalingPolicyTaskNodeGroups = AutoScalingPolicy( auto_scaling_enable=True, 
min_capacity=1, max_capacity=3, resources_plans=listResourcesPlansAutoScalingPolicy, rules=listRulesAutoScalingPolicy, exec_scripts=listExecScriptsAutoScalingPolicy ) listTaskNodeGroupsbody = [ TaskNodeGroup( node_num=2, node_size="s3.xlarge.2.linux.bigdata", data_volume_type="SATA", data_volume_count=1, data_volume_size=600, auto_scaling_policy=autoScalingPolicyTaskNodeGroups ) ] listAddJobsbody = [ AddJobsReqV11( job_type=1, job_name="tenji111", jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar", arguments="wordcount", input="s3a://bigdata/input/wd_1k/", output="s3a://bigdata/ouput/", job_log="s3a://bigdata/log/", hive_script_path="", hql="", shutdown_cluster=True, submit_job_once_cluster_run=True, file_action="" ) ] listComponentListbody = [ ComponentAmbV11( component_name="Hadoop" ), ComponentAmbV11( component_name="Spark" ), ComponentAmbV11( component_name="HBase" ), ComponentAmbV11( component_name="Hive" ) ] request.body = CreateClusterReqV11( tags=listTagsbody, log_collection=1, cluster_type=0, safe_mode=0, node_public_cert_name="SSHkey-bba1", bootstrap_scripts=listBootstrapScriptsbody, task_node_groups=listTaskNodeGroupsbody, core_data_volume_count=2, core_data_volume_size=600, core_data_volume_type="SATA", master_data_volume_count=1, master_data_volume_size=600, master_data_volume_type="SATA", add_jobs=listAddJobsbody, security_groups_id="845bece1-fd22-4b45-7a6e-14338c99ee43", subnet_name="subnet", subnet_id="815bece0-fd22-4b65-8a6e-15788c99ee43", vpc_id="5b7db34d-3534-4a6e-ac94-023cd36aaf74", available_zone_id="d573142f24894ef3bd3664de068b44b0", component_list=listComponentListbody, core_node_size="s1.xlarge.linux.bigdata", master_node_size="s3.2xlarge.2.linux.bigdata", vpc="vpc1", data_center="", billing_type=12, core_node_num=3, master_node_num=2, cluster_name="newcluster", cluster_version="MRS 3.1.0" ) response = client.create_cluster(request) print(response) except exceptions.ClientRequestException as e: print(e.status_code) print(e.request_id) print(e.error_code) print(e.error_msg)
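The V1.1 create call returns the new cluster's ID in its response body, which you can use to follow up on provisioning. Below is a minimal sketch of reading the ID and querying the cluster afterwards; it assumes that the generated Python model exposes the response's cluster_id field as an attribute of the same name, and that your SDK version ships ShowClusterDetailsRequest (already covered by the wildcard import above) for the cluster-details API, so verify both against the SDK you actually use:
response = client.create_cluster(request)
# "cluster_id" is part of the V1.1 response body (assumption: the generated model exposes it 1:1).
cluster_id = response.cluster_id
# Follow-up query (assumption: ShowClusterDetailsRequest wraps
# GET /v1.1/{project_id}/cluster_infos/{cluster_id} in this SDK version).
details = client.show_cluster_details(ShowClusterDetailsRequest(cluster_id=cluster_id))
print(details)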
-
Use the node_groups parameter group to create a minimum-specification cluster with the cluster HA function disabled; the cluster version is MRS 3.1.0 (Python example).
# coding: utf-8 import os from huaweicloudsdkcore.auth.credentials import BasicCredentials from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion from huaweicloudsdkcore.exceptions import exceptions from huaweicloudsdkmrs.v1 import * if __name__ == "__main__": # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak = os.environ["CLOUD_SDK_AK"] sk = os.environ["CLOUD_SDK_SK"] projectId = "{project_id}" credentials = BasicCredentials(ak, sk, projectId) client = MrsClient.new_builder() \ .with_credentials(credentials) \ .with_region(MrsRegion.value_of("<YOUR REGION>")) \ .build() try: request = CreateClusterRequest() listNodeGroupsbody = [ NodeGroupV11( group_name="master_node_default_group", node_num=1, node_size="s3.xlarge.2.linux.bigdata", root_volume_size="480", root_volume_type="SATA", data_volume_type="SATA", data_volume_count=1, data_volume_size=600 ), NodeGroupV11( group_name="core_node_analysis_group", node_num=1, node_size="s3.xlarge.2.linux.bigdata", root_volume_size="480", root_volume_type="SATA", data_volume_type="SATA", data_volume_count=1, data_volume_size=600 ) ] listTagsbody = [ Tag( key="key1", value="value1" ), Tag( key="key2", value="value2" ) ] listActionStagesBootstrapScripts = [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ] listNodesBootstrapScripts = [ "master" ] listActionStagesBootstrapScripts1 = [ "BEFORE_COMPONENT_FIRST_START", "BEFORE_SCALE_IN" ] listNodesBootstrapScripts1 = [ "master", "core", "task" ] listBootstrapScriptsbody = [ BootstrapScript( name="Modify os config", uri="s3a://XXX/modify_os_config.sh", parameters="param1 param2", nodes=listNodesBootstrapScripts1, active_master=False, fail_action="continue", before_component_start=True, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts1 ), BootstrapScript( name="Install zepplin", uri="s3a://XXX/zeppelin_install.sh", parameters="", nodes=listNodesBootstrapScripts, active_master=True, fail_action="continue", before_component_start=False, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts ) ] listAddJobsbody = [ AddJobsReqV11( job_type=1, job_name="tenji111", jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar", arguments="wordcount", input="s3a://bigdata/input/wd_1k/", output="s3a://bigdata/ouput/", job_log="s3a://bigdata/log/", hive_script_path="", hql="", shutdown_cluster=True, submit_job_once_cluster_run=True, file_action="" ) ] listComponentListbody = [ ComponentAmbV11( component_name="Hadoop" ), ComponentAmbV11( component_name="Spark" ), ComponentAmbV11( component_name="HBase" ), ComponentAmbV11( component_name="Hive" ), ComponentAmbV11( component_name="Presto" ), ComponentAmbV11( component_name="Tez" ), ComponentAmbV11( component_name="Hue" ), ComponentAmbV11( component_name="Loader" ), ComponentAmbV11( component_name="Flink" ) ] request.body = CreateClusterReqV11( node_groups=listNodeGroupsbody, login_mode=1, tags=listTagsbody, enterprise_project_id="0", log_collection=1, cluster_type=0, safe_mode=0, cluster_master_secret="", cluster_admin_secret="", bootstrap_scripts=listBootstrapScriptsbody, 
add_jobs=listAddJobsbody, security_groups_id="4820eace-66ad-4f2c-8d46-cf340e3029dd", subnet_name="subnet-4b44", subnet_id="67984709-e15e-4e86-9886-d76712d4e00a", vpc_id="4a365717-67be-4f33-80c5-98e98a813af8", available_zone_id="d573142f24894ef3bd3664de068b44b0", component_list=listComponentListbody, vpc="vpc-4b1c", data_center="", billing_type=12, cluster_name="mrs_HEbK", cluster_version="MRS 3.1.0" ) response = client.create_cluster(request) print(response) except exceptions.ClientRequestException as e: print(e.status_code) print(e.request_id) print(e.error_code) print(e.error_msg)
-
Create a minimum-specification cluster with the cluster HA function disabled, without using the node_groups parameter group; the cluster version is MRS 3.1.0 (Python example).
# coding: utf-8 import os from huaweicloudsdkcore.auth.credentials import BasicCredentials from huaweicloudsdkmrs.v1.region.mrs_region import MrsRegion from huaweicloudsdkcore.exceptions import exceptions from huaweicloudsdkmrs.v1 import * if __name__ == "__main__": # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak = os.environ["CLOUD_SDK_AK"] sk = os.environ["CLOUD_SDK_SK"] projectId = "{project_id}" credentials = BasicCredentials(ak, sk, projectId) client = MrsClient.new_builder() \ .with_credentials(credentials) \ .with_region(MrsRegion.value_of("<YOUR REGION>")) \ .build() try: request = CreateClusterRequest() listTagsbody = [ Tag( key="key1", value="value1" ), Tag( key="key2", value="value2" ) ] listActionStagesBootstrapScripts = [ "AFTER_SCALE_IN", "AFTER_SCALE_OUT" ] listNodesBootstrapScripts = [ "master" ] listBootstrapScriptsbody = [ BootstrapScript( name="Install zepplin", uri="s3a://XXX/zeppelin_install.sh", parameters="", nodes=listNodesBootstrapScripts, active_master=False, fail_action="continue", before_component_start=False, start_time=1667892101, state="IN_PROGRESS", action_stages=listActionStagesBootstrapScripts ) ] listAddJobsbody = [ AddJobsReqV11( job_type=1, job_name="tenji111", jar_path="s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar", arguments="wordcount", input="s3a://bigdata/input/wd_1k/", output="s3a://bigdata/ouput/", job_log="s3a://bigdata/log/", hive_script_path="", hql="", shutdown_cluster=False, submit_job_once_cluster_run=True, file_action="" ) ] listComponentListbody = [ ComponentAmbV11( component_name="Hadoop" ), ComponentAmbV11( component_name="Spark" ), ComponentAmbV11( component_name="HBase" ), ComponentAmbV11( component_name="Hive" ), ComponentAmbV11( component_name="Presto" ), ComponentAmbV11( component_name="Tez" ), ComponentAmbV11( component_name="Hue" ), ComponentAmbV11( component_name="Loader" ), ComponentAmbV11( component_name="Flink" ) ] request.body = CreateClusterReqV11( login_mode=1, tags=listTagsbody, enterprise_project_id="0", log_collection=1, cluster_type=0, safe_mode=0, cluster_admin_secret="******", node_public_cert_name="SSHkey-bba1", bootstrap_scripts=listBootstrapScriptsbody, core_data_volume_count=1, core_data_volume_size=600, core_data_volume_type="SATA", master_data_volume_count=1, master_data_volume_size=600, master_data_volume_type="SATA", add_jobs=listAddJobsbody, security_groups_id="", subnet_name="subnet", subnet_id="815bece0-fd22-4b65-8a6e-15788c99ee43", vpc_id="5b7db34d-3534-4a6e-ac94-023cd36aaf74", available_zone_id="d573142f24894ef3bd3664de068b44b0", component_list=listComponentListbody, core_node_size="s1.xlarge.linux.bigdata", master_node_size="s3.2xlarge.2.linux.bigdata", vpc="vpc1", data_center="", billing_type=12, core_node_num=1, master_node_num=1, cluster_name="newcluster", cluster_version="MRS 3.1.0" ) response = client.create_cluster(request) print(response) except exceptions.ClientRequestException as e: print(e.status_code) print(e.request_id) print(e.error_code) print(e.error_msg)
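Note that this example sets shutdown_cluster=False, so the cluster keeps running after the submitted job completes; in the variants that set shutdown_cluster=True, the cluster is deleted automatically once the job finishes.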
-
Use the node_groups parameter group to create a cluster with the cluster HA function enabled; the cluster version is MRS 3.1.0 (Go example).
package main import ( "fmt" "os" "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic" mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1" "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model" region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region" ) func main() { // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak := os.Getenv("CLOUD_SDK_AK") sk := os.Getenv("CLOUD_SDK_SK") projectId := "{project_id}" auth := basic.NewCredentialsBuilder(). WithAk(ak). WithSk(sk). WithProjectId(projectId). Build() client := mrs.NewMrsClient( mrs.MrsClientBuilder(). WithRegion(region.ValueOf("<YOUR REGION>")). WithCredential(auth). Build()) request := &model.CreateClusterRequest{} var listNodesExecScripts = []string{ "master", "core", "task", } var listNodesExecScripts1 = []string{ "master", "core", "task", } parametersExecScripts:= "${mrs_scale_node_num} ${mrs_scale_type} xxx" activeMasterExecScripts:= true parametersExecScripts1:= "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}" activeMasterExecScripts1:= true var listExecScriptsAutoScalingPolicy = []model.ScaleScript{ { Name: "before_scale_out", Uri: "s3a://XXX/zeppelin_install.sh", Parameters: &parametersExecScripts, Nodes: listNodesExecScripts1, ActiveMaster: &activeMasterExecScripts, FailAction: model.GetScaleScriptFailActionEnum().CONTINUE, ActionStage: model.GetScaleScriptActionStageEnum().BEFORE_SCALE_OUT, }, { Name: "after_scale_out", Uri: "s3a://XXX/storm_rebalance.sh", Parameters: &parametersExecScripts1, Nodes: listNodesExecScripts, ActiveMaster: &activeMasterExecScripts1, FailAction: model.GetScaleScriptFailActionEnum().CONTINUE, ActionStage: model.GetScaleScriptActionStageEnum().AFTER_SCALE_OUT, }, } comparisonOperatorTrigger:= "GT" triggerRules := &model.Trigger{ MetricName: "YARNMemoryAvailablePercentage", MetricValue: "70", ComparisonOperator: &comparisonOperatorTrigger, EvaluationPeriods: int32(10), } comparisonOperatorTrigger1:= "LT" triggerRules1 := &model.Trigger{ MetricName: "YARNMemoryAvailablePercentage", MetricValue: "25", ComparisonOperator: &comparisonOperatorTrigger1, EvaluationPeriods: int32(10), } var listRulesAutoScalingPolicy = []model.Rule{ { Name: "default-expand-1", AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_OUT, CoolDownMinutes: int32(5), ScalingAdjustment: int32(1), Trigger: triggerRules1, }, { Name: "default-shrink-1", AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_IN, CoolDownMinutes: int32(5), ScalingAdjustment: int32(1), Trigger: triggerRules, }, } var listResourcesPlansAutoScalingPolicy = []model.ResourcesPlan{ { PeriodType: "daily", StartTime: "9:50", EndTime: "10:20", MinCapacity: int32(2), MaxCapacity: int32(3), }, { PeriodType: "daily", StartTime: "10:20", EndTime: "12:30", MinCapacity: int32(0), MaxCapacity: int32(2), }, } autoScalingPolicyNodeGroups := &model.AutoScalingPolicy{ AutoScalingEnable: true, MinCapacity: int32(1), MaxCapacity: int32(3), ResourcesPlans: &listResourcesPlansAutoScalingPolicy, Rules: &listRulesAutoScalingPolicy, ExecScripts: &listExecScriptsAutoScalingPolicy, } rootVolumeSizeNodeGroups:= "480" 
rootVolumeTypeNodeGroups:= "SATA" dataVolumeTypeNodeGroups:= "SATA" dataVolumeCountNodeGroups:= int32(1) dataVolumeSizeNodeGroups:= int32(600) rootVolumeSizeNodeGroups1:= "480" rootVolumeTypeNodeGroups1:= "SATA" dataVolumeTypeNodeGroups1:= "SATA" dataVolumeCountNodeGroups1:= int32(1) dataVolumeSizeNodeGroups1:= int32(600) rootVolumeSizeNodeGroups2:= "480" rootVolumeTypeNodeGroups2:= "SATA" dataVolumeTypeNodeGroups2:= "SATA" dataVolumeCountNodeGroups2:= int32(0) dataVolumeSizeNodeGroups2:= int32(600) var listNodeGroupsbody = []model.NodeGroupV11{ { GroupName: "master_node_default_group", NodeNum: int32(2), NodeSize: "s3.xlarge.2.linux.bigdata", RootVolumeSize: &rootVolumeSizeNodeGroups, RootVolumeType: &rootVolumeTypeNodeGroups, DataVolumeType: &dataVolumeTypeNodeGroups, DataVolumeCount: &dataVolumeCountNodeGroups, DataVolumeSize: &dataVolumeSizeNodeGroups, }, { GroupName: "core_node_analysis_group", NodeNum: int32(3), NodeSize: "s3.xlarge.2.linux.bigdata", RootVolumeSize: &rootVolumeSizeNodeGroups1, RootVolumeType: &rootVolumeTypeNodeGroups1, DataVolumeType: &dataVolumeTypeNodeGroups1, DataVolumeCount: &dataVolumeCountNodeGroups1, DataVolumeSize: &dataVolumeSizeNodeGroups1, }, { GroupName: "task_node_analysis_group", NodeNum: int32(2), NodeSize: "s3.xlarge.2.linux.bigdata", RootVolumeSize: &rootVolumeSizeNodeGroups2, RootVolumeType: &rootVolumeTypeNodeGroups2, DataVolumeType: &dataVolumeTypeNodeGroups2, DataVolumeCount: &dataVolumeCountNodeGroups2, DataVolumeSize: &dataVolumeSizeNodeGroups2, AutoScalingPolicy: autoScalingPolicyNodeGroups, }, } var listTagsbody = []model.Tag{ { Key: "key1", Value: "value1", }, { Key: "key2", Value: "value2", }, } var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{ model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN, model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT, } var listNodesBootstrapScripts = []string{ "master", } var listActionStagesBootstrapScripts1 = []model.BootstrapScriptActionStages{ model.GetBootstrapScriptActionStagesEnum().BEFORE_COMPONENT_FIRST_START, model.GetBootstrapScriptActionStagesEnum().BEFORE_SCALE_IN, } var listNodesBootstrapScripts1 = []string{ "master", "core", "task", } parametersBootstrapScripts:= "param1 param2" activeMasterBootstrapScripts:= false beforeComponentStartBootstrapScripts:= true startTimeBootstrapScripts:= int64(1667892101) stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS parametersBootstrapScripts1:= "" activeMasterBootstrapScripts1:= true beforeComponentStartBootstrapScripts1:= false startTimeBootstrapScripts1:= int64(1667892101) stateBootstrapScripts1:= model.GetBootstrapScriptStateEnum().IN_PROGRESS var listBootstrapScriptsbody = []model.BootstrapScript{ { Name: "Modify os config", Uri: "s3a://XXX/modify_os_config.sh", Parameters: &parametersBootstrapScripts, Nodes: listNodesBootstrapScripts1, ActiveMaster: &activeMasterBootstrapScripts, FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE, BeforeComponentStart: &beforeComponentStartBootstrapScripts, StartTime: &startTimeBootstrapScripts, State: &stateBootstrapScripts, ActionStages: &listActionStagesBootstrapScripts1, }, { Name: "Install zepplin", Uri: "s3a://XXX/zeppelin_install.sh", Parameters: &parametersBootstrapScripts1, Nodes: listNodesBootstrapScripts, ActiveMaster: &activeMasterBootstrapScripts1, FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE, BeforeComponentStart: &beforeComponentStartBootstrapScripts1, StartTime: &startTimeBootstrapScripts1, State: &stateBootstrapScripts1, 
ActionStages: &listActionStagesBootstrapScripts, }, } jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar" argumentsAddJobs:= "wordcount" inputAddJobs:= "s3a://bigdata/input/wd_1k/" outputAddJobs:= "s3a://bigdata/ouput/" jobLogAddJobs:= "s3a://bigdata/log/" hiveScriptPathAddJobs:= "" hqlAddJobs:= "" shutdownClusterAddJobs:= true fileActionAddJobs:= "" var listAddJobsbody = []model.AddJobsReqV11{ { JobType: int32(1), JobName: "tenji111", JarPath: &jarPathAddJobs, Arguments: &argumentsAddJobs, Input: &inputAddJobs, Output: &outputAddJobs, JobLog: &jobLogAddJobs, HiveScriptPath: &hiveScriptPathAddJobs, Hql: &hqlAddJobs, ShutdownCluster: &shutdownClusterAddJobs, SubmitJobOnceClusterRun: true, FileAction: &fileActionAddJobs, }, } var listComponentListbody = []model.ComponentAmbV11{ { ComponentName: "Hadoop", }, { ComponentName: "Spark", }, { ComponentName: "HBase", }, { ComponentName: "Hive", }, { ComponentName: "Presto", }, { ComponentName: "Tez", }, { ComponentName: "Hue", }, { ComponentName: "Loader", }, { ComponentName: "Flink", }, } loginModeCreateClusterReqV11:= model.GetCreateClusterReqV11LoginModeEnum().E_1 enterpriseProjectIdCreateClusterReqV11:= "0" logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1 clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0 clusterMasterSecretCreateClusterReqV11:= "" clusterAdminSecretCreateClusterReqV11:= "" securityGroupsIdCreateClusterReqV11:= "4820eace-66ad-4f2c-8d46-cf340e3029dd" request.Body = &model.CreateClusterReqV11{ NodeGroups: &listNodeGroupsbody, LoginMode: &loginModeCreateClusterReqV11, Tags: &listTagsbody, EnterpriseProjectId: &enterpriseProjectIdCreateClusterReqV11, LogCollection: &logCollectionCreateClusterReqV11, ClusterType: &clusterTypeCreateClusterReqV11, SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0, ClusterMasterSecret: &clusterMasterSecretCreateClusterReqV11, ClusterAdminSecret: &clusterAdminSecretCreateClusterReqV11, BootstrapScripts: &listBootstrapScriptsbody, AddJobs: &listAddJobsbody, SecurityGroupsId: &securityGroupsIdCreateClusterReqV11, SubnetName: "subnet-4b44", SubnetId: "67984709-e15e-4e86-9886-d76712d4e00a", VpcId: "4a365717-67be-4f33-80c5-98e98a813af8", AvailableZoneId: "d573142f24894ef3bd3664de068b44b0", ComponentList: listComponentListbody, Vpc: "vpc-4b1c", DataCenter: "", BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12, ClusterName: "mrs_HEbK", ClusterVersion: "MRS 3.1.0", } response, err := client.CreateCluster(request) if err == nil { fmt.Printf("%+v\n", response) } else { fmt.Println(err) } }
-
Create a cluster with the cluster HA function enabled, without using the node_groups parameter group; the cluster version is MRS 3.1.0 (Go example).
package main import ( "fmt" "os" "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic" mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1" "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model" region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region" ) func main() { // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak := os.Getenv("CLOUD_SDK_AK") sk := os.Getenv("CLOUD_SDK_SK") projectId := "{project_id}" auth := basic.NewCredentialsBuilder(). WithAk(ak). WithSk(sk). WithProjectId(projectId). Build() client := mrs.NewMrsClient( mrs.MrsClientBuilder(). WithRegion(region.ValueOf("<YOUR REGION>")). WithCredential(auth). Build()) request := &model.CreateClusterRequest{} var listTagsbody = []model.Tag{ { Key: "key1", Value: "value1", }, { Key: "key2", Value: "value2", }, } var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{ model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN, model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT, } var listNodesBootstrapScripts = []string{ "master", } var listActionStagesBootstrapScripts1 = []model.BootstrapScriptActionStages{ model.GetBootstrapScriptActionStagesEnum().BEFORE_COMPONENT_FIRST_START, model.GetBootstrapScriptActionStagesEnum().BEFORE_SCALE_IN, } var listNodesBootstrapScripts1 = []string{ "master", "core", "task", } parametersBootstrapScripts:= "param1 param2" activeMasterBootstrapScripts:= false beforeComponentStartBootstrapScripts:= true startTimeBootstrapScripts:= int64(1667892101) stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS parametersBootstrapScripts1:= "" activeMasterBootstrapScripts1:= true beforeComponentStartBootstrapScripts1:= false startTimeBootstrapScripts1:= int64(1667892101) stateBootstrapScripts1:= model.GetBootstrapScriptStateEnum().IN_PROGRESS var listBootstrapScriptsbody = []model.BootstrapScript{ { Name: "Modify os config", Uri: "s3a://XXX/modify_os_config.sh", Parameters: &parametersBootstrapScripts, Nodes: listNodesBootstrapScripts1, ActiveMaster: &activeMasterBootstrapScripts, FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE, BeforeComponentStart: &beforeComponentStartBootstrapScripts, StartTime: &startTimeBootstrapScripts, State: &stateBootstrapScripts, ActionStages: &listActionStagesBootstrapScripts1, }, { Name: "Install zepplin", Uri: "s3a://XXX/zeppelin_install.sh", Parameters: &parametersBootstrapScripts1, Nodes: listNodesBootstrapScripts, ActiveMaster: &activeMasterBootstrapScripts1, FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE, BeforeComponentStart: &beforeComponentStartBootstrapScripts1, StartTime: &startTimeBootstrapScripts1, State: &stateBootstrapScripts1, ActionStages: &listActionStagesBootstrapScripts, }, } var listNodesExecScripts = []string{ "master", "core", "task", } var listNodesExecScripts1 = []string{ "master", "core", "task", } parametersExecScripts:= "${mrs_scale_node_num} ${mrs_scale_type} xxx" activeMasterExecScripts:= true parametersExecScripts1:= "${mrs_scale_node_hostnames} ${mrs_scale_node_ips}" activeMasterExecScripts1:= true var listExecScriptsAutoScalingPolicy = 
[]model.ScaleScript{ { Name: "before_scale_out", Uri: "s3a://XXX/zeppelin_install.sh", Parameters: &parametersExecScripts, Nodes: listNodesExecScripts1, ActiveMaster: &activeMasterExecScripts, FailAction: model.GetScaleScriptFailActionEnum().CONTINUE, ActionStage: model.GetScaleScriptActionStageEnum().BEFORE_SCALE_OUT, }, { Name: "after_scale_out", Uri: "s3a://XXX/storm_rebalance.sh", Parameters: &parametersExecScripts1, Nodes: listNodesExecScripts, ActiveMaster: &activeMasterExecScripts1, FailAction: model.GetScaleScriptFailActionEnum().CONTINUE, ActionStage: model.GetScaleScriptActionStageEnum().AFTER_SCALE_OUT, }, } comparisonOperatorTrigger:= "GT" triggerRules := &model.Trigger{ MetricName: "YARNMemoryAvailablePercentage", MetricValue: "70", ComparisonOperator: &comparisonOperatorTrigger, EvaluationPeriods: int32(10), } comparisonOperatorTrigger1:= "LT" triggerRules1 := &model.Trigger{ MetricName: "YARNMemoryAvailablePercentage", MetricValue: "25", ComparisonOperator: &comparisonOperatorTrigger1, EvaluationPeriods: int32(10), } var listRulesAutoScalingPolicy = []model.Rule{ { Name: "default-expand-1", AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_OUT, CoolDownMinutes: int32(5), ScalingAdjustment: int32(1), Trigger: triggerRules1, }, { Name: "default-shrink-1", AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_IN, CoolDownMinutes: int32(5), ScalingAdjustment: int32(1), Trigger: triggerRules, }, } var listResourcesPlansAutoScalingPolicy = []model.ResourcesPlan{ { PeriodType: "daily", StartTime: "9:50", EndTime: "10:20", MinCapacity: int32(2), MaxCapacity: int32(3), }, { PeriodType: "daily", StartTime: "10:20", EndTime: "12:30", MinCapacity: int32(0), MaxCapacity: int32(2), }, } autoScalingPolicyTaskNodeGroups := &model.AutoScalingPolicy{ AutoScalingEnable: true, MinCapacity: int32(1), MaxCapacity: int32(3), ResourcesPlans: &listResourcesPlansAutoScalingPolicy, Rules: &listRulesAutoScalingPolicy, ExecScripts: &listExecScriptsAutoScalingPolicy, } var listTaskNodeGroupsbody = []model.TaskNodeGroup{ { NodeNum: int32(2), NodeSize: "s3.xlarge.2.linux.bigdata", DataVolumeType: model.GetTaskNodeGroupDataVolumeTypeEnum().SATA, DataVolumeCount: int32(1), DataVolumeSize: int32(600), AutoScalingPolicy: autoScalingPolicyTaskNodeGroups, }, } jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar" argumentsAddJobs:= "wordcount" inputAddJobs:= "s3a://bigdata/input/wd_1k/" outputAddJobs:= "s3a://bigdata/ouput/" jobLogAddJobs:= "s3a://bigdata/log/" hiveScriptPathAddJobs:= "" hqlAddJobs:= "" shutdownClusterAddJobs:= true fileActionAddJobs:= "" var listAddJobsbody = []model.AddJobsReqV11{ { JobType: int32(1), JobName: "tenji111", JarPath: &jarPathAddJobs, Arguments: &argumentsAddJobs, Input: &inputAddJobs, Output: &outputAddJobs, JobLog: &jobLogAddJobs, HiveScriptPath: &hiveScriptPathAddJobs, Hql: &hqlAddJobs, ShutdownCluster: &shutdownClusterAddJobs, SubmitJobOnceClusterRun: true, FileAction: &fileActionAddJobs, }, } var listComponentListbody = []model.ComponentAmbV11{ { ComponentName: "Hadoop", }, { ComponentName: "Spark", }, { ComponentName: "HBase", }, { ComponentName: "Hive", }, } logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1 clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0 nodePublicCertNameCreateClusterReqV11:= "SSHkey-bba1" coreDataVolumeCountCreateClusterReqV11:= int32(2) coreDataVolumeSizeCreateClusterReqV11:= int32(600) coreDataVolumeTypeCreateClusterReqV11:= 
model.GetCreateClusterReqV11CoreDataVolumeTypeEnum().SATA masterDataVolumeCountCreateClusterReqV11:= model.GetCreateClusterReqV11MasterDataVolumeCountEnum().E_1 masterDataVolumeSizeCreateClusterReqV11:= int32(600) masterDataVolumeTypeCreateClusterReqV11:= model.GetCreateClusterReqV11MasterDataVolumeTypeEnum().SATA securityGroupsIdCreateClusterReqV11:= "845bece1-fd22-4b45-7a6e-14338c99ee43" coreNodeSizeCreateClusterReqV11:= "s1.xlarge.linux.bigdata" masterNodeSizeCreateClusterReqV11:= "s3.2xlarge.2.linux.bigdata" coreNodeNumCreateClusterReqV11:= int32(3) masterNodeNumCreateClusterReqV11:= int32(2) request.Body = &model.CreateClusterReqV11{ Tags: &listTagsbody, LogCollection: &logCollectionCreateClusterReqV11, ClusterType: &clusterTypeCreateClusterReqV11, SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0, NodePublicCertName: &nodePublicCertNameCreateClusterReqV11, BootstrapScripts: &listBootstrapScriptsbody, TaskNodeGroups: &listTaskNodeGroupsbody, CoreDataVolumeCount: &coreDataVolumeCountCreateClusterReqV11, CoreDataVolumeSize: &coreDataVolumeSizeCreateClusterReqV11, CoreDataVolumeType: &coreDataVolumeTypeCreateClusterReqV11, MasterDataVolumeCount: &masterDataVolumeCountCreateClusterReqV11, MasterDataVolumeSize: &masterDataVolumeSizeCreateClusterReqV11, MasterDataVolumeType: &masterDataVolumeTypeCreateClusterReqV11, AddJobs: &listAddJobsbody, SecurityGroupsId: &securityGroupsIdCreateClusterReqV11, SubnetName: "subnet", SubnetId: "815bece0-fd22-4b65-8a6e-15788c99ee43", VpcId: "5b7db34d-3534-4a6e-ac94-023cd36aaf74", AvailableZoneId: "d573142f24894ef3bd3664de068b44b0", ComponentList: listComponentListbody, CoreNodeSize: &coreNodeSizeCreateClusterReqV11, MasterNodeSize: &masterNodeSizeCreateClusterReqV11, Vpc: "vpc1", DataCenter: "", BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12, CoreNodeNum: &coreNodeNumCreateClusterReqV11, MasterNodeNum: &masterNodeNumCreateClusterReqV11, ClusterName: "newcluster", ClusterVersion: "MRS 3.1.0", } response, err := client.CreateCluster(request) if err == nil { fmt.Printf("%+v\n", response) } else { fmt.Println(err) } }
-
Use the node_groups parameter group to create a minimum-specification cluster with the cluster HA function disabled; the cluster version is MRS 3.1.0 (Go example).
package main import ( "fmt" "os" "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic" mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1" "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model" region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region" ) func main() { // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security. // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment ak := os.Getenv("CLOUD_SDK_AK") sk := os.Getenv("CLOUD_SDK_SK") projectId := "{project_id}" auth := basic.NewCredentialsBuilder(). WithAk(ak). WithSk(sk). WithProjectId(projectId). Build() client := mrs.NewMrsClient( mrs.MrsClientBuilder(). WithRegion(region.ValueOf("<YOUR REGION>")). WithCredential(auth). Build()) request := &model.CreateClusterRequest{} rootVolumeSizeNodeGroups:= "480" rootVolumeTypeNodeGroups:= "SATA" dataVolumeTypeNodeGroups:= "SATA" dataVolumeCountNodeGroups:= int32(1) dataVolumeSizeNodeGroups:= int32(600) rootVolumeSizeNodeGroups1:= "480" rootVolumeTypeNodeGroups1:= "SATA" dataVolumeTypeNodeGroups1:= "SATA" dataVolumeCountNodeGroups1:= int32(1) dataVolumeSizeNodeGroups1:= int32(600) var listNodeGroupsbody = []model.NodeGroupV11{ { GroupName: "master_node_default_group", NodeNum: int32(1), NodeSize: "s3.xlarge.2.linux.bigdata", RootVolumeSize: &rootVolumeSizeNodeGroups, RootVolumeType: &rootVolumeTypeNodeGroups, DataVolumeType: &dataVolumeTypeNodeGroups, DataVolumeCount: &dataVolumeCountNodeGroups, DataVolumeSize: &dataVolumeSizeNodeGroups, }, { GroupName: "core_node_analysis_group", NodeNum: int32(1), NodeSize: "s3.xlarge.2.linux.bigdata", RootVolumeSize: &rootVolumeSizeNodeGroups1, RootVolumeType: &rootVolumeTypeNodeGroups1, DataVolumeType: &dataVolumeTypeNodeGroups1, DataVolumeCount: &dataVolumeCountNodeGroups1, DataVolumeSize: &dataVolumeSizeNodeGroups1, }, } var listTagsbody = []model.Tag{ { Key: "key1", Value: "value1", }, { Key: "key2", Value: "value2", }, } var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{ model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN, model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT, } var listNodesBootstrapScripts = []string{ "master", } var listActionStagesBootstrapScripts1 = []model.BootstrapScriptActionStages{ model.GetBootstrapScriptActionStagesEnum().BEFORE_COMPONENT_FIRST_START, model.GetBootstrapScriptActionStagesEnum().BEFORE_SCALE_IN, } var listNodesBootstrapScripts1 = []string{ "master", "core", "task", } parametersBootstrapScripts:= "param1 param2" activeMasterBootstrapScripts:= false beforeComponentStartBootstrapScripts:= true startTimeBootstrapScripts:= int64(1667892101) stateBootstrapScripts:= model.GetBootstrapScriptStateEnum().IN_PROGRESS parametersBootstrapScripts1:= "" activeMasterBootstrapScripts1:= true beforeComponentStartBootstrapScripts1:= false startTimeBootstrapScripts1:= int64(1667892101) stateBootstrapScripts1:= model.GetBootstrapScriptStateEnum().IN_PROGRESS var listBootstrapScriptsbody = []model.BootstrapScript{ { Name: "Modify os config", Uri: "s3a://XXX/modify_os_config.sh", Parameters: &parametersBootstrapScripts, Nodes: listNodesBootstrapScripts1, ActiveMaster: &activeMasterBootstrapScripts, 
FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE, BeforeComponentStart: &beforeComponentStartBootstrapScripts, StartTime: &startTimeBootstrapScripts, State: &stateBootstrapScripts, ActionStages: &listActionStagesBootstrapScripts1, }, { Name: "Install zepplin", Uri: "s3a://XXX/zeppelin_install.sh", Parameters: &parametersBootstrapScripts1, Nodes: listNodesBootstrapScripts, ActiveMaster: &activeMasterBootstrapScripts1, FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE, BeforeComponentStart: &beforeComponentStartBootstrapScripts1, StartTime: &startTimeBootstrapScripts1, State: &stateBootstrapScripts1, ActionStages: &listActionStagesBootstrapScripts, }, } jarPathAddJobs:= "s3a://bigdata/program/hadoop-mapreduce-examples-2.7.2.jar" argumentsAddJobs:= "wordcount" inputAddJobs:= "s3a://bigdata/input/wd_1k/" outputAddJobs:= "s3a://bigdata/ouput/" jobLogAddJobs:= "s3a://bigdata/log/" hiveScriptPathAddJobs:= "" hqlAddJobs:= "" shutdownClusterAddJobs:= true fileActionAddJobs:= "" var listAddJobsbody = []model.AddJobsReqV11{ { JobType: int32(1), JobName: "tenji111", JarPath: &jarPathAddJobs, Arguments: &argumentsAddJobs, Input: &inputAddJobs, Output: &outputAddJobs, JobLog: &jobLogAddJobs, HiveScriptPath: &hiveScriptPathAddJobs, Hql: &hqlAddJobs, ShutdownCluster: &shutdownClusterAddJobs, SubmitJobOnceClusterRun: true, FileAction: &fileActionAddJobs, }, } var listComponentListbody = []model.ComponentAmbV11{ { ComponentName: "Hadoop", }, { ComponentName: "Spark", }, { ComponentName: "HBase", }, { ComponentName: "Hive", }, { ComponentName: "Presto", }, { ComponentName: "Tez", }, { ComponentName: "Hue", }, { ComponentName: "Loader", }, { ComponentName: "Flink", }, } loginModeCreateClusterReqV11:= model.GetCreateClusterReqV11LoginModeEnum().E_1 enterpriseProjectIdCreateClusterReqV11:= "0" logCollectionCreateClusterReqV11:= model.GetCreateClusterReqV11LogCollectionEnum().E_1 clusterTypeCreateClusterReqV11:= model.GetCreateClusterReqV11ClusterTypeEnum().E_0 clusterMasterSecretCreateClusterReqV11:= "" clusterAdminSecretCreateClusterReqV11:= "" securityGroupsIdCreateClusterReqV11:= "4820eace-66ad-4f2c-8d46-cf340e3029dd" request.Body = &model.CreateClusterReqV11{ NodeGroups: &listNodeGroupsbody, LoginMode: &loginModeCreateClusterReqV11, Tags: &listTagsbody, EnterpriseProjectId: &enterpriseProjectIdCreateClusterReqV11, LogCollection: &logCollectionCreateClusterReqV11, ClusterType: &clusterTypeCreateClusterReqV11, SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0, ClusterMasterSecret: &clusterMasterSecretCreateClusterReqV11, ClusterAdminSecret: &clusterAdminSecretCreateClusterReqV11, BootstrapScripts: &listBootstrapScriptsbody, AddJobs: &listAddJobsbody, SecurityGroupsId: &securityGroupsIdCreateClusterReqV11, SubnetName: "subnet-4b44", SubnetId: "67984709-e15e-4e86-9886-d76712d4e00a", VpcId: "4a365717-67be-4f33-80c5-98e98a813af8", AvailableZoneId: "d573142f24894ef3bd3664de068b44b0", ComponentList: listComponentListbody, Vpc: "vpc-4b1c", DataCenter: "", BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12, ClusterName: "mrs_HEbK", ClusterVersion: "MRS 3.1.0", } response, err := client.CreateCluster(request) if err == nil { fmt.Printf("%+v\n", response) } else { fmt.Println(err) } }
- Create a minimum-specification cluster with the cluster HA function disabled, without using the node_groups parameter group. The cluster version is MRS 3.1.0.
package main

import (
    "fmt"
    "os"

    "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1"
    "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
    region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/region"
)

func main() {
    // Hard-coding the AK/SK or storing them in plaintext is a serious security risk.
    // Store them in ciphertext in configuration files or environment variables and
    // decrypt them when needed. This example reads them from the environment; set
    // CLOUD_SDK_AK and CLOUD_SDK_SK locally before running it.
    ak := os.Getenv("CLOUD_SDK_AK")
    sk := os.Getenv("CLOUD_SDK_SK")
    projectId := "{project_id}"

    auth := basic.NewCredentialsBuilder().
        WithAk(ak).
        WithSk(sk).
        WithProjectId(projectId).
        Build()

    client := mrs.NewMrsClient(
        mrs.MrsClientBuilder().
            WithRegion(region.ValueOf("<YOUR REGION>")).
            WithCredential(auth).
            Build())

    request := &model.CreateClusterRequest{}

    // Cluster tags.
    var listTagsbody = []model.Tag{
        {
            Key: "key1",
            Value: "value1",
        },
        {
            Key: "key2",
            Value: "value2",
        },
    }

    // A single bootstrap script that runs on the master nodes.
    var listActionStagesBootstrapScripts = []model.BootstrapScriptActionStages{
        model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_IN,
        model.GetBootstrapScriptActionStagesEnum().AFTER_SCALE_OUT,
    }
    var listNodesBootstrapScripts = []string{
        "master",
    }
    parametersBootstrapScripts := ""
    activeMasterBootstrapScripts := false
    beforeComponentStartBootstrapScripts := false
    startTimeBootstrapScripts := int64(1667892101)
    stateBootstrapScripts := model.GetBootstrapScriptStateEnum().IN_PROGRESS
    var listBootstrapScriptsbody = []model.BootstrapScript{
        {
            Name: "Install zeppelin",
            Uri: "s3a://XXX/zeppelin_install.sh",
            Parameters: &parametersBootstrapScripts,
            Nodes: listNodesBootstrapScripts,
            ActiveMaster: &activeMasterBootstrapScripts,
            FailAction: model.GetBootstrapScriptFailActionEnum().CONTINUE,
            BeforeComponentStart: &beforeComponentStartBootstrapScripts,
            StartTime: &startTimeBootstrapScripts,
            State: &stateBootstrapScripts,
            ActionStages: &listActionStagesBootstrapScripts,
        },
    }

    // A MapReduce job submitted together with the cluster creation request.
    jarPathAddJobs := "s3a://bigdata/program/hadoop-mapreduce-examples-XXX.jar"
    argumentsAddJobs := "wordcount"
    inputAddJobs := "s3a://bigdata/input/wd_1k/"
    outputAddJobs := "s3a://bigdata/output/"
    jobLogAddJobs := "s3a://bigdata/log/"
    hiveScriptPathAddJobs := ""
    hqlAddJobs := ""
    shutdownClusterAddJobs := false
    fileActionAddJobs := ""
    var listAddJobsbody = []model.AddJobsReqV11{
        {
            JobType: int32(1),
            JobName: "tenji111",
            JarPath: &jarPathAddJobs,
            Arguments: &argumentsAddJobs,
            Input: &inputAddJobs,
            Output: &outputAddJobs,
            JobLog: &jobLogAddJobs,
            HiveScriptPath: &hiveScriptPathAddJobs,
            Hql: &hqlAddJobs,
            ShutdownCluster: &shutdownClusterAddJobs,
            SubmitJobOnceClusterRun: true,
            FileAction: &fileActionAddJobs,
        },
    }

    // Components to install in the cluster.
    var listComponentListbody = []model.ComponentAmbV11{
        {
            ComponentName: "Hadoop",
        },
        {
            ComponentName: "Spark",
        },
        {
            ComponentName: "HBase",
        },
        {
            ComponentName: "Hive",
        },
        {
            ComponentName: "Presto",
        },
        {
            ComponentName: "Tez",
        },
        {
            ComponentName: "Hue",
        },
        {
            ComponentName: "Loader",
        },
        {
            ComponentName: "Flink",
        },
    }

    // Key-pair login, with node counts and multi-disk parameters set directly
    // on the request body instead of through node_groups.
    loginModeCreateClusterReqV11 := model.GetCreateClusterReqV11LoginModeEnum().E_1
    enterpriseProjectIdCreateClusterReqV11 := "0"
    logCollectionCreateClusterReqV11 := model.GetCreateClusterReqV11LogCollectionEnum().E_1
    clusterTypeCreateClusterReqV11 := model.GetCreateClusterReqV11ClusterTypeEnum().E_0
    clusterAdminSecretCreateClusterReqV11 := "******"
    nodePublicCertNameCreateClusterReqV11 := "SSHkey-bba1"
    coreDataVolumeCountCreateClusterReqV11 := int32(1)
    coreDataVolumeSizeCreateClusterReqV11 := int32(600)
    coreDataVolumeTypeCreateClusterReqV11 := model.GetCreateClusterReqV11CoreDataVolumeTypeEnum().SATA
    masterDataVolumeCountCreateClusterReqV11 := model.GetCreateClusterReqV11MasterDataVolumeCountEnum().E_1
    masterDataVolumeSizeCreateClusterReqV11 := int32(600)
    masterDataVolumeTypeCreateClusterReqV11 := model.GetCreateClusterReqV11MasterDataVolumeTypeEnum().SATA
    securityGroupsIdCreateClusterReqV11 := ""
    coreNodeSizeCreateClusterReqV11 := "s1.xlarge.linux.bigdata"
    masterNodeSizeCreateClusterReqV11 := "s3.2xlarge.2.linux.bigdata"
    coreNodeNumCreateClusterReqV11 := int32(1)
    masterNodeNumCreateClusterReqV11 := int32(1)
    request.Body = &model.CreateClusterReqV11{
        LoginMode: &loginModeCreateClusterReqV11,
        Tags: &listTagsbody,
        EnterpriseProjectId: &enterpriseProjectIdCreateClusterReqV11,
        LogCollection: &logCollectionCreateClusterReqV11,
        ClusterType: &clusterTypeCreateClusterReqV11,
        SafeMode: model.GetCreateClusterReqV11SafeModeEnum().E_0,
        ClusterAdminSecret: &clusterAdminSecretCreateClusterReqV11,
        NodePublicCertName: &nodePublicCertNameCreateClusterReqV11,
        BootstrapScripts: &listBootstrapScriptsbody,
        CoreDataVolumeCount: &coreDataVolumeCountCreateClusterReqV11,
        CoreDataVolumeSize: &coreDataVolumeSizeCreateClusterReqV11,
        CoreDataVolumeType: &coreDataVolumeTypeCreateClusterReqV11,
        MasterDataVolumeCount: &masterDataVolumeCountCreateClusterReqV11,
        MasterDataVolumeSize: &masterDataVolumeSizeCreateClusterReqV11,
        MasterDataVolumeType: &masterDataVolumeTypeCreateClusterReqV11,
        AddJobs: &listAddJobsbody,
        SecurityGroupsId: &securityGroupsIdCreateClusterReqV11,
        SubnetName: "subnet",
        SubnetId: "815bece0-fd22-4b65-8a6e-15788c99ee43",
        VpcId: "5b7db34d-3534-4a6e-ac94-023cd36aaf74",
        AvailableZoneId: "d573142f24894ef3bd3664de068b44b0",
        ComponentList: listComponentListbody,
        CoreNodeSize: &coreNodeSizeCreateClusterReqV11,
        MasterNodeSize: &masterNodeSizeCreateClusterReqV11,
        Vpc: "vpc1",
        DataCenter: "",
        BillingType: model.GetCreateClusterReqV11BillingTypeEnum().E_12,
        CoreNodeNum: &coreNodeNumCreateClusterReqV11,
        MasterNodeNum: &masterNodeNumCreateClusterReqV11,
        ClusterName: "newcluster",
        ClusterVersion: "MRS 3.1.0",
    }

    // Send the request and print the response or the error.
    response, err := client.CreateCluster(request)
    if err == nil {
        fmt.Printf("%+v\n", response)
    } else {
        fmt.Println(err)
    }
}
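Both examples simply print the error on failure. If you want to distinguish service-side rejections (for example, invalid parameters) from client-side failures, you can inspect the typed error returned by the SDK. The sketch below is a minimal, non-authoritative example; it assumes the ServiceResponseError type and its StatusCode, ErrorCode, ErrorMessage, and RequestId fields from the SDK's core/sdkerr package, so verify the names against the SDK version you use.

package main

import (
    "fmt"

    "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/sdkerr"
)

// printCreateClusterError is a minimal sketch: it checks whether the error
// returned by client.CreateCluster is the SDK's typed service error and, if
// so, prints the details the service reported. The type assertion assumes the
// SDK returns a *sdkerr.ServiceResponseError for non-2xx responses.
func printCreateClusterError(err error) {
    if svcErr, ok := err.(*sdkerr.ServiceResponseError); ok {
        fmt.Printf("status: %d, error code: %s, message: %s, request id: %s\n",
            svcErr.StatusCode, svcErr.ErrorCode, svcErr.ErrorMessage, svcErr.RequestId)
        return
    }
    // Anything else is a client-side or network error.
    fmt.Println(err)
}

func main() {
    // Usage: replace the else branch of the examples above with
    // printCreateClusterError(err). Demonstrated here with a plain error.
    printCreateClusterError(fmt.Errorf("example: request failed"))
}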
For SDK sample code in more programming languages, see the Sample Code tab in API Explorer, which can automatically generate the corresponding SDK sample code.
Status Codes

| Status Code | Description |
| --- | --- |
| 200 | The cluster has been created successfully. |
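As a rough sketch of how this table maps onto the Go SDK: the generated response models embed the HTTP status code, so a 200 can be checked directly on the CreateClusterResponse. The HttpStatusCode field name below is an assumption based on how huaweicloud-sdk-go-v3 generates its response structs; verify it against your SDK version.

package main

import (
    "fmt"

    "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v1/model"
)

// checkCreateClusterStatus is a minimal sketch mapping the status-code table
// above onto the SDK response. HttpStatusCode is assumed to be the field that
// huaweicloud-sdk-go-v3 generates into each response model.
func checkCreateClusterStatus(resp *model.CreateClusterResponse) {
    if resp != nil && resp.HttpStatusCode == 200 {
        fmt.Println("200: cluster created successfully")
        return
    }
    fmt.Println("cluster creation did not return 200; check the error code")
}

func main() {
    // Usage: pass the response returned by client.CreateCluster(request).
    // A nil response exercises the guard branch here.
    checkCreateClusterStatus(nil)
}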
Error Codes

See Error Codes.