更新时间:2024-12-12 GMT+08:00

创建集群

功能介绍

创建一个MRS集群。使用接口前,您需要先获取如下资源信息。

  • 通过VPC创建或查询VPC、子网

  • 通过ECS创建或查询密钥对

  • 通过终端节点获取区域信息

  • 参考MRS服务支持的组件获取MRS版本及对应版本支持的组件信息

接口约束

调用方法

请参见如何调用API

URI

POST /v2/{project_id}/clusters

表1 路径参数

参数

是否必选

参数类型

描述

project_id

是

String

参数解释:

项目编号。获取方法,请参见获取项目ID

约束限制:

不涉及

取值范围:

只能由英文字母和数字组成,且长度为[1-64]个字符。

默认取值:

不涉及

请求参数

表2 请求Body参数

参数

是否必选

参数类型

描述

is_dec_project

Boolean

参数解释:

说明是否为专属云的资源。

约束限制:

不涉及

取值范围:

  • true:是专属云的资源。

  • false:不是专属云的资源。

默认取值:

false

cluster_version

String

参数解释:

集群版本。例如:MRS 3.1.0。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

cluster_name

String

参数解释:

集群名称。

约束限制:

不涉及

取值范围:

  • 集群名称不允许与已有集群名称相同。

  • 只能由英文字母、数字以及“_”和“-”组成,且长度为[1-64]个字符。

默认取值:

不涉及

cluster_type

String

参数解释:

集群类型。

约束限制:

不涉及

取值范围:

  • ANALYSIS:分析集群

  • STREAMING:流式集群

  • MIXED:混合集群

  • CUSTOM:自定义集群,仅MRS 3.x版本支持。

默认取值:

不涉及

charge_info

ChargeInfo object

参数解释:

计费类型信息。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

region

String

参数解释:

集群所在区域信息,请参见终端节点

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

vpc_name

String

参数解释:

子网所在VPC名称。通过VPC管理控制台获取名称:

  1. 登录VPC管理控制台。

  2. 单击“虚拟私有云”,从左侧列表选择虚拟私有云。

在“虚拟私有云”页面的列表中即可获取VPC名称。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

subnet_id

String

参数解释:

子网ID。通过VPC管理控制台获取子网ID:

  1. 登录VPC管理控制台。

  2. 单击“虚拟私有云”,从左侧列表选择虚拟私有云。

  3. 单击对应虚拟私有云所在行的“子网个数”查看子网。

  4. 单击对应子网名称,获取“网络ID”。

约束限制:

“subnet_id”和“subnet_name”必须至少填写一个,当这两个参数同时配置但是不匹配同一个子网时,集群会创建失败,请仔细填写参数。推荐使用“subnet_id”。

取值范围:

不涉及

默认取值:

不涉及

subnet_name

String

参数解释:

子网名称。通过VPC管理控制台获取子网名称:

  1. 登录管理控制台。

  2. 单击“虚拟私有云”,从左侧列表选择虚拟私有云。

  3. 单击对应虚拟私有云所在行的“子网个数”查看子网,获取子网名称。

约束限制:

“subnet_id”和“subnet_name”必须至少填写一个,当这两个参数同时配置但是不匹配同一个子网时,集群会创建失败,请仔细填写参数。当仅填写“subnet_name”且VPC下存在同名子网时,创建集群时以VPC平台查询到的第一个同名子网为准。推荐使用“subnet_id”。

取值范围:

不涉及

默认取值:

不涉及

components

String

参数解释:

组件名称列表,用逗号分隔。支持的组件请参见获取MRS集群信息页面的“MRS服务支持的组件”内容。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

external_datasources

Array of ClusterDataConnectorMap objects

参数解释:

部署Hive和Ranger等组件时,可以关联数据连接,将元数据存储于关联的数据库。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

availability_zone

String

参数解释:

可用分区名称,不支持多AZ集群。可用分区信息请参见终端节点

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

security_groups_id

String

参数解释:

集群安全组的ID。

  • 当该ID为空时MRS后台会自动创建安全组,自动创建的安全组名称以mrs_{cluster_name}开头。

  • 当该ID不为空时,表示使用固定安全组来创建集群,传入的ID必须是当前租户中包含的安全组ID。

  • 支持多个安全组ID,以逗号分隔。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

auto_create_default_security_group

Boolean

参数解释:

是否要创建MRS集群默认安全组。

约束限制:

当该参数指定为true时,无论“security_groups_id”参数是否指定,都会为集群创建默认安全组。

取值范围:

  • true:创建MRS集群默认安全组。

  • false:不创建MRS集群默认安全组。

默认取值:

false

safe_mode

String

参数解释:

MRS集群运行模式。

约束限制:

不涉及

取值范围:

  • SIMPLE:普通集群,表示Kerberos认证关闭,用户可使用集群提供的所有功能。

  • KERBEROS:安全集群,表示Kerberos认证开启,普通用户无权限使用MRS集群的“文件管理”和“作业管理”功能,并且无法查看Hadoop、Spark的作业记录以及集群资源使用情况。如果需要使用集群更多功能,需要找Manager的管理员分配权限。

默认取值:

不涉及

manager_admin_password

String

参数解释:

配置Manager管理员用户的密码。

约束限制:

不涉及

取值范围:

  • 密码长度应在8-26个字符之间。

  • 至少包含四种字符组合,如大写字母,小写字母,数字,特殊字符(!@$%^-_=+[{}]:,./?),但不能包含空格。

  • 不能与用户名或者倒序用户名相同。

默认取值:

不涉及

login_mode

String

参数解释:

节点登录方式。

约束限制:

不涉及

取值范围:

  • PASSWORD:密码登录,选择此项时,node_root_password不能为空。

  • KEYPAIR:密钥对登录,选择此项时,node_keypair_name不能为空。

默认取值:

不涉及

node_root_password

String

参数解释:

配置访问集群节点的root密码。

约束限制:

不涉及

取值范围:

密码设置约束如下:

  • 字符串类型,可输入的字符串长度为8-26。

  • 至少包含四种字符组合,如大写字母,小写字母,数字,特殊字符(!@$%^-_=+[{}]:,./?),但不能包含空格。

  • 不能与用户名或者倒序用户名相同。

默认取值:

不涉及

node_keypair_name

String

参数解释:

密钥对名称。用户可以使用密钥对方式登录集群节点。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

enterprise_project_id

String

参数解释:

企业项目ID。创建集群时,给集群绑定企业项目ID。获取方式请参见《企业管理API参考》中“查询企业项目列表”接口响应消息的“enterprise_project_list字段数据结构说明”表中的“id”。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

默认设置为0,表示为default企业项目。

eip_address

String

参数解释:

与MRS集群绑定的弹性公网IP,可实现使用弹性公网IP访问Manager的目的。该弹性公网IP必须已经创建且与集群在同一区域。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

eip_id

String

参数解释:

当“eip_address”配置时,该参数必须配置,用于表示绑定的弹性公网IP的ID。可通过在VPC服务的“网络控制台 > 弹性公网IP和带宽 > 弹性公网IP”页面单击待绑定的弹性公网IP,在基本信息中获取“ID”。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

mrs_ecs_default_agency

String

参数解释:

集群节点默认绑定的委托名称,固定为MRS_ECS_DEFAULT_AGENCY。通过绑定委托,您可以将部分资源共享给ECS或BMS云服务来管理,例如通过配置ECS委托可自动获取AK/SK访问OBS。MRS_ECS_DEFAULT_AGENCY委托拥有对象存储服务的OBS OperateAccess权限,以及集群所在区域的CES FullAccess(针对开启细粒度策略的用户)、CES Administrator和KMS Administrator权限。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

template_id

String

参数解释:

当集群类型为CUSTOM时,用于指定节点部署所使用的模板。

  • mgmt_control_combined_v2:管控合设模板,管理角色和控制角色共同部署在Master节点中,数据实例合设在同一节点组。该部署方式适用于100个以下的节点,可以减少成本。

  • mgmt_control_separated_v2:管控分设模板,管理角色和控制角色分别部署在不同的Master节点中,数据实例合设在同一节点组。该部署方式适用于100-500个节点,在高并发负载情况下表现更好。

  • mgmt_control_data_separated_v2:数据分设模板,管理角色和控制角色分别部署在不同的Master节点中,数据实例分设在不同节点组。该部署方式适用于500个以上的节点,可以将各组件进一步分开部署,适用于更大的集群规模。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

tags

Array of Tag objects

参数解释:

集群的标签信息。

约束限制:

同一个集群最多能使用10个tag,tag的名称(key)不能重复。

取值范围:

不涉及

默认取值:

不涉及

log_collection

Integer

参数解释:

集群创建失败时,是否收集失败日志。

约束限制:

不涉及

取值范围:

  • 0:不收集。集群创建失败时,不创建用于收集失败日志的OBS桶。

  • 1:收集。创建仅用于MRS集群创建失败时日志收集的OBS桶。

默认取值:

1

node_groups

Array of NodeGroupV2 objects

参数解释:

组成集群的节点组信息。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

bootstrap_scripts

Array of BootstrapScript objects

参数解释:

配置引导操作脚本信息。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

add_jobs

Array of AddJobsReqV11 objects

参数解释:

创建集群时可同时提交作业。当前仅MRS 1.8.7之前的版本支持,且暂时只支持新增一个作业。建议使用创建集群并提交作业接口RunJobFlow的steps参数。

约束限制:

不能超过1条。

取值范围:

不涉及

默认取值:

不涉及

log_uri

String

参数解释:

集群日志转储至OBS的具体路径。开启日志转储功能后,日志上传需要对应OBS路径的读写权限,请配置MRS_ECS_DEFAULT_AGENCY默认委托或具有对应OBS路径读写权限的自定义委托。具体请参见配置存算分离集群(委托方式)。该参数只适用于支持“集群日志转储OBS”特性的集群版本。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

component_configs

Array of ComponentConfig objects

参数解释:

集群组件自定义配置。该参数只适用于支持“自定义组件配置创建集群”特性的集群版本。

约束限制:

不能超过50条。

取值范围:

不涉及

默认取值:

不涉及

smn_notify

SmnNotify object

参数解释:

SMN告警订阅。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及
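
结合表2,下面给出一个仅包含常用参数的请求体片段示意,便于把握整体结构。其中集群名称、规格、子网ID等均为假设值,仅供参考,完整可用的示例请参见下文“请求示例”:

    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_demo01",
      "cluster_type" : "ANALYSIS",
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-demo",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "components" : "Hadoop,Spark2x,Hive",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "c3.4xlarge.2.linux.bigdata",
        "root_volume" : { "type" : "SAS", "size" : 480 },
        "data_volume" : { "type" : "SAS", "size" : 600 },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "c3.4xlarge.2.linux.bigdata",
        "root_volume" : { "type" : "SAS", "size" : 480 },
        "data_volume" : { "type" : "SAS", "size" : 600 },
        "data_volume_count" : 1
      } ]
    }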

表3 ClusterDataConnectorMap

参数

是否必选

参数类型

描述

map_id

Integer

参数解释:

数据连接关联ID值。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

connector_id

String

参数解释:

数据连接ID值。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

component_name

String

参数解释:

组件名。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

role_type

String

参数解释:

组件角色类型。

约束限制:

不涉及

取值范围:

  • hive_metastore:Hive Metastore角色

  • hive_data:Hive角色

  • hbase_data:HBase角色

  • ranger_data:Ranger角色

默认取值:

不涉及

source_type

String

参数解释:

数据连接类型。

约束限制:

不涉及

取值范围:

  • LOCAL_DB:本地元数据

  • RDS_POSTGRES:RDS服务PostgreSQL数据库

  • RDS_MYSQL:RDS服务MySQL数据库

  • gaussdb-mysql:云数据库GaussDB(for MySQL)

默认取值:

不涉及

cluster_id

String

参数解释:

关联集群ID。如果指定集群ID,则获取该集群做过补丁更新的最新版本元数据。获取方法,请参见获取集群ID

约束限制:

不涉及

取值范围:

只能由英文字母、数字以及“_”和“-”组成,且长度为[1-64]个字符。

默认取值:

不涉及

status

Integer

参数解释:

数据连接状态。

约束限制:

不涉及

取值范围:

  • 0:代表正常状态

  • 1:代表使用中

默认取值:

不涉及
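
例如,创建集群时将Hive元数据存放到已关联的RDS MySQL数据连接,可在请求体中按如下方式配置external_datasources。其中connector_id为假设的占位值,需替换为实际的数据连接ID:

    "external_datasources" : [ {
      "component_name" : "Hive",
      "role_type" : "hive_metastore",
      "source_type" : "RDS_MYSQL",
      "connector_id" : "your-connector-id"
    } ]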

表4 Tag

参数

是否必选

参数类型

描述

key

String

参数解释:

标签的键。

约束限制:

不涉及

取值范围:

  • 标签的key值可以包含任意语种字母、数字、空格和_.:=+-@,但首尾不能含有空格,不能以_sys_开头。

  • 同一资源的key值不能重复。

  • 最大长度128个unicode字符,不能为空字符串。

默认取值:

不涉及

value

String

参数解释:

标签的值。

约束限制:

不涉及

取值范围:

  • 标签的value值可以包含任意语种字母、数字、空格和_.:=+-@,但首尾不能含有空格,不能以_sys_开头。

  • 最大长度255个unicode字符,可以为空字符串。

默认取值:

不涉及

表5 NodeGroupV2

参数

是否必选

参数类型

描述

group_name

String

参数解释:

节点组名称。

约束限制:

不涉及

取值范围:

只能由英文字母、数字以及“_”组成,且长度为[1-64]个字符。

节点组配置原则如下:

  • master_node_default_group:Master节点组,所有集群类型均需包含该节点组。

  • core_node_analysis_group:分析Core节点组,分析集群、混合集群均需包含该节点组。

  • core_node_streaming_group:流式Core节点组,流式集群和混合集群均需包含该节点组。

  • task_node_analysis_group:分析Task节点组,分析集群和混合集群可根据需要选择该节点组。

  • task_node_streaming_group:流式Task节点组,流式集群、混合集群可根据需要选择该节点组。

  • node_group{x}:自定义集群节点组,可根据需要添加多个,最多支持添加9个该节点组。

默认取值:

不涉及

node_num

Integer

参数解释:

节点数量。

约束限制:

Core与Task节点总数最大为500个。

取值范围:

0-500

默认取值:

不涉及

node_size

String

参数解释:

节点的实例规格,例如:{ECS_FLAVOR_NAME}.linux.bigdata,{ECS_FLAVOR_NAME}可以为c3.4xlarge.2等在MRS购买页可见的云服务器规格。实例规格详细说明请参见MRS所使用的弹性云服务器规格和MRS所使用的裸金属服务器规格。该参数建议从MRS控制台的集群创建页面获取对应区域对应版本所支持的规格。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

root_volume

Volume object

参数解释:

节点系统盘信息,部分虚拟机或BMS自带系统盘的情况该参数可选,其他情况该参数必选。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

data_volume

Volume object

参数解释:

节点数据盘信息。

约束限制:

当data_volume_count不为0时,该参数必选。

取值范围:

不涉及

默认取值:

不涉及

data_volume_count

Integer

参数解释:

节点数据磁盘存储数目。

约束限制:

不涉及

取值范围:

0-20

默认取值:

不涉及

charge_info

ChargeInfo object

参数解释:

节点组的计费类型。Master和Core节点组的计费类型与集群一致,Task节点组的计费类型可以与集群不同。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

auto_scaling_policy

AutoScalingPolicy object

参数解释:

弹性伸缩规则信息。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

assigned_roles

Array of strings

参数解释:

当集群类型为CUSTOM时,该参数必选。可以指定节点组中部署的角色,该参数是一个字符串数组,每个字符串表示一个角色表达式。

角色表达式定义:

  • 当该角色在节点组所有节点部署时: {role name},如“DataNode”。

  • 当该角色在节点组指定下标节点部署时:{role name}:{index1},{index2}…,{indexN},如“NameNode:1,2”,下标从1开始计数。

可选的角色请参见MRS支持的角色与组件对应表,配置示意可参见本表之后的示例片段。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及
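
例如,按上述角色表达式,在一个3节点的自定义集群节点组中部署角色时,assigned_roles可写成如下示意。其中的角色组合仅为假设,实际可选角色请以MRS支持的角色与组件对应表为准:

    "assigned_roles" : [ "DataNode", "NodeManager", "NameNode:1,2", "quorumpeer:1,2,3" ]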

表6 Volume

参数

是否必选

参数类型

描述

type

String

参数解释:

磁盘类型。

约束限制:

不涉及

取值范围:

  • SATA:普通IO磁盘类型。

  • SAS:高IO磁盘类型。

  • SSD:超高IO磁盘类型。

  • GPSSD:通用型SSD磁盘类型。

默认取值:

不涉及

size

Integer

参数解释:

磁盘大小,容量单位为GB。

约束限制:

不涉及

取值范围:

10-32768

默认取值:

不涉及

表7 ChargeInfo

参数

是否必选

参数类型

描述

charge_mode

String

参数解释:

计费模式。

约束限制:

不涉及

取值范围:

  • prePaid:预付费,即包年/包月。(创建集群接口现已支持预付费,创建集群并提交作业接口暂不支持预付费。)

  • postPaid:后付费,即按需计费。

默认取值:

不涉及

period_type

String

参数解释:

周期类型。

约束限制:

不涉及

取值范围:

  • month:包月。

  • year:包年。

  • day:按需计费。

默认取值:

不涉及

period_num

Integer

参数解释:

周期数。

约束限制:

“charge_mode”为“prePaid”时生效,且为必选值,指定订购的时间。

取值范围:

  • 当“period_type”为“month”时,取值为1-9。

  • 当“period_type”为“year”时,取值为1-3。

默认取值:

不涉及

is_auto_pay

Boolean

参数解释:

是否自动支付。仅包周期模式下使用,表示下单订购后是否自动从客户的账户中支付,而无需客户手动支付。

约束限制:

不涉及

取值范围:

  • true:自动支付,系统会自动选择折扣和优惠券进行优惠,然后自动从客户账户中支付;自动支付失败时订单仍会创建成功,但订单状态为“待支付”,需等待客户手动支付。

  • false:手动支付,需要客户手动去支付,客户可以选择折扣和优惠券。

默认取值:

false
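
例如,创建一个包月3个月、自动支付的集群时,charge_info可按如下方式配置(周期取值仅为示意):

    "charge_info" : {
      "charge_mode" : "prePaid",
      "period_type" : "month",
      "period_num" : 3,
      "is_auto_pay" : true
    }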

表8 AutoScalingPolicy

参数

是否必选

参数类型

描述

auto_scaling_enable

Boolean

参数解释:

当前自动伸缩规则是否开启。

约束限制:

不涉及

取值范围:

  • true:开启自动伸缩规则

  • false:不开启自动伸缩规则

默认取值:

不涉及

min_capacity

Integer

参数解释:

指定该节点组的最小保留节点数。

约束限制:

不涉及

取值范围:

0-500

默认取值:

不涉及

max_capacity

Integer

参数解释:

指定该节点组的最大节点数。

约束限制:

不涉及

取值范围:

0-500

默认取值:

不涉及

resources_plans

Array of ResourcesPlan objects

参数解释:

资源计划列表。若该参数为空表示不启用资源计划。

约束限制:

当启用弹性伸缩时,资源计划与自动伸缩规则需至少配置其中一种。不能超过5条。

取值范围:

不涉及

默认取值:

不涉及

rules

Array of Rule objects

参数解释:

自动伸缩的规则列表。

约束限制:

当启用弹性伸缩时,资源计划与自动伸缩规则需至少配置其中一种。不能超过10条。

取值范围:

不涉及

默认取值:

不涉及

exec_scripts

Array of ScaleScript objects

参数解释:

弹性伸缩自定义自动化脚本列表。若该参数为空表示不启用自动化脚本。在V2弹性伸缩策略创建和更新接口中暂时不支持该字段。

约束限制:

不能超过10条。

取值范围:

不涉及

默认取值:

不涉及

表9 ResourcesPlan

参数

是否必选

参数类型

描述

period_type

String

参数解释:

资源计划的周期类型,当前只允许以下类型:daily。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

start_time

String

参数解释:

资源计划的起始时间,格式为“hour:minute”,表示时间在0:00-23:59之间。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

end_time

String

参数解释:

资源计划的结束时间,格式与“start_time”相同。

约束限制:

不早于start_time表示的时间,且与start_time间隔不小于30min。

取值范围:

不涉及

默认取值:

不涉及

min_capacity

Integer

参数解释:

资源计划内该节点组的最小保留节点数。

约束限制:

不涉及

取值范围:

0-500

默认取值:

不涉及

max_capacity

Integer

参数解释:

资源计划内该节点组的最大保留节点数。

约束限制:

不涉及

取值范围:

0-500

默认取值:

不涉及

effective_days

Array of strings

参数解释:

资源计划的生效日期,为空时代表每日,另外也可为以下返回值:

MONDAY(周一)、TUESDAY(周二)、WEDNESDAY(周三)、THURSDAY(周四)、FRIDAY(周五)、SATURDAY(周六)、SUNDAY(周日)

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及
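
例如,希望工作日9:00-18:00期间该节点组至少保留5个节点、最多保留8个节点,可按如下方式配置资源计划(节点数取值仅为示意):

    "resources_plans" : [ {
      "period_type" : "daily",
      "start_time" : "09:00",
      "end_time" : "18:00",
      "min_capacity" : 5,
      "max_capacity" : 8,
      "effective_days" : [ "MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY" ]
    } ]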

表10 Rule

参数

是否必选

参数类型

描述

name

String

参数解释:

弹性伸缩规则的名称。

约束限制:

不涉及

取值范围:

只能由英文字母、数字以及“_”和“-”组成,且长度为[1-64]个字符。

在一个节点组范围内,不允许重名。

默认取值:

不涉及

description

String

参数解释:

弹性伸缩规则的说明。

约束限制:

不涉及

取值范围:

长度为[0-1024]个字符。

默认取值:

不涉及

adjustment_type

String

参数解释:

弹性伸缩规则的调整类型。

约束限制:

不涉及

取值范围:

  • scale_out:扩容

  • scale_in:缩容

默认取值:

不涉及

cool_down_minutes

Integer

参数解释:

触发弹性伸缩规则后,该集群处于冷却状态(不再执行弹性伸缩操作)的时长,单位为分钟。

约束限制:

不涉及

取值范围:

0-10080。10080为一周的分钟数。

默认取值:

不涉及

scaling_adjustment

Integer

参数解释:

单次调整集群节点的个数。

约束限制:

不涉及

取值范围:

1-100

默认取值:

不涉及

trigger

Trigger object

参数解释:

描述该规则触发条件。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

表11 Trigger

参数

是否必选

参数类型

描述

metric_name

String

参数解释:

指标名称。该触发条件会依据该名称对应指标的值来进行判断。详细指标名称内容请参见"弹性伸缩指标列表"

约束限制:

不涉及

取值范围:

取值范围请参见"弹性伸缩指标列表"

默认取值:

不涉及

metric_value

String

参数解释:

指标阈值,即触发该条件时与指标值比较的临界值。

约束限制:

不涉及

取值范围:

只允许输入整数或者带两位小数的数。

默认取值:

不涉及

comparison_operator

String

参数解释:

指标判断逻辑运算符。

约束限制:

不涉及

取值范围:

  • LT:小于

  • GT:大于

  • LTOE:小于等于

  • GTOE:大于等于

默认取值:

不涉及

evaluation_periods

Integer

参数解释:

判断连续满足指标阈值的周期数(一个周期为5分钟)。

约束限制:

不涉及

取值范围:

1-288

默认取值:

不涉及
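
结合表10和表11,一条“YARN运行中的任务数连续1个周期(5分钟)大于等于75时扩容1个节点,并冷却20分钟”的伸缩规则可表示如下(指标名称与阈值仅为示意):

    "rules" : [ {
      "name" : "default-expand-1",
      "description" : "",
      "adjustment_type" : "scale_out",
      "cool_down_minutes" : 20,
      "scaling_adjustment" : 1,
      "trigger" : {
        "metric_name" : "YARNAppRunning",
        "metric_value" : "75",
        "comparison_operator" : "GTOE",
        "evaluation_periods" : 1
      }
    } ]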

表12 ScaleScript

参数

是否必选

参数类型

描述

name

String

参数解释:

弹性伸缩自定义自动化脚本的名称。

约束限制:

不涉及

取值范围:

同一个集群的自定义自动化脚本名称不允许相同。

只能由英文字母、数字、空格以及“_”和“-”组成,不能以空格开头,且长度为[1-64]个字符。

默认取值:

不涉及

uri

String

参数解释:

自定义自动化脚本的路径。设置为OBS桶的路径或虚拟机本地的路径。

  • OBS桶的路径:直接手动输入脚本路径。示例:obs://XXX/scale.sh

  • 虚拟机本地的路径:用户需要输入正确的脚本路径。脚本所在的路径必须以‘/’开头,以.sh结尾。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

parameters

String

参数解释:

自定义自动化脚本参数。多个参数间用空格隔开。

可以传入以下系统预定义参数:

  • ${mrs_scale_node_num}:扩缩容节点数

  • ${mrs_scale_type}:扩缩容类型,扩容为scale_out,缩容为scale_in

  • ${mrs_scale_node_hostnames}:扩缩容的节点主机名称

  • ${mrs_scale_node_ips}:扩缩容的节点IP

  • ${mrs_scale_rule_name}:触发扩缩容的规则名

其他用户自定义参数使用方式与普通shell脚本相同,多个参数中间用空格隔开。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

nodes

Array of strings

参数解释:

自定义自动化脚本所执行的节点组名称。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

active_master

Boolean

参数解释:

自定义自动化脚本是否只运行在主Master节点上。

约束限制:

不涉及

取值范围:

  • true:自定义自动化脚本只运行在主Master节点上。

  • false:自定义自动化脚本可运行在所有Master节点上。

默认取值:

false

fail_action

String

参数解释:

自定义自动化脚本执行失败后,是否继续执行后续脚本和创建集群。建议您在调试阶段设置为“continue”,这样无论此自定义自动化脚本是否执行成功,集群都能继续安装和启动。

约束限制:

由于缩容成功无法回滚,因此缩容后执行的脚本“fail_action”必须设置为“continue”。

取值范围:

  • continue:继续执行后续脚本。

  • errorout:终止操作。

默认取值:

continue

action_stage

String

参数解释:

脚本执行时机。

约束限制:

不涉及

取值范围:

  • before_scale_out:扩容前

  • before_scale_in:缩容前

  • after_scale_out:扩容后

  • after_scale_in:缩容后

默认取值:

不涉及
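
例如,一个在扩容后于指定Task节点组执行、并通过系统预定义参数接收扩缩容信息的自动化脚本,可按如下方式配置。其中脚本路径与节点组名称均为假设值:

    "exec_scripts" : [ {
      "name" : "notify_after_scale_out",
      "uri" : "obs://your-bucket/scripts/scale.sh",
      "parameters" : "${mrs_scale_type} ${mrs_scale_node_num}",
      "nodes" : [ "task_node_analysis_group" ],
      "active_master" : false,
      "action_stage" : "after_scale_out",
      "fail_action" : "continue"
    } ]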

表13 BootstrapScript

参数

是否必选

参数类型

描述

name

String

参数解释:

引导操作脚本的名称。

约束限制:

不涉及

取值范围:

同一个集群的引导操作脚本名称不允许相同。

只能由英文字母、数字、空格以及“_”和“-”组成,不能以空格开头,且长度为[1-64]个字符。

默认取值:

不涉及

uri

String

参数解释:

引导操作脚本的路径。设置为OBS桶的路径或虚拟机本地的路径。

  • OBS桶的路径:直接手动输入脚本路径。例如输入MRS提供的公共样例脚本路径。示例:obs://bootstrap/presto/presto-install.sh。其中安装dualroles时,presto-install.sh脚本参数为dualroles;安装worker时,presto-install.sh脚本参数为worker。根据Presto使用习惯,建议您在Active Master节点上安装dualroles,在Core节点上安装worker。

  • 虚拟机本地的路径:用户需要输入正确的脚本路径。脚本所在的路径必须以“/”开头,以.sh结尾。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

parameters

String

参数解释:

引导操作脚本参数。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

nodes

Array of strings

参数解释:

引导操作脚本所执行的节点组名称。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

active_master

Boolean

参数解释:

引导操作脚本是否只运行在主Master节点上。

约束限制:

不涉及

取值范围:

  • true:引导操作脚本只运行在主Master节点上。

  • false:引导操作脚本可运行在所有Master节点上。

默认取值:

不涉及

fail_action

String

参数解释:

引导操作脚本执行失败后,是否继续执行后续脚本和创建集群。建议您在调试阶段设置为“continue”,这样无论此引导操作是否执行成功,集群都能继续安装和启动。

约束限制:

不涉及

取值范围:

  • continue:继续执行后续脚本。

  • errorout:终止操作。

默认取值:

continue

before_component_start

Boolean

参数解释:

引导操作脚本执行的时间。目前支持“组件启动前”和“组件启动后”两种类型。

约束限制:

不涉及

取值范围:

  • true:引导操作脚本在组件启动前执行。

  • false:引导操作脚本在组件启动后执行。

默认取值:

false

start_time

Long

参数解释:

单个引导操作脚本的执行时间。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

state

String

参数解释:

单个引导操作脚本的运行状态。

约束限制:

不涉及

取值范围:

  • PENDING:挂起

  • IN_PROGRESS:处理中

  • SUCCESS:处理成功

  • FAILURE:处理失败

默认取值:

不涉及

action_stages

Array of strings

参数解释:

选择引导操作脚本执行的时间。

约束限制:

不涉及

取值范围:

  • BEFORE_COMPONENT_FIRST_START:组件首次启动前

  • AFTER_COMPONENT_FIRST_START:组件首次启动后

  • BEFORE_SCALE_IN:缩容前

  • AFTER_SCALE_IN:缩容后

  • BEFORE_SCALE_OUT:扩容前

  • AFTER_SCALE_OUT:扩容后

默认取值:

不涉及
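
例如,一个在组件首次启动前于Master和Core节点组执行、失败即终止建集群的引导操作脚本,可按如下方式配置(脚本路径与参数为假设值):

    "bootstrap_scripts" : [ {
      "name" : "install-presto",
      "uri" : "obs://bootstrap/presto/presto-install.sh",
      "parameters" : "dualroles",
      "nodes" : [ "master_node_default_group", "core_node_analysis_group" ],
      "active_master" : true,
      "fail_action" : "errorout",
      "action_stages" : [ "BEFORE_COMPONENT_FIRST_START" ]
    } ]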

表14 AddJobsReqV11

参数

是否必选

参数类型

描述

job_type

Integer

参数解释:

作业类型码。

约束限制:

不涉及

取值范围:

  • 1:MapReduce

  • 2:Spark

  • 3:Hive Script

  • 4:HiveSQL(当前不支持)

  • 5:DistCp,导入、导出数据(当前不支持)

  • 6:Spark Script

  • 7:Spark SQL,提交SQL语句(当前不支持)

默认取值:

不涉及

job_name

String

参数解释:

作业名称。

约束限制:

不涉及

取值范围:

只能由英文字母、数字以及“_”和“-”组成,且长度为[1-64]个字符。

不同作业的名称允许相同,但不建议设置相同。

默认取值:

不涉及

jar_path

String

参数解释:

执行程序Jar包或sql文件地址。

约束限制:

不涉及

取值范围:

  • 最多为1023字符,不能包含;|&>,<'$特殊字符,且不可为空或全空格。

  • 文件可存储于HDFS或者OBS中,不同的文件系统对应的路径存在差异。

    • OBS:以“obs://”开头。不支持KMS加密的文件或程序。

    • HDFS:以“/”开头。

  • Spark Script需要以“.sql”结尾,MapReduce和Spark Jar需要以“.jar”结尾,sql和jar不区分大小写。

默认取值:

不涉及

arguments

String

参数解释:

程序执行的关键参数,该参数由用户程序内的函数指定,MRS只负责参数的传入。

约束限制:

不涉及

取值范围:

最多为150000字符,不能包含;|&>'<$特殊字符,可为空。

默认取值:

不涉及

input

String

参数解释:

数据输入地址。文件可存储于HDFS或者OBS中,不同的文件系统对应的路径存在差异。

  • OBS:以“obs://”开头。不支持KMS加密的文件或程序。

  • HDFS:以“/”开头。

约束限制:

不涉及

取值范围:

最多为1023字符,不能包含;|&>'<$特殊字符,可为空。

默认取值:

不涉及

output

String

参数解释:

数据输出地址。文件可存储于HDFS或者OBS中,不同的文件系统对应的路径存在差异。如果该路径不存在,系统会自动创建。

  • OBS:以“obs://”开头。

  • HDFS:以“/”开头。

约束限制:

不涉及

取值范围:

最多为1023字符,不能包含;|&>'<$特殊字符,可为空。

默认取值:

不涉及

job_log

String

参数解释:

作业日志存储地址,该日志信息记录作业运行状态。文件可存储于HDFS或者OBS中,不同的文件系统对应的路径存在差异。

  • OBS:以“obs://”开头。

  • HDFS:以“/”开头。

约束限制:

不涉及

取值范围:

最多为1023字符,不能包含;|&>'<$特殊字符,可为空。

默认取值:

不涉及

hive_script_path

String

参数解释:

SQL程序路径,仅Spark Script和Hive Script作业需要使用此参数。

约束限制:

不涉及

取值范围:

  • 最多为1023字符,不能包含;|&><'$特殊字符,且不可为空或全空格。

  • 文件可存储于HDFS或者OBS中,不同的文件系统对应的路径存在差异。

    • OBS:以“obs://”开头。不支持KMS加密的文件或程序。

    • HDFS:以“/”开头。

  • 需要以“.sql”结尾,sql不区分大小写。

默认取值:

不涉及

hql

String

参数解释:

HQL脚本语句。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

shutdown_cluster

Boolean

参数解释:

作业执行完成后,是否删除集群。

约束限制:

不涉及

取值范围:

  • true:删除集群

  • false:不删除集群

默认取值:

不涉及

submit_job_once_cluster_run

Boolean

参数解释:

创建集群时是否同时提交作业。此处应设置为true。

约束限制:

不涉及

取值范围:

  • true:创建集群同时提交作业

  • false:单独提交作业

默认取值:

不涉及

file_action

String

参数解释:

数据导入导出。

约束限制:

不涉及

取值范围:

  • import:导入数据

  • export:导出数据

默认取值:

不涉及
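
例如,创建集群的同时提交一个MapReduce作业(作业类型码以表14为准),add_jobs可按如下方式配置。其中Jar包与输入、输出路径均为假设值:

    "add_jobs" : [ {
      "job_type" : 1,
      "job_name" : "wordcount_demo",
      "jar_path" : "obs://your-bucket/jobs/wordcount.jar",
      "arguments" : "wordcount",
      "input" : "obs://your-bucket/input/",
      "output" : "obs://your-bucket/output/",
      "job_log" : "obs://your-bucket/log/",
      "shutdown_cluster" : false,
      "submit_job_once_cluster_run" : true
    } ]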

表15 ComponentConfig

参数

是否必选

参数类型

描述

component_name

String

参数解释:

组件名称。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

configs

Array of Config objects

参数解释:

组件配置项列表。

约束限制:

不能超过100条。

取值范围:

不涉及

默认取值:

不涉及

表16 Config

参数

是否必选

参数类型

描述

key

String

参数解释:

配置名,仅支持MRS组件配置页面上所展示的配置名。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

value

String

参数解释:

配置值。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

config_file_name

String

参数解释:

配置文件名,仅支持MRS组件配置页面上所展示的文件名。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

表17 SmnNotify

参数

是否必选

参数类型

描述

topic_urn

String

参数解释:

SMN消息通知服务的主题URN。

约束限制:

如果需要开启告警订阅,则必填。

取值范围:

不涉及

默认取值:

不涉及

subscription_name

String

参数解释:

该订阅规则名称。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

default_alert_rule
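
例如,开启告警订阅并将告警推送到指定SMN主题时,smn_notify可按如下方式配置(主题URN为假设的占位值):

    "smn_notify" : {
      "topic_urn" : "urn:smn:your-region:your-account-id:mrs_alarm_topic",
      "subscription_name" : "subscription-mrs-alarm"
    }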

响应参数

状态码: 200

表18 响应Body参数

参数

参数类型

描述

cluster_id

String

参数解释:

集群创建成功后系统返回的集群ID值。

约束限制:

不涉及

取值范围:

不涉及

默认取值:

不涉及

请求示例

  • 创建一个分析集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为2;一个Core节点组,节点数为3;一个Task节点组,节点数为3。每周一的12点至13点开启弹性伸缩。Hive组件的初始配置hive.union.data.type.incompatible.enable修改为true,dfs.replication修改为4。

    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_DyJA_dm",
      "cluster_type" : "ANALYSIS",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "log_collection" : 1,
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_analysis_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 0,
          "max_capacity" : 1,
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "12:00",
            "end_time" : "13:00",
            "min_capacity" : 2,
            "max_capacity" : 3,
            "effective_days" : [ "MONDAY" ]
          } ],
          "exec_scripts" : [ {
            "name" : "test",
            "uri" : "s3a://obs-mrstest/bootstrap/basic_success.sh",
            "parameters" : "",
            "nodes" : [ "master_node_default_group", "core_node_analysis_group", "task_node_analysis_group" ],
            "active_master" : false,
            "action_stage" : "before_scale_out",
            "fail_action" : "continue"
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "description" : "",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : "1",
            "trigger" : {
              "metric_name" : "YARNAppRunning",
              "metric_value" : 100,
              "comparison_operator" : "GTOE",
              "evaluation_periods" : "1"
            }
          } ]
        }
      } ],
      "component_configs" : [ {
        "component_name" : "Hive",
        "configs" : [ {
          "key" : "hive.union.data.type.incompatible.enable",
          "value" : "true",
          "config_file_name" : "hive-site.xml"
        }, {
          "key" : "dfs.replication",
          "value" : "4",
          "config_file_name" : "hdfs-site.xml"
        } ]
      } ]
    }
  • 创建一个流式集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为2;一个Core节点组,节点数为3,一个Task节点组,节点数为0。每周一的12点至13点开启弹性伸缩。

    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_Dokle_dm",
      "cluster_type" : "STREAMING",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Storm,Kafka,Flume,Ranger",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "log_collection" : 1,
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_streaming_group",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_streaming_group",
        "node_num" : 0,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "auto_scaling_policy" : {
          "auto_scaling_enable" : true,
          "min_capacity" : 0,
          "max_capacity" : 1,
          "resources_plans" : [ {
            "period_type" : "daily",
            "start_time" : "12:00",
            "end_time" : "13:00",
            "min_capacity" : 2,
            "max_capacity" : 3,
            "effective_days" : [ "MONDAY" ]
          } ],
          "rules" : [ {
            "name" : "default-expand-1",
            "description" : "",
            "adjustment_type" : "scale_out",
            "cool_down_minutes" : 5,
            "scaling_adjustment" : "1",
            "trigger" : {
              "metric_name" : "StormSlotAvailablePercentage",
              "metric_value" : 100,
              "comparison_operator" : "LTOE",
              "evaluation_periods" : "1"
            }
          } ]
        }
      } ]
    }
  • 创建一个混合集群,集群版本号为MRS 3.1.0。其中包含一个Master节点组,节点数为2;两个Core节点组,每个Core节点组的节点数均为3;两个Task节点组,一个Task节点组节点数为1,另一个节点数为0。

    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_onmm_dm",
      "cluster_type" : "MIXED",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "log_collection" : 1,
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 2,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_streaming_group",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "core_node_analysis_group",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_analysis_group",
        "node_num" : 1,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      }, {
        "group_name" : "task_node_streaming_group",
        "node_num" : 0,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1
      } ]
    }
  • 创建自定义管控合设集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为3;两个Core节点组,一个节点数为3,另一个节点数为1。

    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_heshe_dm",
      "cluster_type" : "CUSTOM",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "template_id" : "mgmt_control_combined_v2",
      "log_collection" : 1,
      "tags" : [ {
        "key" : "tag1",
        "value" : "111"
      }, {
        "key" : "tag2",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:1,2", "KerberosServer:1,2", "KerberosAdmin:1,2", "quorumpeer:1,2,3", "NameNode:2,3", "Zkfc:2,3", "JournalNode:1,2,3", "ResourceManager:2,3", "JobHistoryServer:2,3", "DBServer:1,3", "Hue:1,3", "LoaderServer:1,3", "MetaStore:1,2,3", "WebHCat:1,2,3", "HiveServer:1,2,3", "HMaster:2,3", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3", "JobHistory2x:2,3", "SparkResource2x:1,2,3", "oozie:2,3", "LoadBalancer:2,3", "TezUI:1,3", "TimelineServer:3", "RangerAdmin:1,2", "UserSync:2", "TagSync:2", "KerberosClient", "SlapdClient", "meta", "HSConsole:2,3", "FlinkResource:1,2,3", "DataNode:1,2,3", "NodeManager:1,2,3", "IndexServer2x:1,2", "ThriftServer:1,2,3", "RegionServer:1,2,3", "ThriftServer1:1,2,3", "RESTServer:1,2,3", "Broker:1,2,3", "Supervisor:1,2,3", "Logviewer:1,2,3", "Flume:1,2,3", "HSBroker:1,2,3" ]
      }, {
        "group_name" : "node_group_1",
        "node_num" : 3,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource" ]
      }, {
        "group_name" : "node_group_2",
        "node_num" : 1,
        "node_size" : "Sit3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "NodeManager", "KerberosClient", "SlapdClient", "meta", "FlinkResource" ]
      } ]
    }
  • 创建自定义管控分设集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为5;一个Core节点组,节点数为3。

    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_jdRU_dm01",
      "cluster_type" : "CUSTOM",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "log_collection" : 1,
      "template_id" : "mgmt_control_separated_v2",
      "tags" : [ {
        "key" : "aaa",
        "value" : "111"
      }, {
        "key" : "bbb",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 5,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:3,4", "KerberosServer:3,4", "KerberosAdmin:3,4", "quorumpeer:3,4,5", "NameNode:4,5", "Zkfc:4,5", "JournalNode:1,2,3,4,5", "ResourceManager:4,5", "JobHistoryServer:4,5", "DBServer:3,5", "Hue:1,2", "LoaderServer:1,2", "MetaStore:1,2,3,4,5", "WebHCat:1,2,3,4,5", "HiveServer:1,2,3,4,5", "HMaster:4,5", "MonitorServer:1,2", "Nimbus:1,2", "UI:1,2", "JDBCServer2x:1,2,3,4,5", "JobHistory2x:4,5", "SparkResource2x:1,2,3,4,5", "oozie:1,2", "LoadBalancer:1,2", "TezUI:1,2", "TimelineServer:5", "RangerAdmin:1,2", "KerberosClient", "SlapdClient", "meta", "HSConsole:1,2", "FlinkResource:1,2,3,4,5", "DataNode:1,2,3,4,5", "NodeManager:1,2,3,4,5", "IndexServer2x:1,2", "ThriftServer:1,2,3,4,5", "RegionServer:1,2,3,4,5", "ThriftServer1:1,2,3,4,5", "RESTServer:1,2,3,4,5", "Broker:1,2,3,4,5", "Supervisor:1,2,3,4,5", "Logviewer:1,2,3,4,5", "Flume:1,2,3,4,5", "HBaseIndexer:1,2,3,4,5", "TagSync:1", "UserSync:1" ]
      }, {
        "group_name" : "node_group_1",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "Broker", "Supervisor", "Logviewer", "HBaseIndexer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2", "ThriftServer", "ThriftServer1", "RESTServer", "FlinkResource" ]
      } ]
    }
  • 创建自定义数据分设集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为9;四个Core节点组,每个Core节点组的节点数均为3。

    POST /v2/{project_id}/clusters
    
    {
      "cluster_version" : "MRS 3.1.0",
      "cluster_name" : "mrs_jdRU_dm02",
      "cluster_type" : "CUSTOM",
      "charge_info" : {
        "charge_mode" : "postPaid"
      },
      "region" : "",
      "availability_zone" : "",
      "vpc_name" : "vpc-37cd",
      "subnet_id" : "1f8c5ca6-1f66-4096-bb00-baf175954f6e",
      "subnet_name" : "subnet",
      "components" : "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
      "safe_mode" : "KERBEROS",
      "manager_admin_password" : "your password",
      "login_mode" : "PASSWORD",
      "node_root_password" : "your password",
      "mrs_ecs_default_agency" : "MRS_ECS_DEFAULT_AGENCY",
      "template_id" : "mgmt_control_data_separated_v2",
      "log_collection" : 1,
      "tags" : [ {
        "key" : "aaa",
        "value" : "111"
      }, {
        "key" : "bbb",
        "value" : "222"
      } ],
      "node_groups" : [ {
        "group_name" : "master_node_default_group",
        "node_num" : 9,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "OMSServer:1,2", "SlapdServer:5,6", "KerberosServer:5,6", "KerberosAdmin:5,6", "quorumpeer:5,6,7,8,9", "NameNode:3,4", "Zkfc:3,4", "JournalNode:5,6,7", "ResourceManager:8,9", "JobHistoryServer:8", "DBServer:8,9", "Hue:8,9", "FlinkResource:3,4", "LoaderServer:3,5", "MetaStore:8,9", "WebHCat:5", "HiveServer:8,9", "HMaster:8,9", "FTP-Server:3,4", "MonitorServer:3,4", "Nimbus:8,9", "UI:8,9", "JDBCServer2x:8,9", "JobHistory2x:8,9", "SparkResource2x:5,6,7", "oozie:4,5", "EsMaster:7,8,9", "LoadBalancer:8,9", "TezUI:5,6", "TimelineServer:5", "RangerAdmin:4,5", "UserSync:5", "TagSync:5", "KerberosClient", "SlapdClient", "meta", "HSBroker:5", "HSConsole:3,4", "FlinkResource:3,4" ]
      }, {
        "group_name" : "node_group_1",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "DataNode", "NodeManager", "RegionServer", "Flume:1", "GraphServer", "KerberosClient", "SlapdClient", "meta", "HSBroker:1,2" ]
      }, {
        "group_name" : "node_group_2",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "HBaseIndexer", "SolrServer[3]", "EsNode[2]", "KerberosClient", "SlapdClient", "meta", "SolrServerAdmin:1,2" ]
      }, {
        "group_name" : "node_group_3",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "KerberosClient", "SlapdClient", "meta" ]
      }, {
        "group_name" : "node_group_4",
        "node_num" : 3,
        "node_size" : "rc3.4xlarge.4.linux.bigdata",
        "root_volume" : {
          "type" : "SAS",
          "size" : 480
        },
        "data_volume" : {
          "type" : "SAS",
          "size" : 600
        },
        "data_volume_count" : 1,
        "assigned_roles" : [ "Broker", "Supervisor", "Logviewer", "KerberosClient", "SlapdClient", "meta" ]
      } ]
    }

响应示例

状态码: 200

正常响应示例。

{
  "cluster_id" : "da1592c2-bb7e-468d-9ac9-83246e95447a"
}

SDK代码示例

SDK代码示例如下。

  • 创建一个分析集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为2;一个Core节点组,节点数为3;一个Task节点组,节点数为3。每周一的12点至13点开启弹性伸缩。Hive组件的初始配置hive.union.data.type.incompatible.enable修改为true,dfs.replication修改为4。

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v2.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v2.*;
    import com.huaweicloud.sdk.mrs.v2.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV2 body = new CreateClusterReqV2();
            List<Config> listComponentConfigsConfigs = new ArrayList<>();
            listComponentConfigsConfigs.add(
                new Config()
                    .withKey("hive.union.data.type.incompatible.enable")
                    .withValue("true")
                    .withConfigFileName("hive-site.xml")
            );
            listComponentConfigsConfigs.add(
                new Config()
                    .withKey("dfs.replication")
                    .withValue("4")
                    .withConfigFileName("hdfs-site.xml")
            );
            List<ComponentConfig> listbodyComponentConfigs = new ArrayList<>();
            listbodyComponentConfigs.add(
                new ComponentConfig()
                    .withComponentName("Hive")
                    .withConfigs(listComponentConfigsConfigs)
            );
            List<String> listExecScriptsNodes = new ArrayList<>();
            listExecScriptsNodes.add("master_node_default_group");
            listExecScriptsNodes.add("core_node_analysis_group");
            listExecScriptsNodes.add("task_node_analysis_group");
            List<ScaleScript> listAutoScalingPolicyExecScripts = new ArrayList<>();
            listAutoScalingPolicyExecScripts.add(
                new ScaleScript()
                    .withName("test")
                    .withUri("s3a://obs-mrstest/bootstrap/basic_success.sh")
                    .withParameters("")
                    .withNodes(listExecScriptsNodes)
                    .withActiveMaster(false)
                    .withFailAction(ScaleScript.FailActionEnum.fromValue("continue"))
                    .withActionStage(ScaleScript.ActionStageEnum.fromValue("before_scale_out"))
            );
            Trigger triggerRules = new Trigger();
            triggerRules.withMetricName("YARNAppRunning")
                .withMetricValue("100")
                .withComparisonOperator("GTOE")
                .withEvaluationPeriods(1);
            List<Rule> listAutoScalingPolicyRules = new ArrayList<>();
            listAutoScalingPolicyRules.add(
                new Rule()
                    .withName("default-expand-1")
                    .withDescription("")
                    .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_out"))
                    .withCoolDownMinutes(5)
                    .withScalingAdjustment(1)
                    .withTrigger(triggerRules)
            );
            List<ResourcesPlan.EffectiveDaysEnum> listResourcesPlansEffectiveDays = new ArrayList<>();
            listResourcesPlansEffectiveDays.add(ResourcesPlan.EffectiveDaysEnum.fromValue("MONDAY"));
            List<ResourcesPlan> listAutoScalingPolicyResourcesPlans = new ArrayList<>();
            listAutoScalingPolicyResourcesPlans.add(
                new ResourcesPlan()
                    .withPeriodType("daily")
                    .withStartTime("12:00")
                    .withEndTime("13:00")
                    .withMinCapacity(2)
                    .withMaxCapacity(3)
                    .withEffectiveDays(listResourcesPlansEffectiveDays)
            );
            AutoScalingPolicy autoScalingPolicyNodeGroups = new AutoScalingPolicy();
            autoScalingPolicyNodeGroups.withAutoScalingEnable(true)
                .withMinCapacity(0)
                .withMaxCapacity(1)
                .withResourcesPlans(listAutoScalingPolicyResourcesPlans)
                .withRules(listAutoScalingPolicyRules)
                .withExecScripts(listAutoScalingPolicyExecScripts);
            Volume dataVolumeNodeGroups = new Volume();
            dataVolumeNodeGroups.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups = new Volume();
            rootVolumeNodeGroups.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups1 = new Volume();
            dataVolumeNodeGroups1.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups1 = new Volume();
            rootVolumeNodeGroups1.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups2 = new Volume();
            dataVolumeNodeGroups2.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups2 = new Volume();
            rootVolumeNodeGroups2.withType("SAS")
                .withSize(480);
            List<NodeGroupV2> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(2)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups2)
                    .withDataVolume(dataVolumeNodeGroups2)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("core_node_analysis_group")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups1)
                    .withDataVolume(dataVolumeNodeGroups1)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("task_node_analysis_group")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups)
                    .withDataVolume(dataVolumeNodeGroups)
                    .withDataVolumeCount(1)
                    .withAutoScalingPolicy(autoScalingPolicyNodeGroups)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("tag1")
                    .withValue("111")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("tag2")
                    .withValue("222")
            );
            ChargeInfo chargeInfobody = new ChargeInfo();
            chargeInfobody.withChargeMode("postPaid");
            body.withComponentConfigs(listbodyComponentConfigs);
            body.withNodeGroups(listbodyNodeGroups);
            body.withLogCollection(CreateClusterReqV2.LogCollectionEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withMrsEcsDefaultAgency("MRS_ECS_DEFAULT_AGENCY");
            body.withNodeRootPassword("your password");
            body.withLoginMode("PASSWORD");
            body.withManagerAdminPassword("your password");
            body.withSafeMode("KERBEROS");
            body.withAvailabilityZone("");
            body.withComponents("Hadoop,Spark2x,HBase,Hive,Hue,Loader,Flink,Oozie,Ranger,Tez");
            body.withSubnetName("subnet");
            body.withSubnetId("1f8c5ca6-1f66-4096-bb00-baf175954f6e");
            body.withVpcName("vpc-37cd");
            body.withRegion("");
            body.withChargeInfo(chargeInfobody);
            body.withClusterType("ANALYSIS");
            body.withClusterName("mrs_DyJA_dm");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • 创建一个流式集群,集群版本号为MRS 3.1.0。包含一个Master节点组,节点数为2;一个Core节点组,节点数为3,一个Task节点组,节点数为0。每周一的12点至13点开启弹性伸缩。

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v2.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v2.*;
    import com.huaweicloud.sdk.mrs.v2.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV2 body = new CreateClusterReqV2();
            Trigger triggerRules = new Trigger();
            triggerRules.withMetricName("StormSlotAvailablePercentage")
                .withMetricValue("100")
                .withComparisonOperator("LTOE")
                .withEvaluationPeriods(1);
            List<Rule> listAutoScalingPolicyRules = new ArrayList<>();
            listAutoScalingPolicyRules.add(
                new Rule()
                    .withName("default-expand-1")
                    .withDescription("")
                    .withAdjustmentType(Rule.AdjustmentTypeEnum.fromValue("scale_out"))
                    .withCoolDownMinutes(5)
                    .withScalingAdjustment(1)
                    .withTrigger(triggerRules)
            );
            List<ResourcesPlan.EffectiveDaysEnum> listResourcesPlansEffectiveDays = new ArrayList<>();
            listResourcesPlansEffectiveDays.add(ResourcesPlan.EffectiveDaysEnum.fromValue("MONDAY"));
            List<ResourcesPlan> listAutoScalingPolicyResourcesPlans = new ArrayList<>();
            listAutoScalingPolicyResourcesPlans.add(
                new ResourcesPlan()
                    .withPeriodType("daily")
                    .withStartTime("12:00")
                    .withEndTime("13:00")
                    .withMinCapacity(2)
                    .withMaxCapacity(3)
                    .withEffectiveDays(listResourcesPlansEffectiveDays)
            );
            AutoScalingPolicy autoScalingPolicyNodeGroups = new AutoScalingPolicy();
            autoScalingPolicyNodeGroups.withAutoScalingEnable(true)
                .withMinCapacity(0)
                .withMaxCapacity(1)
                .withResourcesPlans(listAutoScalingPolicyResourcesPlans)
                .withRules(listAutoScalingPolicyRules);
            Volume dataVolumeNodeGroups = new Volume();
            dataVolumeNodeGroups.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups = new Volume();
            rootVolumeNodeGroups.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups1 = new Volume();
            dataVolumeNodeGroups1.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups1 = new Volume();
            rootVolumeNodeGroups1.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups2 = new Volume();
            dataVolumeNodeGroups2.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups2 = new Volume();
            rootVolumeNodeGroups2.withType("SAS")
                .withSize(480);
            List<NodeGroupV2> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(2)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups2)
                    .withDataVolume(dataVolumeNodeGroups2)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("core_node_streaming_group")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups1)
                    .withDataVolume(dataVolumeNodeGroups1)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("task_node_streaming_group")
                    .withNodeNum(0)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups)
                    .withDataVolume(dataVolumeNodeGroups)
                    .withDataVolumeCount(1)
                    .withAutoScalingPolicy(autoScalingPolicyNodeGroups)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("tag1")
                    .withValue("111")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("tag2")
                    .withValue("222")
            );
            ChargeInfo chargeInfobody = new ChargeInfo();
            chargeInfobody.withChargeMode("postPaid");
            body.withNodeGroups(listbodyNodeGroups);
            body.withLogCollection(CreateClusterReqV2.LogCollectionEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withMrsEcsDefaultAgency("MRS_ECS_DEFAULT_AGENCY");
            body.withNodeRootPassword("your password");
            body.withLoginMode("PASSWORD");
            body.withManagerAdminPassword("your password");
            body.withSafeMode("KERBEROS");
            body.withAvailabilityZone("");
            body.withComponents("Storm,Kafka,Flume,Ranger");
            body.withSubnetName("subnet");
            body.withSubnetId("1f8c5ca6-1f66-4096-bb00-baf175954f6e");
            body.withVpcName("vpc-37cd");
            body.withRegion("");
            body.withChargeInfo(chargeInfobody);
            body.withClusterType("STREAMING");
            body.withClusterName("mrs_Dokle_dm");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Create a mixed cluster with cluster version MRS 3.1.0. It contains one Master node group with 2 nodes; two Core node groups with 3 nodes each; and two Task node groups, one with 1 node and the other with 0 nodes.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v2.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v2.*;
    import com.huaweicloud.sdk.mrs.v2.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV2 body = new CreateClusterReqV2();
            Volume dataVolumeNodeGroups = new Volume();
            dataVolumeNodeGroups.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups = new Volume();
            rootVolumeNodeGroups.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups1 = new Volume();
            dataVolumeNodeGroups1.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups1 = new Volume();
            rootVolumeNodeGroups1.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups2 = new Volume();
            dataVolumeNodeGroups2.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups2 = new Volume();
            rootVolumeNodeGroups2.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups3 = new Volume();
            dataVolumeNodeGroups3.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups3 = new Volume();
            rootVolumeNodeGroups3.withType("SAS")
                .withSize(480);
            Volume dataVolumeNodeGroups4 = new Volume();
            dataVolumeNodeGroups4.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups4 = new Volume();
            rootVolumeNodeGroups4.withType("SAS")
                .withSize(480);
            List<NodeGroupV2> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(2)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups4)
                    .withDataVolume(dataVolumeNodeGroups4)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("core_node_streaming_group")
                    .withNodeNum(3)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups3)
                    .withDataVolume(dataVolumeNodeGroups3)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("core_node_analysis_group")
                    .withNodeNum(3)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups2)
                    .withDataVolume(dataVolumeNodeGroups2)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("task_node_analysis_group")
                    .withNodeNum(1)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups1)
                    .withDataVolume(dataVolumeNodeGroups1)
                    .withDataVolumeCount(1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("task_node_streaming_group")
                    .withNodeNum(0)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups)
                    .withDataVolume(dataVolumeNodeGroups)
                    .withDataVolumeCount(1)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("tag1")
                    .withValue("111")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("tag2")
                    .withValue("222")
            );
            ChargeInfo chargeInfobody = new ChargeInfo();
            chargeInfobody.withChargeMode("postPaid");
            body.withNodeGroups(listbodyNodeGroups);
            body.withLogCollection(CreateClusterReqV2.LogCollectionEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withMrsEcsDefaultAgency("MRS_ECS_DEFAULT_AGENCY");
            body.withNodeRootPassword("your password");
            body.withLoginMode("PASSWORD");
            body.withManagerAdminPassword("your password");
            body.withSafeMode("KERBEROS");
            body.withAvailabilityZone("");
            body.withComponents("Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez");
            body.withSubnetName("subnet");
            body.withSubnetId("1f8c5ca6-1f66-4096-bb00-baf175954f6e");
            body.withVpcName("vpc-37cd");
            body.withRegion("");
            body.withChargeInfo(chargeInfobody);
            body.withClusterType("MIXED");
            body.withClusterName("mrs_onmm_dm");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Create a custom cluster with co-deployed management and control roles (template ID mgmt_control_combined_v2), cluster version MRS 3.1.0. It contains one Master node group with 3 nodes and two Core node groups, one with 3 nodes and the other with 1 node. The assigned-roles notation used by CUSTOM clusters is summarized after this example's code.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v2.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v2.*;
    import com.huaweicloud.sdk.mrs.v2.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV2 body = new CreateClusterReqV2();
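            // The assigned-role lists are built first and attached to the node
            // groups further down: listNodeGroupsAssignedRoles2 goes to
            // master_node_default_group, ...Roles1 to node_group_1 (3 nodes),
            // and this first list to node_group_2 (1 node).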
            List<String> listNodeGroupsAssignedRoles = new ArrayList<>();
            listNodeGroupsAssignedRoles.add("NodeManager");
            listNodeGroupsAssignedRoles.add("KerberosClient");
            listNodeGroupsAssignedRoles.add("SlapdClient");
            listNodeGroupsAssignedRoles.add("meta");
            listNodeGroupsAssignedRoles.add("FlinkResource");
            Volume dataVolumeNodeGroups = new Volume();
            dataVolumeNodeGroups.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups = new Volume();
            rootVolumeNodeGroups.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles1 = new ArrayList<>();
            listNodeGroupsAssignedRoles1.add("DataNode");
            listNodeGroupsAssignedRoles1.add("NodeManager");
            listNodeGroupsAssignedRoles1.add("RegionServer");
            listNodeGroupsAssignedRoles1.add("Flume:1");
            listNodeGroupsAssignedRoles1.add("Broker");
            listNodeGroupsAssignedRoles1.add("Supervisor");
            listNodeGroupsAssignedRoles1.add("Logviewer");
            listNodeGroupsAssignedRoles1.add("HBaseIndexer");
            listNodeGroupsAssignedRoles1.add("KerberosClient");
            listNodeGroupsAssignedRoles1.add("SlapdClient");
            listNodeGroupsAssignedRoles1.add("meta");
            listNodeGroupsAssignedRoles1.add("HSBroker:1,2");
            listNodeGroupsAssignedRoles1.add("ThriftServer");
            listNodeGroupsAssignedRoles1.add("ThriftServer1");
            listNodeGroupsAssignedRoles1.add("RESTServer");
            listNodeGroupsAssignedRoles1.add("FlinkResource");
            Volume dataVolumeNodeGroups1 = new Volume();
            dataVolumeNodeGroups1.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups1 = new Volume();
            rootVolumeNodeGroups1.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles2 = new ArrayList<>();
            listNodeGroupsAssignedRoles2.add("OMSServer:1,2");
            listNodeGroupsAssignedRoles2.add("SlapdServer:1,2");
            listNodeGroupsAssignedRoles2.add("KerberosServer:1,2");
            listNodeGroupsAssignedRoles2.add("KerberosAdmin:1,2");
            listNodeGroupsAssignedRoles2.add("quorumpeer:1,2,3");
            listNodeGroupsAssignedRoles2.add("NameNode:2,3");
            listNodeGroupsAssignedRoles2.add("Zkfc:2,3");
            listNodeGroupsAssignedRoles2.add("JournalNode:1,2,3");
            listNodeGroupsAssignedRoles2.add("ResourceManager:2,3");
            listNodeGroupsAssignedRoles2.add("JobHistoryServer:2,3");
            listNodeGroupsAssignedRoles2.add("DBServer:1,3");
            listNodeGroupsAssignedRoles2.add("Hue:1,3");
            listNodeGroupsAssignedRoles2.add("LoaderServer:1,3");
            listNodeGroupsAssignedRoles2.add("MetaStore:1,2,3");
            listNodeGroupsAssignedRoles2.add("WebHCat:1,2,3");
            listNodeGroupsAssignedRoles2.add("HiveServer:1,2,3");
            listNodeGroupsAssignedRoles2.add("HMaster:2,3");
            listNodeGroupsAssignedRoles2.add("MonitorServer:1,2");
            listNodeGroupsAssignedRoles2.add("Nimbus:1,2");
            listNodeGroupsAssignedRoles2.add("UI:1,2");
            listNodeGroupsAssignedRoles2.add("JDBCServer2x:1,2,3");
            listNodeGroupsAssignedRoles2.add("JobHistory2x:2,3");
            listNodeGroupsAssignedRoles2.add("SparkResource2x:1,2,3");
            listNodeGroupsAssignedRoles2.add("oozie:2,3");
            listNodeGroupsAssignedRoles2.add("LoadBalancer:2,3");
            listNodeGroupsAssignedRoles2.add("TezUI:1,3");
            listNodeGroupsAssignedRoles2.add("TimelineServer:3");
            listNodeGroupsAssignedRoles2.add("RangerAdmin:1,2");
            listNodeGroupsAssignedRoles2.add("UserSync:2");
            listNodeGroupsAssignedRoles2.add("TagSync:2");
            listNodeGroupsAssignedRoles2.add("KerberosClient");
            listNodeGroupsAssignedRoles2.add("SlapdClient");
            listNodeGroupsAssignedRoles2.add("meta");
            listNodeGroupsAssignedRoles2.add("HSConsole:2,3");
            listNodeGroupsAssignedRoles2.add("FlinkResource:1,2,3");
            listNodeGroupsAssignedRoles2.add("DataNode:1,2,3");
            listNodeGroupsAssignedRoles2.add("NodeManager:1,2,3");
            listNodeGroupsAssignedRoles2.add("IndexServer2x:1,2");
            listNodeGroupsAssignedRoles2.add("ThriftServer:1,2,3");
            listNodeGroupsAssignedRoles2.add("RegionServer:1,2,3");
            listNodeGroupsAssignedRoles2.add("ThriftServer1:1,2,3");
            listNodeGroupsAssignedRoles2.add("RESTServer:1,2,3");
            listNodeGroupsAssignedRoles2.add("Broker:1,2,3");
            listNodeGroupsAssignedRoles2.add("Supervisor:1,2,3");
            listNodeGroupsAssignedRoles2.add("Logviewer:1,2,3");
            listNodeGroupsAssignedRoles2.add("Flume:1,2,3");
            listNodeGroupsAssignedRoles2.add("HSBroker:1,2,3");
            Volume dataVolumeNodeGroups2 = new Volume();
            dataVolumeNodeGroups2.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups2 = new Volume();
            rootVolumeNodeGroups2.withType("SAS")
                .withSize(480);
            List<NodeGroupV2> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(3)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups2)
                    .withDataVolume(dataVolumeNodeGroups2)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles2)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_1")
                    .withNodeNum(3)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups1)
                    .withDataVolume(dataVolumeNodeGroups1)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_2")
                    .withNodeNum(1)
                    .withNodeSize("Sit3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups)
                    .withDataVolume(dataVolumeNodeGroups)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("tag1")
                    .withValue("111")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("tag2")
                    .withValue("222")
            );
            ChargeInfo chargeInfobody = new ChargeInfo();
            chargeInfobody.withChargeMode("postPaid");
            body.withNodeGroups(listbodyNodeGroups);
            body.withLogCollection(CreateClusterReqV2.LogCollectionEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withTemplateId("mgmt_control_combined_v2");
            body.withMrsEcsDefaultAgency("MRS_ECS_DEFAULT_AGENCY");
            body.withNodeRootPassword("your password");
            body.withLoginMode("PASSWORD");
            body.withManagerAdminPassword("your password");
            body.withSafeMode("KERBEROS");
            body.withAvailabilityZone("");
            body.withComponents("Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez");
            body.withSubnetName("subnet");
            body.withSubnetId("1f8c5ca6-1f66-4096-bb00-baf175954f6e");
            body.withVpcName("vpc-37cd");
            body.withRegion("");
            body.withChargeInfo(chargeInfobody);
            body.withClusterType("CUSTOM");
            body.withClusterName("mrs_heshe_dm");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
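
    The assignedRoles entries in the CUSTOM examples use the notation RoleName or RoleName:n1,n2,..., where the optional suffix lists the node positions (counted from 1) within the node group on which the role is deployed; an entry without a suffix deploys the role on every node of the group. A minimal sketch of the notation, reusing role names from the example above (a fragment of the builder code, not a complete program):

        // Without an index suffix a role lands on every node in the group;
        // "NameNode:2,3" pins the role to the group's 2nd and 3rd nodes.
        List<String> assignedRoles = new ArrayList<>();
        assignedRoles.add("DataNode");      // all nodes of the group
        assignedRoles.add("NameNode:2,3");  // only nodes 2 and 3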
    
  • Create a custom cluster with separated management and control roles (template ID mgmt_control_separated_v2), cluster version MRS 3.1.0. It contains one Master node group with 5 nodes and one Core node group with 3 nodes.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v2.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v2.*;
    import com.huaweicloud.sdk.mrs.v2.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV2 body = new CreateClusterReqV2();
            List<String> listNodeGroupsAssignedRoles = new ArrayList<>();
            listNodeGroupsAssignedRoles.add("DataNode");
            listNodeGroupsAssignedRoles.add("NodeManager");
            listNodeGroupsAssignedRoles.add("RegionServer");
            listNodeGroupsAssignedRoles.add("Flume:1");
            listNodeGroupsAssignedRoles.add("Broker");
            listNodeGroupsAssignedRoles.add("Supervisor");
            listNodeGroupsAssignedRoles.add("Logviewer");
            listNodeGroupsAssignedRoles.add("HBaseIndexer");
            listNodeGroupsAssignedRoles.add("KerberosClient");
            listNodeGroupsAssignedRoles.add("SlapdClient");
            listNodeGroupsAssignedRoles.add("meta");
            listNodeGroupsAssignedRoles.add("HSBroker:1,2");
            listNodeGroupsAssignedRoles.add("ThriftServer");
            listNodeGroupsAssignedRoles.add("ThriftServer1");
            listNodeGroupsAssignedRoles.add("RESTServer");
            listNodeGroupsAssignedRoles.add("FlinkResource");
            Volume dataVolumeNodeGroups = new Volume();
            dataVolumeNodeGroups.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups = new Volume();
            rootVolumeNodeGroups.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles1 = new ArrayList<>();
            listNodeGroupsAssignedRoles1.add("OMSServer:1,2");
            listNodeGroupsAssignedRoles1.add("SlapdServer:3,4");
            listNodeGroupsAssignedRoles1.add("KerberosServer:3,4");
            listNodeGroupsAssignedRoles1.add("KerberosAdmin:3,4");
            listNodeGroupsAssignedRoles1.add("quorumpeer:3,4,5");
            listNodeGroupsAssignedRoles1.add("NameNode:4,5");
            listNodeGroupsAssignedRoles1.add("Zkfc:4,5");
            listNodeGroupsAssignedRoles1.add("JournalNode:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("ResourceManager:4,5");
            listNodeGroupsAssignedRoles1.add("JobHistoryServer:4,5");
            listNodeGroupsAssignedRoles1.add("DBServer:3,5");
            listNodeGroupsAssignedRoles1.add("Hue:1,2");
            listNodeGroupsAssignedRoles1.add("LoaderServer:1,2");
            listNodeGroupsAssignedRoles1.add("MetaStore:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("WebHCat:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("HiveServer:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("HMaster:4,5");
            listNodeGroupsAssignedRoles1.add("MonitorServer:1,2");
            listNodeGroupsAssignedRoles1.add("Nimbus:1,2");
            listNodeGroupsAssignedRoles1.add("UI:1,2");
            listNodeGroupsAssignedRoles1.add("JDBCServer2x:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("JobHistory2x:4,5");
            listNodeGroupsAssignedRoles1.add("SparkResource2x:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("oozie:1,2");
            listNodeGroupsAssignedRoles1.add("LoadBalancer:1,2");
            listNodeGroupsAssignedRoles1.add("TezUI:1,2");
            listNodeGroupsAssignedRoles1.add("TimelineServer:5");
            listNodeGroupsAssignedRoles1.add("RangerAdmin:1,2");
            listNodeGroupsAssignedRoles1.add("KerberosClient");
            listNodeGroupsAssignedRoles1.add("SlapdClient");
            listNodeGroupsAssignedRoles1.add("meta");
            listNodeGroupsAssignedRoles1.add("HSConsole:1,2");
            listNodeGroupsAssignedRoles1.add("FlinkResource:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("DataNode:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("NodeManager:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("IndexServer2x:1,2");
            listNodeGroupsAssignedRoles1.add("ThriftServer:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("RegionServer:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("ThriftServer1:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("RESTServer:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("Broker:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("Supervisor:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("Logviewer:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("Flume:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("HBaseIndexer:1,2,3,4,5");
            listNodeGroupsAssignedRoles1.add("TagSync:1");
            listNodeGroupsAssignedRoles1.add("UserSync:1");
            Volume dataVolumeNodeGroups1 = new Volume();
            dataVolumeNodeGroups1.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups1 = new Volume();
            rootVolumeNodeGroups1.withType("SAS")
                .withSize(480);
            List<NodeGroupV2> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(5)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups1)
                    .withDataVolume(dataVolumeNodeGroups1)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_1")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups)
                    .withDataVolume(dataVolumeNodeGroups)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("aaa")
                    .withValue("111")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("bbb")
                    .withValue("222")
            );
            ChargeInfo chargeInfobody = new ChargeInfo();
            chargeInfobody.withChargeMode("postPaid");
            body.withNodeGroups(listbodyNodeGroups);
            body.withLogCollection(CreateClusterReqV2.LogCollectionEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withTemplateId("mgmt_control_separated_v2");
            body.withMrsEcsDefaultAgency("MRS_ECS_DEFAULT_AGENCY");
            body.withNodeRootPassword("your password");
            body.withLoginMode("PASSWORD");
            body.withManagerAdminPassword("your password");
            body.withSafeMode("KERBEROS");
            body.withAvailabilityZone("");
            body.withComponents("Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez");
            body.withSubnetName("subnet");
            body.withSubnetId("1f8c5ca6-1f66-4096-bb00-baf175954f6e");
            body.withVpcName("vpc-37cd");
            body.withRegion("");
            body.withChargeInfo(chargeInfobody);
            body.withClusterType("CUSTOM");
            body.withClusterName("mrs_jdRU_dm01");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Create a custom cluster with separated management, control, and data roles (template ID mgmt_control_data_separated_v2), cluster version MRS 3.1.0. It contains one Master node group with 9 nodes and four Core node groups with 3 nodes each.

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.mrs.v2.region.MrsRegion;
    import com.huaweicloud.sdk.mrs.v2.*;
    import com.huaweicloud.sdk.mrs.v2.model.*;
    
    import java.util.List;
    import java.util.ArrayList;
    
    public class CreateClusterSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            MrsClient client = MrsClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(MrsRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateClusterRequest request = new CreateClusterRequest();
            CreateClusterReqV2 body = new CreateClusterReqV2();
            List<String> listNodeGroupsAssignedRoles = new ArrayList<>();
            listNodeGroupsAssignedRoles.add("Broker");
            listNodeGroupsAssignedRoles.add("Supervisor");
            listNodeGroupsAssignedRoles.add("Logviewer");
            listNodeGroupsAssignedRoles.add("KerberosClient");
            listNodeGroupsAssignedRoles.add("SlapdClient");
            listNodeGroupsAssignedRoles.add("meta");
            Volume dataVolumeNodeGroups = new Volume();
            dataVolumeNodeGroups.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups = new Volume();
            rootVolumeNodeGroups.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles1 = new ArrayList<>();
            listNodeGroupsAssignedRoles1.add("KerberosClient");
            listNodeGroupsAssignedRoles1.add("SlapdClient");
            listNodeGroupsAssignedRoles1.add("meta");
            Volume dataVolumeNodeGroups1 = new Volume();
            dataVolumeNodeGroups1.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups1 = new Volume();
            rootVolumeNodeGroups1.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles2 = new ArrayList<>();
            listNodeGroupsAssignedRoles2.add("HBaseIndexer");
            listNodeGroupsAssignedRoles2.add("SolrServer[3]");
            listNodeGroupsAssignedRoles2.add("EsNode[2]");
            listNodeGroupsAssignedRoles2.add("KerberosClient");
            listNodeGroupsAssignedRoles2.add("SlapdClient");
            listNodeGroupsAssignedRoles2.add("meta");
            listNodeGroupsAssignedRoles2.add("SolrServerAdmin:1,2");
            Volume dataVolumeNodeGroups2 = new Volume();
            dataVolumeNodeGroups2.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups2 = new Volume();
            rootVolumeNodeGroups2.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles3 = new ArrayList<>();
            listNodeGroupsAssignedRoles3.add("DataNode");
            listNodeGroupsAssignedRoles3.add("NodeManager");
            listNodeGroupsAssignedRoles3.add("RegionServer");
            listNodeGroupsAssignedRoles3.add("Flume:1");
            listNodeGroupsAssignedRoles3.add("GraphServer");
            listNodeGroupsAssignedRoles3.add("KerberosClient");
            listNodeGroupsAssignedRoles3.add("SlapdClient");
            listNodeGroupsAssignedRoles3.add("meta");
            listNodeGroupsAssignedRoles3.add("HSBroker:1,2");
            Volume dataVolumeNodeGroups3 = new Volume();
            dataVolumeNodeGroups3.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups3 = new Volume();
            rootVolumeNodeGroups3.withType("SAS")
                .withSize(480);
            List<String> listNodeGroupsAssignedRoles4 = new ArrayList<>();
            listNodeGroupsAssignedRoles4.add("OMSServer:1,2");
            listNodeGroupsAssignedRoles4.add("SlapdServer:5,6");
            listNodeGroupsAssignedRoles4.add("KerberosServer:5,6");
            listNodeGroupsAssignedRoles4.add("KerberosAdmin:5,6");
            listNodeGroupsAssignedRoles4.add("quorumpeer:5,6,7,8,9");
            listNodeGroupsAssignedRoles4.add("NameNode:3,4");
            listNodeGroupsAssignedRoles4.add("Zkfc:3,4");
            listNodeGroupsAssignedRoles4.add("JournalNode:5,6,7");
            listNodeGroupsAssignedRoles4.add("ResourceManager:8,9");
            listNodeGroupsAssignedRoles4.add("JobHistoryServer:8");
            listNodeGroupsAssignedRoles4.add("DBServer:8,9");
            listNodeGroupsAssignedRoles4.add("Hue:8,9");
            listNodeGroupsAssignedRoles4.add("FlinkResource:3,4");
            listNodeGroupsAssignedRoles4.add("LoaderServer:3,5");
            listNodeGroupsAssignedRoles4.add("MetaStore:8,9");
            listNodeGroupsAssignedRoles4.add("WebHCat:5");
            listNodeGroupsAssignedRoles4.add("HiveServer:8,9");
            listNodeGroupsAssignedRoles4.add("HMaster:8,9");
            listNodeGroupsAssignedRoles4.add("FTP-Server:3,4");
            listNodeGroupsAssignedRoles4.add("MonitorServer:3,4");
            listNodeGroupsAssignedRoles4.add("Nimbus:8,9");
            listNodeGroupsAssignedRoles4.add("UI:8,9");
            listNodeGroupsAssignedRoles4.add("JDBCServer2x:8,9");
            listNodeGroupsAssignedRoles4.add("JobHistory2x:8,9");
            listNodeGroupsAssignedRoles4.add("SparkResource2x:5,6,7");
            listNodeGroupsAssignedRoles4.add("oozie:4,5");
            listNodeGroupsAssignedRoles4.add("EsMaster:7,8,9");
            listNodeGroupsAssignedRoles4.add("LoadBalancer:8,9");
            listNodeGroupsAssignedRoles4.add("TezUI:5,6");
            listNodeGroupsAssignedRoles4.add("TimelineServer:5");
            listNodeGroupsAssignedRoles4.add("RangerAdmin:4,5");
            listNodeGroupsAssignedRoles4.add("UserSync:5");
            listNodeGroupsAssignedRoles4.add("TagSync:5");
            listNodeGroupsAssignedRoles4.add("KerberosClient");
            listNodeGroupsAssignedRoles4.add("SlapdClient");
            listNodeGroupsAssignedRoles4.add("meta");
            listNodeGroupsAssignedRoles4.add("HSBroker:5");
            listNodeGroupsAssignedRoles4.add("HSConsole:3,4");
            listNodeGroupsAssignedRoles4.add("FlinkResource:3,4");
            Volume dataVolumeNodeGroups4 = new Volume();
            dataVolumeNodeGroups4.withType("SAS")
                .withSize(600);
            Volume rootVolumeNodeGroups4 = new Volume();
            rootVolumeNodeGroups4.withType("SAS")
                .withSize(480);
            List<NodeGroupV2> listbodyNodeGroups = new ArrayList<>();
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("master_node_default_group")
                    .withNodeNum(9)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups4)
                    .withDataVolume(dataVolumeNodeGroups4)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles4)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_1")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups3)
                    .withDataVolume(dataVolumeNodeGroups3)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles3)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_2")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups2)
                    .withDataVolume(dataVolumeNodeGroups2)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles2)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_3")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups1)
                    .withDataVolume(dataVolumeNodeGroups1)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles1)
            );
            listbodyNodeGroups.add(
                new NodeGroupV2()
                    .withGroupName("node_group_4")
                    .withNodeNum(3)
                    .withNodeSize("rc3.4xlarge.4.linux.bigdata")
                    .withRootVolume(rootVolumeNodeGroups)
                    .withDataVolume(dataVolumeNodeGroups)
                    .withDataVolumeCount(1)
                    .withAssignedRoles(listNodeGroupsAssignedRoles)
            );
            List<Tag> listbodyTags = new ArrayList<>();
            listbodyTags.add(
                new Tag()
                    .withKey("aaa")
                    .withValue("111")
            );
            listbodyTags.add(
                new Tag()
                    .withKey("bbb")
                    .withValue("222")
            );
            ChargeInfo chargeInfobody = new ChargeInfo();
            chargeInfobody.withChargeMode("postPaid");
            body.withNodeGroups(listbodyNodeGroups);
            body.withLogCollection(CreateClusterReqV2.LogCollectionEnum.NUMBER_1);
            body.withTags(listbodyTags);
            body.withTemplateId("mgmt_control_data_separated_v2");
            body.withMrsEcsDefaultAgency("MRS_ECS_DEFAULT_AGENCY");
            body.withNodeRootPassword("your password");
            body.withLoginMode("PASSWORD");
            body.withManagerAdminPassword("your password");
            body.withSafeMode("KERBEROS");
            body.withAvailabilityZone("");
            body.withComponents("Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez");
            body.withSubnetName("subnet");
            body.withSubnetId("1f8c5ca6-1f66-4096-bb00-baf175954f6e");
            body.withVpcName("vpc-37cd");
            body.withRegion("");
            body.withChargeInfo(chargeInfobody);
            body.withClusterType("CUSTOM");
            body.withClusterName("mrs_jdRU_dm02");
            body.withClusterVersion("MRS 3.1.0");
            request.withBody(body);
            try {
                CreateClusterResponse response = client.createCluster(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Create an analysis cluster with cluster version MRS 3.1.0. It contains one Master node group with 2 nodes, one Core node group with 3 nodes, and one Task node group with 3 nodes. Auto scaling is enabled every Monday from 12:00 to 13:00. The initial Hive component configuration hive.union.data.type.incompatible.enable is changed to true, and dfs.replication is changed to 4 (the component_configs structure is sketched after the code listing).

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v2.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listConfigsComponentConfigs = [
                Config(
                    key="hive.union.data.type.incompatible.enable",
                    value="true",
                    config_file_name="hive-site.xml"
                ),
                Config(
                    key="dfs.replication",
                    value="4",
                    config_file_name="hdfs-site.xml"
                )
            ]
            listComponentConfigsbody = [
                ComponentConfig(
                    component_name="Hive",
                    configs=listConfigsComponentConfigs
                )
            ]
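        # Custom automation script for auto scaling: executed on the listed node
        # groups before each scale-out (action_stage="before_scale_out");
        # fail_action="continue" lets scaling proceed even if the script fails.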
            listNodesExecScripts = [
                "master_node_default_group",
                "core_node_analysis_group",
                "task_node_analysis_group"
            ]
            listExecScriptsAutoScalingPolicy = [
                ScaleScript(
                    name="test",
                    uri="s3a://obs-mrstest/bootstrap/basic_success.sh",
                    parameters="",
                    nodes=listNodesExecScripts,
                    active_master=False,
                    fail_action="continue",
                    action_stage="before_scale_out"
                )
            ]
            triggerRules = Trigger(
                metric_name="YARNAppRunning",
                metric_value="100",
                comparison_operator="GTOE",
                evaluation_periods=1
            )
            listRulesAutoScalingPolicy = [
                Rule(
                    name="default-expand-1",
                    description="",
                    adjustment_type="scale_out",
                    cool_down_minutes=5,
                    scaling_adjustment=1,
                    trigger=triggerRules
                )
            ]
            listEffectiveDaysResourcesPlans = [
                "MONDAY"
            ]
            listResourcesPlansAutoScalingPolicy = [
                ResourcesPlan(
                    period_type="daily",
                    start_time="12:00",
                    end_time="13:00",
                    min_capacity=2,
                    max_capacity=3,
                    effective_days=listEffectiveDaysResourcesPlans
                )
            ]
            autoScalingPolicyNodeGroups = AutoScalingPolicy(
                auto_scaling_enable=True,
                min_capacity=0,
                max_capacity=1,
                resources_plans=listResourcesPlansAutoScalingPolicy,
                rules=listRulesAutoScalingPolicy,
                exec_scripts=listExecScriptsAutoScalingPolicy
            )
            dataVolumeNodeGroups = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups1 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups1 = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups2 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups2 = Volume(
                type="SAS",
                size=480
            )
            listNodeGroupsbody = [
                NodeGroupV2(
                    group_name="master_node_default_group",
                    node_num=2,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups2,
                    data_volume=dataVolumeNodeGroups2,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="core_node_analysis_group",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups1,
                    data_volume=dataVolumeNodeGroups1,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="task_node_analysis_group",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups,
                    data_volume=dataVolumeNodeGroups,
                    data_volume_count=1,
                    auto_scaling_policy=autoScalingPolicyNodeGroups
                )
            ]
            listTagsbody = [
                Tag(
                    key="tag1",
                    value="111"
                ),
                Tag(
                    key="tag2",
                    value="222"
                )
            ]
            chargeInfobody = ChargeInfo(
                charge_mode="postPaid"
            )
            request.body = CreateClusterReqV2(
                component_configs=listComponentConfigsbody,
                node_groups=listNodeGroupsbody,
                log_collection=1,
                tags=listTagsbody,
                mrs_ecs_default_agency="MRS_ECS_DEFAULT_AGENCY",
                node_root_password="your password",
                login_mode="PASSWORD",
                manager_admin_password="your password",
                safe_mode="KERBEROS",
                availability_zone="",
                components="Hadoop,Spark2x,HBase,Hive,Hue,Loader,Flink,Oozie,Ranger,Tez",
                subnet_name="subnet",
                subnet_id="1f8c5ca6-1f66-4096-bb00-baf175954f6e",
                vpc_name="vpc-37cd",
                region="",
                charge_info=chargeInfobody,
                cluster_type="ANALYSIS",
                cluster_name="mrs_DyJA_dm",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
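    The create_cluster call above prints the whole response. A minimal follow-up sketch, assuming the generated response model exposes the cluster ID returned by the V2 API as a cluster_id attribute (verify against your SDK version):

        # Hedged sketch: read the new cluster's ID from the create response.
        # Assumption: the response model exposes the returned ID as cluster_id.
        response = client.create_cluster(request)
        print("Created cluster:", response.cluster_id)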
  • Create a streaming cluster of version MRS 3.1.0, containing one Master node group with 2 nodes, one Core node group with 3 nodes, and one Task node group with 0 nodes. Auto scaling runs every Monday from 12:00 to 13:00.

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v2.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            triggerRules = Trigger(
                metric_name="StormSlotAvailablePercentage",
                metric_value="100",
                comparison_operator="LTOE",
                evaluation_periods=1
            )
            listRulesAutoScalingPolicy = [
                Rule(
                    name="default-expand-1",
                    description="",
                    adjustment_type="scale_out",
                    cool_down_minutes=5,
                    scaling_adjustment=1,
                    trigger=triggerRules
                )
            ]
            listEffectiveDaysResourcesPlans = [
                "MONDAY"
            ]
            listResourcesPlansAutoScalingPolicy = [
                ResourcesPlan(
                    period_type="daily",
                    start_time="12:00",
                    end_time="13:00",
                    min_capacity=2,
                    max_capacity=3,
                    effective_days=listEffectiveDaysResourcesPlans
                )
            ]
            autoScalingPolicyNodeGroups = AutoScalingPolicy(
                auto_scaling_enable=True,
                min_capacity=0,
                max_capacity=1,
                resources_plans=listResourcesPlansAutoScalingPolicy,
                rules=listRulesAutoScalingPolicy
            )
            dataVolumeNodeGroups = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups1 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups1 = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups2 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups2 = Volume(
                type="SAS",
                size=480
            )
            listNodeGroupsbody = [
                NodeGroupV2(
                    group_name="master_node_default_group",
                    node_num=2,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups2,
                    data_volume=dataVolumeNodeGroups2,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="core_node_streaming_group",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups1,
                    data_volume=dataVolumeNodeGroups1,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="task_node_streaming_group",
                    node_num=0,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups,
                    data_volume=dataVolumeNodeGroups,
                    data_volume_count=1,
                    auto_scaling_policy=autoScalingPolicyNodeGroups
                )
            ]
            listTagsbody = [
                Tag(
                    key="tag1",
                    value="111"
                ),
                Tag(
                    key="tag2",
                    value="222"
                )
            ]
            chargeInfobody = ChargeInfo(
                charge_mode="postPaid"
            )
            request.body = CreateClusterReqV2(
                node_groups=listNodeGroupsbody,
                log_collection=1,
                tags=listTagsbody,
                mrs_ecs_default_agency="MRS_ECS_DEFAULT_AGENCY",
                node_root_password="your password",
                login_mode="PASSWORD",
                manager_admin_password="your password",
                safe_mode="KERBEROS",
                availability_zone="",
                components="Storm,Kafka,Flume,Ranger",
                subnet_name="subnet",
                subnet_id="1f8c5ca6-1f66-4096-bb00-baf175954f6e",
                vpc_name="vpc-37cd",
                region="",
                charge_info=chargeInfobody,
                cluster_type="STREAMING",
                cluster_name="mrs_Dokle_dm",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Create a mixed cluster of version MRS 3.1.0, containing one Master node group with 2 nodes, two Core node groups with 3 nodes each, and two Task node groups, one with 1 node and the other with 0 nodes. (A helper sketch for the repeated volume definitions follows the example.)

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v2.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            dataVolumeNodeGroups = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups1 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups1 = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups2 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups2 = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups3 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups3 = Volume(
                type="SAS",
                size=480
            )
            dataVolumeNodeGroups4 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups4 = Volume(
                type="SAS",
                size=480
            )
            listNodeGroupsbody = [
                NodeGroupV2(
                    group_name="master_node_default_group",
                    node_num=2,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups4,
                    data_volume=dataVolumeNodeGroups4,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="core_node_streaming_group",
                    node_num=3,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups3,
                    data_volume=dataVolumeNodeGroups3,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="core_node_analysis_group",
                    node_num=3,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups2,
                    data_volume=dataVolumeNodeGroups2,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="task_node_analysis_group",
                    node_num=1,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups1,
                    data_volume=dataVolumeNodeGroups1,
                    data_volume_count=1
                ),
                NodeGroupV2(
                    group_name="task_node_streaming_group",
                    node_num=0,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups,
                    data_volume=dataVolumeNodeGroups,
                    data_volume_count=1
                )
            ]
            listTagsbody = [
                Tag(
                    key="tag1",
                    value="111"
                ),
                Tag(
                    key="tag2",
                    value="222"
                )
            ]
            chargeInfobody = ChargeInfo(
                charge_mode="postPaid"
            )
            request.body = CreateClusterReqV2(
                node_groups=listNodeGroupsbody,
                log_collection=1,
                tags=listTagsbody,
                mrs_ecs_default_agency="MRS_ECS_DEFAULT_AGENCY",
                node_root_password="your password",
                login_mode="PASSWORD",
                manager_admin_password="your password",
                safe_mode="KERBEROS",
                availability_zone="",
                components="Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
                subnet_name="subnet",
                subnet_id="1f8c5ca6-1f66-4096-bb00-baf175954f6e",
                vpc_name="vpc-37cd",
                region="",
                charge_info=chargeInfobody,
                cluster_type="MIXED",
                cluster_name="mrs_onmm_dm",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
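    The mixed-cluster example above declares five identical root/data Volume pairs by hand. A small illustrative helper (not part of the SDK) that builds one pair per node group removes that boilerplate:

        # Illustrative helper, assuming the Volume class from the example above.
        def make_node_volumes(volume_type="SAS", root_size=480, data_size=600):
            """Return a (root_volume, data_volume) pair for one node group."""
            return (Volume(type=volume_type, size=root_size),
                    Volume(type=volume_type, size=data_size))

        root_volume, data_volume = make_node_volumes()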
  • Create a custom cluster of version MRS 3.1.0 with management and control roles co-deployed, containing one Master node group with 3 nodes and two Core node groups, one with 3 nodes and the other with 1 node.

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v2.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listAssignedRolesNodeGroups = [
                "NodeManager",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "FlinkResource"
            ]
            dataVolumeNodeGroups = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups1 = [
                "DataNode",
                "NodeManager",
                "RegionServer",
                "Flume:1",
                "Broker",
                "Supervisor",
                "Logviewer",
                "HBaseIndexer",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "HSBroker:1,2",
                "ThriftServer",
                "ThriftServer1",
                "RESTServer",
                "FlinkResource"
            ]
            dataVolumeNodeGroups1 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups1 = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups2 = [
                "OMSServer:1,2",
                "SlapdServer:1,2",
                "KerberosServer:1,2",
                "KerberosAdmin:1,2",
                "quorumpeer:1,2,3",
                "NameNode:2,3",
                "Zkfc:2,3",
                "JournalNode:1,2,3",
                "ResourceManager:2,3",
                "JobHistoryServer:2,3",
                "DBServer:1,3",
                "Hue:1,3",
                "LoaderServer:1,3",
                "MetaStore:1,2,3",
                "WebHCat:1,2,3",
                "HiveServer:1,2,3",
                "HMaster:2,3",
                "MonitorServer:1,2",
                "Nimbus:1,2",
                "UI:1,2",
                "JDBCServer2x:1,2,3",
                "JobHistory2x:2,3",
                "SparkResource2x:1,2,3",
                "oozie:2,3",
                "LoadBalancer:2,3",
                "TezUI:1,3",
                "TimelineServer:3",
                "RangerAdmin:1,2",
                "UserSync:2",
                "TagSync:2",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "HSConsole:2,3",
                "FlinkResource:1,2,3",
                "DataNode:1,2,3",
                "NodeManager:1,2,3",
                "IndexServer2x:1,2",
                "ThriftServer:1,2,3",
                "RegionServer:1,2,3",
                "ThriftServer1:1,2,3",
                "RESTServer:1,2,3",
                "Broker:1,2,3",
                "Supervisor:1,2,3",
                "Logviewer:1,2,3",
                "Flume:1,2,3",
                "HSBroker:1,2,3"
            ]
            dataVolumeNodeGroups2 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups2 = Volume(
                type="SAS",
                size=480
            )
            listNodeGroupsbody = [
                NodeGroupV2(
                    group_name="master_node_default_group",
                    node_num=3,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups2,
                    data_volume=dataVolumeNodeGroups2,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups2
                ),
                NodeGroupV2(
                    group_name="node_group_1",
                    node_num=3,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups1,
                    data_volume=dataVolumeNodeGroups1,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups1
                ),
                NodeGroupV2(
                    group_name="node_group_2",
                    node_num=1,
                    node_size="Sit3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups,
                    data_volume=dataVolumeNodeGroups,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups
                )
            ]
            listTagsbody = [
                Tag(
                    key="tag1",
                    value="111"
                ),
                Tag(
                    key="tag2",
                    value="222"
                )
            ]
            chargeInfobody = ChargeInfo(
                charge_mode="postPaid"
            )
            request.body = CreateClusterReqV2(
                node_groups=listNodeGroupsbody,
                log_collection=1,
                tags=listTagsbody,
                template_id="mgmt_control_combined_v2",
                mrs_ecs_default_agency="MRS_ECS_DEFAULT_AGENCY",
                node_root_password="your password",
                login_mode="PASSWORD",
                manager_admin_password="your password",
                safe_mode="KERBEROS",
                availability_zone="",
                components="Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
                subnet_name="subnet",
                subnet_id="1f8c5ca6-1f66-4096-bb00-baf175954f6e",
                vpc_name="vpc-37cd",
                region="",
                charge_info=chargeInfobody,
                cluster_type="CUSTOM",
                cluster_name="mrs_heshe_dm",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Create a custom cluster of version MRS 3.1.0 with management and control roles deployed separately, containing one Master node group with 5 nodes and one Core node group with 3 nodes. (A sketch illustrating the assigned_roles notation follows the example.)

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v2.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listAssignedRolesNodeGroups = [
                "DataNode",
                "NodeManager",
                "RegionServer",
                "Flume:1",
                "Broker",
                "Supervisor",
                "Logviewer",
                "HBaseIndexer",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "HSBroker:1,2",
                "ThriftServer",
                "ThriftServer1",
                "RESTServer",
                "FlinkResource"
            ]
            dataVolumeNodeGroups = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups1 = [
                "OMSServer:1,2",
                "SlapdServer:3,4",
                "KerberosServer:3,4",
                "KerberosAdmin:3,4",
                "quorumpeer:3,4,5",
                "NameNode:4,5",
                "Zkfc:4,5",
                "JournalNode:1,2,3,4,5",
                "ResourceManager:4,5",
                "JobHistoryServer:4,5",
                "DBServer:3,5",
                "Hue:1,2",
                "LoaderServer:1,2",
                "MetaStore:1,2,3,4,5",
                "WebHCat:1,2,3,4,5",
                "HiveServer:1,2,3,4,5",
                "HMaster:4,5",
                "MonitorServer:1,2",
                "Nimbus:1,2",
                "UI:1,2",
                "JDBCServer2x:1,2,3,4,5",
                "JobHistory2x:4,5",
                "SparkResource2x:1,2,3,4,5",
                "oozie:1,2",
                "LoadBalancer:1,2",
                "TezUI:1,2",
                "TimelineServer:5",
                "RangerAdmin:1,2",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "HSConsole:1,2",
                "FlinkResource:1,2,3,4,5",
                "DataNode:1,2,3,4,5",
                "NodeManager:1,2,3,4,5",
                "IndexServer2x:1,2",
                "ThriftServer:1,2,3,4,5",
                "RegionServer:1,2,3,4,5",
                "ThriftServer1:1,2,3,4,5",
                "RESTServer:1,2,3,4,5",
                "Broker:1,2,3,4,5",
                "Supervisor:1,2,3,4,5",
                "Logviewer:1,2,3,4,5",
                "Flume:1,2,3,4,5",
                "HBaseIndexer:1,2,3,4,5",
                "TagSync:1",
                "UserSync:1"
            ]
            dataVolumeNodeGroups1 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups1 = Volume(
                type="SAS",
                size=480
            )
            listNodeGroupsbody = [
                NodeGroupV2(
                    group_name="master_node_default_group",
                    node_num=5,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups1,
                    data_volume=dataVolumeNodeGroups1,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups1
                ),
                NodeGroupV2(
                    group_name="node_group_1",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups,
                    data_volume=dataVolumeNodeGroups,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups
                )
            ]
            listTagsbody = [
                Tag(
                    key="aaa",
                    value="111"
                ),
                Tag(
                    key="bbb",
                    value="222"
                )
            ]
            chargeInfobody = ChargeInfo(
                charge_mode="postPaid"
            )
            request.body = CreateClusterReqV2(
                node_groups=listNodeGroupsbody,
                log_collection=1,
                tags=listTagsbody,
                template_id="mgmt_control_separated_v2",
                mrs_ecs_default_agency="MRS_ECS_DEFAULT_AGENCY",
                node_root_password="your password",
                login_mode="PASSWORD",
                manager_admin_password="your password",
                safe_mode="KERBEROS",
                availability_zone="",
                components="Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
                subnet_name="subnet",
                subnet_id="1f8c5ca6-1f66-4096-bb00-baf175954f6e",
                vpc_name="vpc-37cd",
                region="",
                charge_info=chargeInfobody,
                cluster_type="CUSTOM",
                cluster_name="mrs_jdRU_dm01",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
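    In the assigned_roles lists above, an entry such as "KerberosClient" deploys the role on every node of the group, while "OMSServer:1,2" pins it to the listed node indexes. A tiny illustrative helper (not part of the SDK) that formats such entries:

        # Illustrative helper for the assigned_roles notation used above.
        def role(name, *instances):
            """Format "RoleName" or "RoleName:1,2" for an assigned_roles entry."""
            return name if not instances else f"{name}:{','.join(map(str, instances))}"

        assert role("KerberosClient") == "KerberosClient"
        assert role("OMSServer", 1, 2) == "OMSServer:1,2"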
  • Create a custom cluster of version MRS 3.1.0 with data roles deployed separately, containing one Master node group with 9 nodes and four Core node groups with 3 nodes each. (A tag-building helper is sketched after the example.)

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkmrs.v2.region.mrs_region import MrsRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkmrs.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = MrsClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(MrsRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateClusterRequest()
            listAssignedRolesNodeGroups = [
                "Broker",
                "Supervisor",
                "Logviewer",
                "KerberosClient",
                "SlapdClient",
                "meta"
            ]
            dataVolumeNodeGroups = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups1 = [
                "KerberosClient",
                "SlapdClient",
                "meta"
            ]
            dataVolumeNodeGroups1 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups1 = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups2 = [
                "HBaseIndexer",
                "SolrServer[3]",
                "EsNode[2]",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "SolrServerAdmin:1,2"
            ]
            dataVolumeNodeGroups2 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups2 = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups3 = [
                "DataNode",
                "NodeManager",
                "RegionServer",
                "Flume:1",
                "GraphServer",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "HSBroker:1,2"
            ]
            dataVolumeNodeGroups3 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups3 = Volume(
                type="SAS",
                size=480
            )
            listAssignedRolesNodeGroups4 = [
                "OMSServer:1,2",
                "SlapdServer:5,6",
                "KerberosServer:5,6",
                "KerberosAdmin:5,6",
                "quorumpeer:5,6,7,8,9",
                "NameNode:3,4",
                "Zkfc:3,4",
                "JournalNode:5,6,7",
                "ResourceManager:8,9",
                "JobHistoryServer:8",
                "DBServer:8,9",
                "Hue:8,9",
                "FlinkResource:3,4",
                "LoaderServer:3,5",
                "MetaStore:8,9",
                "WebHCat:5",
                "HiveServer:8,9",
                "HMaster:8,9",
                "FTP-Server:3,4",
                "MonitorServer:3,4",
                "Nimbus:8,9",
                "UI:8,9",
                "JDBCServer2x:8,9",
                "JobHistory2x:8,9",
                "SparkResource2x:5,6,7",
                "oozie:4,5",
                "EsMaster:7,8,9",
                "LoadBalancer:8,9",
                "TezUI:5,6",
                "TimelineServer:5",
                "RangerAdmin:4,5",
                "UserSync:5",
                "TagSync:5",
                "KerberosClient",
                "SlapdClient",
                "meta",
                "HSBroker:5",
                "HSConsole:3,4",
                "FlinkResource:3,4"
            ]
            dataVolumeNodeGroups4 = Volume(
                type="SAS",
                size=600
            )
            rootVolumeNodeGroups4 = Volume(
                type="SAS",
                size=480
            )
            listNodeGroupsbody = [
                NodeGroupV2(
                    group_name="master_node_default_group",
                    node_num=9,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups4,
                    data_volume=dataVolumeNodeGroups4,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups4
                ),
                NodeGroupV2(
                    group_name="node_group_1",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups3,
                    data_volume=dataVolumeNodeGroups3,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups3
                ),
                NodeGroupV2(
                    group_name="node_group_2",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups2,
                    data_volume=dataVolumeNodeGroups2,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups2
                ),
                NodeGroupV2(
                    group_name="node_group_3",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups1,
                    data_volume=dataVolumeNodeGroups1,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups1
                ),
                NodeGroupV2(
                    group_name="node_group_4",
                    node_num=3,
                    node_size="rc3.4xlarge.4.linux.bigdata",
                    root_volume=rootVolumeNodeGroups,
                    data_volume=dataVolumeNodeGroups,
                    data_volume_count=1,
                    assigned_roles=listAssignedRolesNodeGroups
                )
            ]
            listTagsbody = [
                Tag(
                    key="aaa",
                    value="111"
                ),
                Tag(
                    key="bbb",
                    value="222"
                )
            ]
            chargeInfobody = ChargeInfo(
                charge_mode="postPaid"
            )
            request.body = CreateClusterReqV2(
                node_groups=listNodeGroupsbody,
                log_collection=1,
                tags=listTagsbody,
                template_id="mgmt_control_data_separated_v2",
                mrs_ecs_default_agency="MRS_ECS_DEFAULT_AGENCY",
                node_root_password="your password",
                login_mode="PASSWORD",
                manager_admin_password="your password",
                safe_mode="KERBEROS",
                availability_zone="",
                components="Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
                subnet_name="subnet",
                subnet_id="1f8c5ca6-1f66-4096-bb00-baf175954f6e",
                vpc_name="vpc-37cd",
                region="",
                charge_info=chargeInfobody,
                cluster_type="CUSTOM",
                cluster_name="mrs_jdRU_dm02",
                cluster_version="MRS 3.1.0"
            )
            response = client.create_cluster(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
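    The examples repeat the same Tag list construction. A one-line illustrative helper (not part of the SDK) that builds it from a plain dict:

        # Illustrative helper, assuming the Tag class from the examples above.
        def make_tags(tag_map):
            """Build the tags list from a {key: value} dict."""
            return [Tag(key=k, value=v) for k, v in tag_map.items()]

        listTagsbody = make_tags({"aaa": "111", "bbb": "222"})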
  • Create an analysis cluster of version MRS 3.1.0, containing one Master node group with 2 nodes, one Core node group with 3 nodes, and one Task node group with 3 nodes. Auto scaling runs every Monday from 12:00 to 13:00. The initial Hive configuration hive.union.data.type.incompatible.enable is changed to true, and dfs.replication is changed to 4.

    package main
    
    import (
        "fmt"
        "os" // required by os.Getenv below

        "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
        mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2"
        "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/model"
        region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	var listConfigsComponentConfigs = []model.Config{
            {
                Key: "hive.union.data.type.incompatible.enable",
                Value: "true",
                ConfigFileName: "hive-site.xml",
            },
            {
                Key: "dfs.replication",
                Value: "4",
                ConfigFileName: "hdfs-site.xml",
            },
        }
    	var listComponentConfigsbody = []model.ComponentConfig{
            {
                ComponentName: "Hive",
                Configs: &listConfigsComponentConfigs,
            },
        }
    	var listNodesExecScripts = []string{
            "master_node_default_group",
    	    "core_node_analysis_group",
    	    "task_node_analysis_group",
        }
    	parametersExecScripts:= ""
    	activeMasterExecScripts:= false
    	var listExecScriptsAutoScalingPolicy = []model.ScaleScript{
            {
                Name: "test",
                Uri: "s3a://obs-mrstest/bootstrap/basic_success.sh",
                Parameters: &parametersExecScripts,
                Nodes: listNodesExecScripts,
                ActiveMaster: &activeMasterExecScripts,
                FailAction: model.GetScaleScriptFailActionEnum().CONTINUE,
                ActionStage: model.GetScaleScriptActionStageEnum().BEFORE_SCALE_OUT,
            },
        }
    	comparisonOperatorTrigger:= "GTOE"
    	triggerRules := &model.Trigger{
    		MetricName: "YARNAppRunning",
    		MetricValue: "100",
    		ComparisonOperator: &comparisonOperatorTrigger,
    		EvaluationPeriods: int32(1),
    	}
    	descriptionRules:= ""
    	var listRulesAutoScalingPolicy = []model.Rule{
            {
                Name: "default-expand-1",
                Description: &descriptionRules,
                AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_OUT,
                CoolDownMinutes: int32(5),
                ScalingAdjustment: int32(1),
                Trigger: triggerRules,
            },
        }
    	var listEffectiveDaysResourcesPlans = []model.ResourcesPlanEffectiveDays{
            model.GetResourcesPlanEffectiveDaysEnum().MONDAY,
        }
    	var listResourcesPlansAutoScalingPolicy = []model.ResourcesPlan{
            {
                PeriodType: "daily",
                StartTime: "12:00",
                EndTime: "13:00",
                MinCapacity: int32(2),
                MaxCapacity: int32(3),
                EffectiveDays: &listEffectiveDaysResourcesPlans,
            },
        }
    	autoScalingPolicyNodeGroups := &model.AutoScalingPolicy{
    		AutoScalingEnable: true,
    		MinCapacity: int32(0),
    		MaxCapacity: int32(1),
    		ResourcesPlans: &listResourcesPlansAutoScalingPolicy,
    		Rules: &listRulesAutoScalingPolicy,
    		ExecScripts: &listExecScriptsAutoScalingPolicy,
    	}
    	dataVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeCountNodeGroups2:= int32(1)
    	var listNodeGroupsbody = []model.NodeGroupV2{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(2),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups2,
                DataVolume: dataVolumeNodeGroups2,
                DataVolumeCount: &dataVolumeCountNodeGroups,
            },
            {
                GroupName: "core_node_analysis_group",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups1,
                DataVolume: dataVolumeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
            },
            {
                GroupName: "task_node_analysis_group",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups,
                DataVolume: dataVolumeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups2,
                AutoScalingPolicy: autoScalingPolicyNodeGroups,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "tag1",
                Value: "111",
            },
            {
                Key: "tag2",
                Value: "222",
            },
        }
    	chargeInfobody := &model.ChargeInfo{
    		ChargeMode: "postPaid",
    	}
    	logCollectionCreateClusterReqV2:= model.GetCreateClusterReqV2LogCollectionEnum().E_1
    	mrsEcsDefaultAgencyCreateClusterReqV2:= "MRS_ECS_DEFAULT_AGENCY"
    	nodeRootPasswordCreateClusterReqV2:= "your password"
    	subnetIdCreateClusterReqV2:= "1f8c5ca6-1f66-4096-bb00-baf175954f6e"
    	request.Body = &model.CreateClusterReqV2{
    		ComponentConfigs: &listComponentConfigsbody,
    		NodeGroups: listNodeGroupsbody,
    		LogCollection: &logCollectionCreateClusterReqV2,
    		Tags: &listTagsbody,
    		MrsEcsDefaultAgency: &mrsEcsDefaultAgencyCreateClusterReqV2,
    		NodeRootPassword: &nodeRootPasswordCreateClusterReqV2,
    		LoginMode: "PASSWORD",
    		ManagerAdminPassword: "your password",
    		SafeMode: "KERBEROS",
    		AvailabilityZone: "",
    		Components: "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Flink,Oozie,Ranger,Tez",
    		SubnetName: "subnet",
    		SubnetId: &subnetIdCreateClusterReqV2,
    		VpcName: "vpc-37cd",
    		Region: "",
    		ChargeInfo: chargeInfobody,
    		ClusterType: "ANALYSIS",
    		ClusterName: "mrs_DyJA_dm",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Create a streaming cluster of version MRS 3.1.0, containing one Master node group with 2 nodes, one Core node group with 3 nodes, and one Task node group with 0 nodes. Auto scaling runs every Monday from 12:00 to 13:00.

    package main
    
    import (
        "fmt"
        "os" // required by os.Getenv below

        "github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
        mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2"
        "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/model"
        region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	comparisonOperatorTrigger:= "LTOE"
    	triggerRules := &model.Trigger{
    		MetricName: "StormSlotAvailablePercentage",
    		MetricValue: "100",
    		ComparisonOperator: &comparisonOperatorTrigger,
    		EvaluationPeriods: int32(1),
    	}
    	descriptionRules:= ""
    	var listRulesAutoScalingPolicy = []model.Rule{
            {
                Name: "default-expand-1",
                Description: &descriptionRules,
                AdjustmentType: model.GetRuleAdjustmentTypeEnum().SCALE_OUT,
                CoolDownMinutes: int32(5),
                ScalingAdjustment: int32(1),
                Trigger: triggerRules,
            },
        }
    	var listEffectiveDaysResourcesPlans = []model.ResourcesPlanEffectiveDays{
            model.GetResourcesPlanEffectiveDaysEnum().MONDAY,
        }
    	var listResourcesPlansAutoScalingPolicy = []model.ResourcesPlan{
            {
                PeriodType: "daily",
                StartTime: "12:00",
                EndTime: "13:00",
                MinCapacity: int32(2),
                MaxCapacity: int32(3),
                EffectiveDays: &listEffectiveDaysResourcesPlans,
            },
        }
    	autoScalingPolicyNodeGroups := &model.AutoScalingPolicy{
    		AutoScalingEnable: true,
    		MinCapacity: int32(0),
    		MaxCapacity: int32(1),
    		ResourcesPlans: &listResourcesPlansAutoScalingPolicy,
    		Rules: &listRulesAutoScalingPolicy,
    	}
    	dataVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeCountNodeGroups2:= int32(1)
    	var listNodeGroupsbody = []model.NodeGroupV2{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(2),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups2,
                DataVolume: dataVolumeNodeGroups2,
                DataVolumeCount: &dataVolumeCountNodeGroups,
            },
            {
                GroupName: "core_node_streaming_group",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups1,
                DataVolume: dataVolumeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
            },
            {
                GroupName: "task_node_streaming_group",
                NodeNum: int32(0),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups,
                DataVolume: dataVolumeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups2,
                AutoScalingPolicy: autoScalingPolicyNodeGroups,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "tag1",
                Value: "111",
            },
            {
                Key: "tag2",
                Value: "222",
            },
        }
    	chargeInfobody := &model.ChargeInfo{
    		ChargeMode: "postPaid",
    	}
    	logCollectionCreateClusterReqV2:= model.GetCreateClusterReqV2LogCollectionEnum().E_1
    	mrsEcsDefaultAgencyCreateClusterReqV2:= "MRS_ECS_DEFAULT_AGENCY"
    	nodeRootPasswordCreateClusterReqV2:= "your password"
    	subnetIdCreateClusterReqV2:= "1f8c5ca6-1f66-4096-bb00-baf175954f6e"
    	request.Body = &model.CreateClusterReqV2{
    		NodeGroups: listNodeGroupsbody,
    		LogCollection: &logCollectionCreateClusterReqV2,
    		Tags: &listTagsbody,
    		MrsEcsDefaultAgency: &mrsEcsDefaultAgencyCreateClusterReqV2,
    		NodeRootPassword: &nodeRootPasswordCreateClusterReqV2,
    		LoginMode: "PASSWORD",
    		ManagerAdminPassword: "your password",
    		SafeMode: "KERBEROS",
    		AvailabilityZone: "",
    		Components: "Storm,Kafka,Flume,Ranger",
    		SubnetName: "subnet",
    		SubnetId: &subnetIdCreateClusterReqV2,
    		VpcName: "vpc-37cd",
    		Region: "",
    		ChargeInfo: chargeInfobody,
    		ClusterType: "STREAMING",
    		ClusterName: "mrs_Dokle_dm",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Create a mixed cluster with cluster version MRS 3.1.0. It contains one Master node group with 2 nodes; two Core node groups with 3 nodes each; and two Task node groups, one with 1 node and the other with 0 nodes.

    package main
    
import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/region"
)
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	dataVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups3 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups3 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeNodeGroups4 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups4 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeCountNodeGroups2:= int32(1)
    	dataVolumeCountNodeGroups3:= int32(1)
    	dataVolumeCountNodeGroups4:= int32(1)
    	var listNodeGroupsbody = []model.NodeGroupV2{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(2),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups4,
                DataVolume: dataVolumeNodeGroups4,
                DataVolumeCount: &dataVolumeCountNodeGroups,
            },
            {
                GroupName: "core_node_streaming_group",
                NodeNum: int32(3),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups3,
                DataVolume: dataVolumeNodeGroups3,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
            },
            {
                GroupName: "core_node_analysis_group",
                NodeNum: int32(3),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups2,
                DataVolume: dataVolumeNodeGroups2,
                DataVolumeCount: &dataVolumeCountNodeGroups2,
            },
            {
                GroupName: "task_node_analysis_group",
                NodeNum: int32(1),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups1,
                DataVolume: dataVolumeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups3,
            },
            {
                GroupName: "task_node_streaming_group",
                NodeNum: int32(0),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups,
                DataVolume: dataVolumeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups4,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "tag1",
                Value: "111",
            },
            {
                Key: "tag2",
                Value: "222",
            },
        }
    	chargeInfobody := &model.ChargeInfo{
    		ChargeMode: "postPaid",
    	}
    	logCollectionCreateClusterReqV2:= model.GetCreateClusterReqV2LogCollectionEnum().E_1
    	mrsEcsDefaultAgencyCreateClusterReqV2:= "MRS_ECS_DEFAULT_AGENCY"
    	nodeRootPasswordCreateClusterReqV2:= "your password"
    	subnetIdCreateClusterReqV2:= "1f8c5ca6-1f66-4096-bb00-baf175954f6e"
    	request.Body = &model.CreateClusterReqV2{
    		NodeGroups: listNodeGroupsbody,
    		LogCollection: &logCollectionCreateClusterReqV2,
    		Tags: &listTagsbody,
    		MrsEcsDefaultAgency: &mrsEcsDefaultAgencyCreateClusterReqV2,
    		NodeRootPassword: &nodeRootPasswordCreateClusterReqV2,
    		LoginMode: "PASSWORD",
    		ManagerAdminPassword: "your password",
    		SafeMode: "KERBEROS",
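		// AvailabilityZone and Region are left empty here as placeholders;
		// set them to real values for your environment (see the endpoint list).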
    		AvailabilityZone: "",
    		Components: "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
    		SubnetName: "subnet",
    		SubnetId: &subnetIdCreateClusterReqV2,
    		VpcName: "vpc-37cd",
    		Region: "",
    		ChargeInfo: chargeInfobody,
    		ClusterType: "MIXED",
    		ClusterName: "mrs_onmm_dm",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Create a custom cluster with co-deployed management and control roles, cluster version MRS 3.1.0. It contains one Master node group with 3 nodes and two Core node groups, one with 3 nodes and the other with 1 node.

    package main
    
import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/region"
)
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
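	// AssignedRoles syntax: "RoleName" deploys the role on every node in the
	// group; "RoleName:1,2" deploys it only on the group's first and second nodes.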
    	var listAssignedRolesNodeGroups = []string{
            "NodeManager",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "FlinkResource",
        }
    	dataVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups1 = []string{
            "DataNode",
    	    "NodeManager",
    	    "RegionServer",
    	    "Flume:1",
    	    "Broker",
    	    "Supervisor",
    	    "Logviewer",
    	    "HBaseIndexer",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "HSBroker:1,2",
    	    "ThriftServer",
    	    "ThriftServer1",
    	    "RESTServer",
    	    "FlinkResource",
        }
    	dataVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups2 = []string{
            "OMSServer:1,2",
    	    "SlapdServer:1,2",
    	    "KerberosServer:1,2",
    	    "KerberosAdmin:1,2",
    	    "quorumpeer:1,2,3",
    	    "NameNode:2,3",
    	    "Zkfc:2,3",
    	    "JournalNode:1,2,3",
    	    "ResourceManager:2,3",
    	    "JobHistoryServer:2,3",
    	    "DBServer:1,3",
    	    "Hue:1,3",
    	    "LoaderServer:1,3",
    	    "MetaStore:1,2,3",
    	    "WebHCat:1,2,3",
    	    "HiveServer:1,2,3",
    	    "HMaster:2,3",
    	    "MonitorServer:1,2",
    	    "Nimbus:1,2",
    	    "UI:1,2",
    	    "JDBCServer2x:1,2,3",
    	    "JobHistory2x:2,3",
    	    "SparkResource2x:1,2,3",
    	    "oozie:2,3",
    	    "LoadBalancer:2,3",
    	    "TezUI:1,3",
    	    "TimelineServer:3",
    	    "RangerAdmin:1,2",
    	    "UserSync:2",
    	    "TagSync:2",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "HSConsole:2,3",
    	    "FlinkResource:1,2,3",
    	    "DataNode:1,2,3",
    	    "NodeManager:1,2,3",
    	    "IndexServer2x:1,2",
    	    "ThriftServer:1,2,3",
    	    "RegionServer:1,2,3",
    	    "ThriftServer1:1,2,3",
    	    "RESTServer:1,2,3",
    	    "Broker:1,2,3",
    	    "Supervisor:1,2,3",
    	    "Logviewer:1,2,3",
    	    "Flume:1,2,3",
    	    "HSBroker:1,2,3",
        }
    	dataVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeCountNodeGroups2:= int32(1)
    	var listNodeGroupsbody = []model.NodeGroupV2{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(3),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups2,
                DataVolume: dataVolumeNodeGroups2,
                DataVolumeCount: &dataVolumeCountNodeGroups,
                AssignedRoles: &listAssignedRolesNodeGroups2,
            },
            {
                GroupName: "node_group_1",
                NodeNum: int32(3),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups1,
                DataVolume: dataVolumeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
                AssignedRoles: &listAssignedRolesNodeGroups1,
            },
            {
                GroupName: "node_group_2",
                NodeNum: int32(1),
                NodeSize: "Sit3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups,
                DataVolume: dataVolumeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups2,
                AssignedRoles: &listAssignedRolesNodeGroups,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "tag1",
                Value: "111",
            },
            {
                Key: "tag2",
                Value: "222",
            },
        }
    	chargeInfobody := &model.ChargeInfo{
    		ChargeMode: "postPaid",
    	}
    	logCollectionCreateClusterReqV2:= model.GetCreateClusterReqV2LogCollectionEnum().E_1
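	// Template "mgmt_control_combined_v2" co-deploys management and control
	// roles on the Master nodes (CUSTOM clusters only).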
    	templateIdCreateClusterReqV2:= "mgmt_control_combined_v2"
    	mrsEcsDefaultAgencyCreateClusterReqV2:= "MRS_ECS_DEFAULT_AGENCY"
    	nodeRootPasswordCreateClusterReqV2:= "your password"
    	subnetIdCreateClusterReqV2:= "1f8c5ca6-1f66-4096-bb00-baf175954f6e"
    	request.Body = &model.CreateClusterReqV2{
    		NodeGroups: listNodeGroupsbody,
    		LogCollection: &logCollectionCreateClusterReqV2,
    		Tags: &listTagsbody,
    		TemplateId: &templateIdCreateClusterReqV2,
    		MrsEcsDefaultAgency: &mrsEcsDefaultAgencyCreateClusterReqV2,
    		NodeRootPassword: &nodeRootPasswordCreateClusterReqV2,
    		LoginMode: "PASSWORD",
    		ManagerAdminPassword: "your password",
    		SafeMode: "KERBEROS",
    		AvailabilityZone: "",
    		Components: "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
    		SubnetName: "subnet",
    		SubnetId: &subnetIdCreateClusterReqV2,
    		VpcName: "vpc-37cd",
    		Region: "",
    		ChargeInfo: chargeInfobody,
    		ClusterType: "CUSTOM",
    		ClusterName: "mrs_heshe_dm",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Create a custom cluster with separated management and control roles, cluster version MRS 3.1.0. It contains one Master node group with 5 nodes and one Core node group with 3 nodes.

    package main
    
import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/region"
)
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	var listAssignedRolesNodeGroups = []string{
            "DataNode",
    	    "NodeManager",
    	    "RegionServer",
    	    "Flume:1",
    	    "Broker",
    	    "Supervisor",
    	    "Logviewer",
    	    "HBaseIndexer",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "HSBroker:1,2",
    	    "ThriftServer",
    	    "ThriftServer1",
    	    "RESTServer",
    	    "FlinkResource",
        }
    	dataVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups1 = []string{
            "OMSServer:1,2",
    	    "SlapdServer:3,4",
    	    "KerberosServer:3,4",
    	    "KerberosAdmin:3,4",
    	    "quorumpeer:3,4,5",
    	    "NameNode:4,5",
    	    "Zkfc:4,5",
    	    "JournalNode:1,2,3,4,5",
    	    "ResourceManager:4,5",
    	    "JobHistoryServer:4,5",
    	    "DBServer:3,5",
    	    "Hue:1,2",
    	    "LoaderServer:1,2",
    	    "MetaStore:1,2,3,4,5",
    	    "WebHCat:1,2,3,4,5",
    	    "HiveServer:1,2,3,4,5",
    	    "HMaster:4,5",
    	    "MonitorServer:1,2",
    	    "Nimbus:1,2",
    	    "UI:1,2",
    	    "JDBCServer2x:1,2,3,4,5",
    	    "JobHistory2x:4,5",
    	    "SparkResource2x:1,2,3,4,5",
    	    "oozie:1,2",
    	    "LoadBalancer:1,2",
    	    "TezUI:1,2",
    	    "TimelineServer:5",
    	    "RangerAdmin:1,2",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "HSConsole:1,2",
    	    "FlinkResource:1,2,3,4,5",
    	    "DataNode:1,2,3,4,5",
    	    "NodeManager:1,2,3,4,5",
    	    "IndexServer2x:1,2",
    	    "ThriftServer:1,2,3,4,5",
    	    "RegionServer:1,2,3,4,5",
    	    "ThriftServer1:1,2,3,4,5",
    	    "RESTServer:1,2,3,4,5",
    	    "Broker:1,2,3,4,5",
    	    "Supervisor:1,2,3,4,5",
    	    "Logviewer:1,2,3,4,5",
    	    "Flume:1,2,3,4,5",
    	    "HBaseIndexer:1,2,3,4,5",
    	    "TagSync:1",
    	    "UserSync:1",
        }
    	dataVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeCountNodeGroups1:= int32(1)
    	var listNodeGroupsbody = []model.NodeGroupV2{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(5),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups1,
                DataVolume: dataVolumeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups,
                AssignedRoles: &listAssignedRolesNodeGroups1,
            },
            {
                GroupName: "node_group_1",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups,
                DataVolume: dataVolumeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
                AssignedRoles: &listAssignedRolesNodeGroups,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "aaa",
                Value: "111",
            },
            {
                Key: "bbb",
                Value: "222",
            },
        }
    	chargeInfobody := &model.ChargeInfo{
    		ChargeMode: "postPaid",
    	}
    	logCollectionCreateClusterReqV2:= model.GetCreateClusterReqV2LogCollectionEnum().E_1
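	// Template "mgmt_control_separated_v2" deploys management and control
	// roles on separate Master nodes.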
    	templateIdCreateClusterReqV2:= "mgmt_control_separated_v2"
    	mrsEcsDefaultAgencyCreateClusterReqV2:= "MRS_ECS_DEFAULT_AGENCY"
    	nodeRootPasswordCreateClusterReqV2:= "your password"
    	subnetIdCreateClusterReqV2:= "1f8c5ca6-1f66-4096-bb00-baf175954f6e"
    	request.Body = &model.CreateClusterReqV2{
    		NodeGroups: listNodeGroupsbody,
    		LogCollection: &logCollectionCreateClusterReqV2,
    		Tags: &listTagsbody,
    		TemplateId: &templateIdCreateClusterReqV2,
    		MrsEcsDefaultAgency: &mrsEcsDefaultAgencyCreateClusterReqV2,
    		NodeRootPassword: &nodeRootPasswordCreateClusterReqV2,
    		LoginMode: "PASSWORD",
    		ManagerAdminPassword: "your password",
    		SafeMode: "KERBEROS",
    		AvailabilityZone: "",
    		Components: "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
    		SubnetName: "subnet",
    		SubnetId: &subnetIdCreateClusterReqV2,
    		VpcName: "vpc-37cd",
    		Region: "",
    		ChargeInfo: chargeInfobody,
    		ClusterType: "CUSTOM",
    		ClusterName: "mrs_jdRU_dm01",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Create a custom cluster with separated management, control, and data roles, cluster version MRS 3.1.0. It contains one Master node group with 9 nodes and four Core node groups with 3 nodes each.

    package main
    
import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	mrs "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/mrs/v2/region"
)
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := mrs.NewMrsClient(
            mrs.MrsClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateClusterRequest{}
    	var listAssignedRolesNodeGroups = []string{
            "Broker",
    	    "Supervisor",
    	    "Logviewer",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
        }
    	dataVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups1 = []string{
            "KerberosClient",
    	    "SlapdClient",
    	    "meta",
        }
    	dataVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups1 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups2 = []string{
            "HBaseIndexer",
    	    "SolrServer[3]",
    	    "EsNode[2]",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "SolrServerAdmin:1,2",
        }
    	dataVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups2 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups3 = []string{
            "DataNode",
    	    "NodeManager",
    	    "RegionServer",
    	    "Flume:1",
    	    "GraphServer",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "HSBroker:1,2",
        }
    	dataVolumeNodeGroups3 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups3 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	var listAssignedRolesNodeGroups4 = []string{
            "OMSServer:1,2",
    	    "SlapdServer:5,6",
    	    "KerberosServer:5,6",
    	    "KerberosAdmin:5,6",
    	    "quorumpeer:5,6,7,8,9",
    	    "NameNode:3,4",
    	    "Zkfc:3,4",
    	    "JournalNode:5,6,7",
    	    "ResourceManager:8,9",
    	    "JobHistoryServer:8",
    	    "DBServer:8,9",
    	    "Hue:8,9",
    	    "FlinkResource:3,4",
    	    "LoaderServer:3,5",
    	    "MetaStore:8,9",
    	    "WebHCat:5",
    	    "HiveServer:8,9",
    	    "HMaster:8,9",
    	    "FTP-Server:3,4",
    	    "MonitorServer:3,4",
    	    "Nimbus:8,9",
    	    "UI:8,9",
    	    "JDBCServer2x:8,9",
    	    "JobHistory2x:8,9",
    	    "SparkResource2x:5,6,7",
    	    "oozie:4,5",
    	    "EsMaster:7,8,9",
    	    "LoadBalancer:8,9",
    	    "TezUI:5,6",
    	    "TimelineServer:5",
    	    "RangerAdmin:4,5",
    	    "UserSync:5",
    	    "TagSync:5",
    	    "KerberosClient",
    	    "SlapdClient",
    	    "meta",
    	    "HSBroker:5",
    	    "HSConsole:3,4",
    	    "FlinkResource:3,4",
        }
    	dataVolumeNodeGroups4 := &model.Volume{
    		Type: "SAS",
    		Size: int32(600),
    	}
    	rootVolumeNodeGroups4 := &model.Volume{
    		Type: "SAS",
    		Size: int32(480),
    	}
    	dataVolumeCountNodeGroups:= int32(1)
    	dataVolumeCountNodeGroups1:= int32(1)
    	dataVolumeCountNodeGroups2:= int32(1)
    	dataVolumeCountNodeGroups3:= int32(1)
    	dataVolumeCountNodeGroups4:= int32(1)
    	var listNodeGroupsbody = []model.NodeGroupV2{
            {
                GroupName: "master_node_default_group",
                NodeNum: int32(9),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups4,
                DataVolume: dataVolumeNodeGroups4,
                DataVolumeCount: &dataVolumeCountNodeGroups,
                AssignedRoles: &listAssignedRolesNodeGroups4,
            },
            {
                GroupName: "node_group_1",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups3,
                DataVolume: dataVolumeNodeGroups3,
                DataVolumeCount: &dataVolumeCountNodeGroups1,
                AssignedRoles: &listAssignedRolesNodeGroups3,
            },
            {
                GroupName: "node_group_2",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups2,
                DataVolume: dataVolumeNodeGroups2,
                DataVolumeCount: &dataVolumeCountNodeGroups2,
                AssignedRoles: &listAssignedRolesNodeGroups2,
            },
            {
                GroupName: "node_group_3",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups1,
                DataVolume: dataVolumeNodeGroups1,
                DataVolumeCount: &dataVolumeCountNodeGroups3,
                AssignedRoles: &listAssignedRolesNodeGroups1,
            },
            {
                GroupName: "node_group_4",
                NodeNum: int32(3),
                NodeSize: "rc3.4xlarge.4.linux.bigdata",
                RootVolume: rootVolumeNodeGroups,
                DataVolume: dataVolumeNodeGroups,
                DataVolumeCount: &dataVolumeCountNodeGroups4,
                AssignedRoles: &listAssignedRolesNodeGroups,
            },
        }
    	var listTagsbody = []model.Tag{
            {
                Key: "aaa",
                Value: "111",
            },
            {
                Key: "bbb",
                Value: "222",
            },
        }
    	chargeInfobody := &model.ChargeInfo{
    		ChargeMode: "postPaid",
    	}
    	logCollectionCreateClusterReqV2:= model.GetCreateClusterReqV2LogCollectionEnum().E_1
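	// Template "mgmt_control_data_separated_v2" separates management,
	// control, and data roles across node groups.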
    	templateIdCreateClusterReqV2:= "mgmt_control_data_separated_v2"
    	mrsEcsDefaultAgencyCreateClusterReqV2:= "MRS_ECS_DEFAULT_AGENCY"
    	nodeRootPasswordCreateClusterReqV2:= "your password"
    	subnetIdCreateClusterReqV2:= "1f8c5ca6-1f66-4096-bb00-baf175954f6e"
    	request.Body = &model.CreateClusterReqV2{
    		NodeGroups: listNodeGroupsbody,
    		LogCollection: &logCollectionCreateClusterReqV2,
    		Tags: &listTagsbody,
    		TemplateId: &templateIdCreateClusterReqV2,
    		MrsEcsDefaultAgency: &mrsEcsDefaultAgencyCreateClusterReqV2,
    		NodeRootPassword: &nodeRootPasswordCreateClusterReqV2,
    		LoginMode: "PASSWORD",
    		ManagerAdminPassword: "your password",
    		SafeMode: "KERBEROS",
    		AvailabilityZone: "",
    		Components: "Hadoop,Spark2x,HBase,Hive,Hue,Loader,Kafka,Storm,Flume,Flink,Oozie,Ranger,Tez",
    		SubnetName: "subnet",
    		SubnetId: &subnetIdCreateClusterReqV2,
    		VpcName: "vpc-37cd",
    		Region: "",
    		ChargeInfo: chargeInfobody,
    		ClusterType: "CUSTOM",
    		ClusterName: "mrs_jdRU_dm02",
    		ClusterVersion: "MRS 3.1.0",
    	}
    	response, err := client.CreateCluster(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    

For SDK code examples in more programming languages, see the Code Examples tab in API Explorer, which can automatically generate the corresponding SDK code examples.

Status Codes

Status Code    Description
200            Normal response.

Error Codes

See Error Codes.
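
The following is a minimal error-handling sketch, not taken from the MRS documentation: it assumes that failed calls from huaweicloud-sdk-go-v3 return a *sdkerr.ServiceResponseError from the core sdkerr package, and that the StatusCode, ErrorCode, ErrorMessage, and RequestId fields carry the values documented above; the placeholder error in main is for illustration only.

package main

import (
	"fmt"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/sdkerr"
)

// handleCreateClusterErr prints the HTTP status, MRS error code, message,
// and request ID carried by a failed API call. Assumption: the SDK wraps
// non-2xx responses in *sdkerr.ServiceResponseError.
func handleCreateClusterErr(err error) {
	if respErr, ok := err.(*sdkerr.ServiceResponseError); ok {
		fmt.Printf("status=%d errorCode=%s message=%s requestId=%s\n",
			respErr.StatusCode, respErr.ErrorCode, respErr.ErrorMessage, respErr.RequestId)
		return
	}
	// Not a service response error (for example, a connection failure).
	fmt.Println(err)
}

func main() {
	// Placeholder error for demonstration; in practice pass the err
	// returned by client.CreateCluster(request).
	handleCreateClusterErr(fmt.Errorf("placeholder error"))
}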

Related Documents