Updated on 2022-12-14 GMT+08:00

Querying Cluster Details

Function

This API is used to query details about a specified cluster. This API is incompatible with Sahara.

URI

  • Format

    GET /v1.1/{project_id}/cluster_infos/{cluster_id}

  • Parameter description

    Table 1 URI parameters

    | Parameter | Mandatory | Type | Description |
    |---|---|---|---|
    | project_id | Yes | String | Project ID. For details about how to obtain the project ID, see Obtaining a Project ID. |
    | cluster_id | Yes | String | Cluster ID. For details about how to obtain the value, see Obtaining the MRS Cluster Information. |
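As a quick illustration, the request can be issued with any HTTP client once the URI is assembled; the endpoint host and the X-Auth-Token header below are assumptions based on common API conventions, not values taken from this section:

```python
# Hypothetical helper: assemble the GET /v1.1/{project_id}/cluster_infos/{cluster_id} URI.
def build_cluster_info_url(endpoint: str, project_id: str, cluster_id: str) -> str:
    return f"{endpoint}/v1.1/{project_id}/cluster_infos/{cluster_id}"

# IDs taken from the example response in this section; the endpoint host is a placeholder.
url = build_cluster_info_url(
    "https://mrs.example.com",
    "3f99e3319a8943ceb15c584f3325d064",
    "bdb064ff-2855-4624-90d5-e9a6376abd6e",
)
# A real call would also pass an auth header, for example:
# requests.get(url, headers={"X-Auth-Token": token})
```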

Request Parameters

None

Response Parameters

Table 2 Response body parameter

| Parameter | Type | Description |
|---|---|---|
| cluster | Cluster object | Cluster parameters. For details, see Table 3. |

Table 3 Response parameters

Parameter

Type

Description

clusterId

String

Cluster ID

clusterName

String

Cluster name

masterNodeNum

String

Number of Master nodes deployed in a cluster

coreNodeNum

String

Number of Core nodes deployed in a cluster

totalNodeNum

String

Total number of nodes deployed in a cluster

clusterState

String

Cluster status. Valid values include:
  • starting: The cluster is being started.
  • running: The cluster is running.
  • terminated: The cluster has been terminated.
  • failed: The cluster failed.
  • abnormal: The cluster is abnormal.
  • terminating: The cluster is being terminated.
  • frozen: The cluster has been frozen.
  • scaling-out: The cluster is being scaled out.
  • scaling-in: The cluster is being scaled in.
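Several of these values (starting, terminating, scaling-out, scaling-in) are transitional, so a client typically polls this API until the state settles. A minimal sketch, assuming the caller supplies a fetch_details callable that wraps the HTTP request and returns the parsed response body:

```python
import time

# Illustrative set of non-transitional states, derived from the list above.
STABLE_STATES = {"running", "terminated", "failed", "abnormal", "frozen"}

def wait_for_stable_state(fetch_details, interval_s=30, max_polls=60):
    """Poll cluster details until clusterState leaves its transitional values."""
    for _ in range(max_polls):
        state = fetch_details()["cluster"]["clusterState"]
        if state in STABLE_STATES:
            return state
        time.sleep(interval_s)
    raise TimeoutError("cluster did not reach a stable state")
```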

createAt

String

Cluster creation time, which is a 10-digit Unix timestamp (in seconds)

updateAt

String

Cluster update time, which is a 10-digit Unix timestamp (in seconds)
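Because createAt and updateAt are second-resolution epoch timestamps returned as strings, converting them to readable times is a one-liner; the helper name is illustrative:

```python
from datetime import datetime, timezone

def parse_cluster_time(ts: str) -> datetime:
    # createAt/updateAt are 10-digit Unix timestamps (seconds) in string form.
    return datetime.fromtimestamp(int(ts), tz=timezone.utc)

# "1487570757" is the createAt value from the example response in this section.
created = parse_cluster_time("1487570757")  # → 2017-02-20 06:05:57 UTC
```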

dataCenter

String

Cluster work region

vpc

String

VPC name

vpcId

String

VPC ID

hadoopVersion

String

Hadoop version

masterNodeSize

String

Instance specifications of a Master node

coreNodeSize

String

Instance specifications of a Core node

componentList

Array

Component list. For details, see Table 4.

externalIp

String

External IP address

externalAlternateIp

String

Backup external IP address

internalIp

String

Internal IP address

deploymentId

String

Cluster deployment ID

remark

String

Cluster remarks

orderId

String

Cluster creation order ID

azId

String

AZ ID

masterNodeProductId

String

Product ID of a Master node

masterNodeSpecId

String

Specification ID of a Master node

coreNodeProductId

String

Product ID of a Core node

coreNodeSpecId

String

Specification ID of a Core node

azName

String

AZ name

azCode

String

AZ name in English

availabilityZoneId

String

Availability zone (AZ) ID

instanceId

String

Instance ID

vnc

String

URI for remotely logging in to an ECS

tenantId

String

Project ID

volumeSize

Integer

Disk storage space

volumeType

String

Disk type.

subnetId

String

Subnet ID

subnetName

String

Subnet name

securityGroupsId

String

Security group ID

slaveSecurityGroupsId

String

Security group ID of a non-Master node. Currently, one MRS cluster uses only one security group, so this field is deprecated. For compatibility, it returns the same value as securityGroupsId.

bootstrapscripts

Array

Bootstrap action script information. For details, see Table 6.

stageDesc

String

Cluster operation progress description.

The cluster installation progress includes:
  • Verifying cluster parameters: Cluster parameters are being verified.
  • Applying for cluster resources: Cluster resources are being applied for.
  • Creating VMs: The VMs are being created.
  • Initializing VMs: The VMs are being initialized.
  • Installing MRS Manager: MRS Manager is being installed.
  • Deploying the cluster: The cluster is being deployed.
  • Cluster installation failed: Failed to install the cluster.
The cluster scale-out progress includes:
  • Preparing for scale-out: Cluster scale-out is being prepared.
  • Creating VMs: The VMs are being created.
  • Initializing VMs: The VMs are being initialized.
  • Adding nodes to the cluster: The nodes are being added to the cluster.
  • Scale-out failed: Failed to scale out the cluster.
The cluster scale-in progress includes:
  • Preparing for scale-in: Cluster scale-in is being prepared.
  • Decommissioning instance: The instance is being decommissioned.
  • Deleting VMs: The VMs are being deleted.
  • Deleting nodes from the cluster: The nodes are being deleted from the cluster.
  • Scale-in failed: Failed to scale in the cluster.

If the cluster installation, scale-out, or scale-in fails, stageDesc will display the failure cause.

isMrsManagerFinish

Boolean

Whether MRS Manager installation is finished during cluster creation.

  • true: MRS Manager installation is finished.
  • false: MRS Manager installation is not finished.

safeMode

Integer

Running mode of an MRS cluster

  • 0: Normal cluster
  • 1: Security cluster

clusterVersion

String

Cluster version

nodePublicCertName

String

Name of the public key file

masterNodeIp

String

IP address of a Master node

privateIpFirst

String

Preferred private IP address

errorInfo

String

Error message

tags

String

Tag information
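The example response shows tags serialized as "k1=v1,k2=v2,k3=v3"; a small illustrative parser for that format:

```python
def parse_tags(tags: str) -> dict:
    """Split the "k1=v1,k2=v2" tag string into a dict; empty input yields {}."""
    if not tags:
        return {}
    return dict(pair.split("=", 1) for pair in tags.split(","))
```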

clusterType

Integer

Cluster type

logCollection

Integer

Whether to collect logs when cluster installation fails

  • 0: Do not collect.
  • 1: Collect.

taskNodeGroups

List<NodeGroup>

List of Task nodes. For details, see Table 5.

nodeGroups

List<NodeGroup>

List of Master, Core, and Task nodes. For details, see Table 5.

masterDataVolumeType

String

Data disk storage type of the Master node. Currently, SATA, SAS, and SSD are supported.

masterDataVolumeSize

Integer

Data disk storage space of the Master node. To increase data storage capacity, you can add disks at the same time when creating a cluster.

Value range: 100 GB to 32,000 GB

masterDataVolumeCount

Integer

Number of data disks of the Master node.

The value can be set to 1 only.

coreDataVolumeType

String

Data disk storage type of the Core node. Currently, SATA, SAS, and SSD are supported.

coreDataVolumeSize

Integer

Data disk storage space of the Core node. To increase data storage capacity, you can add disks at the same time when creating a cluster.

Value range: 100 GB to 32,000 GB

coreDataVolumeCount

Integer

Number of data disks of the Core node.

Value range: 1 to 10

scale

String

Node change status. If this parameter is left blank, the cluster nodes are not changed.

Possible values:

  • scaling-out: The cluster is being scaled out.
  • scaling-in: The cluster is being scaled in.
  • scaling-error: The cluster is in the running state, but the last scale-in, scale-out, or specification scale-up operation failed.
  • scaling-up: The Master node specifications are being scaled up.
  • scaling_up_first: The standby Master node specifications are being scaled up.
  • scaled_up_first: The standby Master node specifications have been scaled up successfully.
  • scaled-up-success: The Master node specifications have been scaled up successfully.

Table 4 componentList parameters

| Parameter | Type | Description |
|---|---|---|
| componentId | String | Component ID. For example, the componentId of Hadoop is MRS 3.1.0_001. |
| componentName | String | Component name |
| componentVersion | String | Component version |
| componentDesc | String | Component description |

Table 5 NodeGroup parameters

Parameter

Type

Description

GroupName

String

Node group name.

NodeNum

Integer

Number of nodes. The value ranges from 0 to 500. The minimum number of Master and Core nodes is 1, and the total number of Core and Task nodes cannot exceed 500.

NodeSize

String

Instance specifications of a node.

NodeSpecId

String

Instance specification ID of a node

NodeProductId

String

Instance product ID of a node

VmProductId

String

VM product ID of a node

VmSpecCode

String

VM specifications of a node

RootVolumeSize

Integer

System disk size of a node. This parameter is not configurable and its default value is 40 GB.

RootVolumeProductId

String

System disk product ID of a node

RootVolumeType

String

System disk type of a node

RootVolumeResourceSpecCode

String

System disk product specifications of a node

RootVolumeResourceType

String

System disk product type of a node

DataVolumeType

String

Data disk storage type of a node. Currently, SATA, SAS, and SSD are supported.

  • SATA: Common I/O
  • SAS: High I/O
  • SSD: Ultra-high I/O

DataVolumeCount

Integer

Number of data disks of a node.

DataVolumeSize

Integer

Data disk storage space of a node.

DataVolumeProductId

String

Data disk product ID of a node

DataVolumeResourceSpecCode

String

Data disk product specifications of a node

DataVolumeResourceType

String

Data disk product type of a node

Table 6 bootstrapscripts parameters

Parameter

Type

Description

name

String

Name of a bootstrap action script. It must be unique in a cluster.

The value can contain only digits, letters, spaces, hyphens (-), and underscores (_), and cannot start with a space.

The value can contain 1 to 64 characters.

uri

String

Path of the shell script. Set this parameter to an OBS bucket path or a local VM path.

  • OBS bucket path: Enter a script path manually, for example, the path of a public sample script provided by MRS, such as s3a://bootstrap/presto/presto-install.sh. If dualroles is installed, the parameter of the presto-install.sh script is dualroles; if worker is installed, the parameter is worker. Following common Presto practice, you are advised to install dualroles on the active Master nodes and worker on the Core nodes.
  • Local VM path: Enter a script path. The script path must start with a slash (/) and end with .sh.

parameters

String

Bootstrap action script parameters

nodes

Array of strings

Type of a node where the bootstrap action script is executed. The value can be Master, Core, or Task.

active_master

Boolean

Whether the bootstrap action script runs only on active Master nodes.

The default value is false, indicating that the bootstrap action script can run on all Master nodes.

before_component_start

Boolean

Time when the bootstrap action script is executed. Two options are available: before component start and after component start.

The default value is false, indicating that the bootstrap action script is executed after the component is started.

fail_action

String

Whether to continue executing subsequent scripts and creating the cluster after the bootstrap action script fails to execute.

  • continue: Continue to execute subsequent scripts.
  • errorout: Stop the action.
The default value is errorout, indicating that the action is stopped.
NOTE:

You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started regardless of whether the bootstrap action succeeds.

action_stages

Array of strings

Time when the bootstrap action script is executed.

  • BEFORE_COMPONENT_FIRST_START: before initial component starts
  • AFTER_COMPONENT_FIRST_START: after initial component starts
  • BEFORE_SCALE_IN: before scale-in
  • AFTER_SCALE_IN: after scale-in
  • BEFORE_SCALE_OUT: before scale-out
  • AFTER_SCALE_OUT: after scale-out

Example

  • Example request
    GET /v1.1/{project_id}/cluster_infos/{cluster_id}
  • Example response
    {
        "cluster":{
            "clusterId":"bdb064ff-2855-4624-90d5-e9a6376abd6e",
            "clusterName":"c17022001",
            "masterNodeNum":"2",
            "coreNodeNum":"3",
            "clusterState":"scaling-in",
            "createAt":"1487570757",
            "updateAt":"1487668974",
            "billingType":"Metered",
            "dataCenter": "",
            "vpc": "vpc-autotest",        
            "vpcId": "e2978efd-ca12-4058-9332-1ca0bfbab592",        
            "duration":"0",
            "fee":"0",
            "hadoopVersion":"",
            "masterNodeSize":"",
            "coreNodeSize":"",
              "componentList": [{
    			"id": null,
    			"componentId": "MRS 3.1.0_001",
    			"componentName": "Hadoop",
    			"componentVersion": "3.1.1",
    			"external_datasources": null,
    			"componentDesc": "A distributed data processing framework for big data sets",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_002",
    			"componentName": "HBase",
    			"componentVersion": "2.2.3",
    			"external_datasources": null,
    			"componentDesc": "HBase is a column-based distributed storage system that features high reliability, performance, and scalability",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_003",
    			"componentName": "Hive",
    			"componentVersion": "3.1.0",
    			"external_datasources": null,
			"componentDesc": "A data warehouse software that facilitates query and management of big data sets stored in distributed storage systems",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_004",
    			"componentName": "Spark2x",
    			"componentVersion": "2.4.5",
    			"external_datasources": null,
    			"componentDesc": "Spark2x is a fast general-purpose engine for large-scale data processing. It is developed based on the open-source Spark2.x version.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_005",
    			"componentName": "Tez",
    			"componentVersion": "0.9.2",
    			"external_datasources": null,
    			"componentDesc": "An application framework which allows for a complex directed-acyclic-graph of tasks for processing data.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_006",
    			"componentName": "Flink",
    			"componentVersion": "1.12.0",
    			"external_datasources": null,
    			"componentDesc": "Flink is an open-source message processing system that integrates streams in batches.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_008",
    			"componentName": "Kafka",
    			"componentVersion": "2.11-2.4.0",
    			"external_datasources": null,
    			"componentDesc": "Kafka is a distributed message release and subscription system.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_009",
    			"componentName": "Flume",
    			"componentVersion": "1.9.0",
    			"external_datasources": null,
    			"componentDesc": "Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_013",
    			"componentName": "Loader",
    			"componentVersion": "1.99.3",
    			"external_datasources": null,
    			"componentDesc": "Loader is a tool designed for efficiently transmitting a large amount of data between Apache Hadoop and structured databases (such as relational databases).",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_014",
    			"componentName": "Hue",
    			"componentVersion": "4.7.0",
    			"external_datasources": null,
    			"componentDesc": "Apache Hadoop UI",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_015",
    			"componentName": "Oozie",
    			"componentVersion": "5.1.0",
    			"external_datasources": null,
    			"componentDesc": "A Hadoop job scheduling system",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_022",
    			"componentName": "Ranger",
    			"componentVersion": "2.0.0",
    			"external_datasources": null,
    			"componentDesc": "Ranger is a centralized framework based on the Hadoop platform. It provides permission control interfaces such as monitoring, operation, and management interfaces for complex data.",
    			"componentDescEn": null
    		}],
            "externalIp":"100.XXX.XXX.XXX",
            "externalAlternateIp":"100.XXX.XXX.XXX",
            "internalIp":"192.XXX.XXX.XXX",
            "deploymentId":"4ac46ca7-a488-4b91-82c2-e4d7aa9c40c2",
            "remark":"",
            "orderId":"null",
            "azId":"null",
            "masterNodeProductId":"b35cf2d2348a445ca74b32289a160882",
            "masterNodeSpecId":"8ab05e503b4c42abb304e2489560063b",
            "coreNodeProductId":"dc970349d128460e960a0c2b826c427c",
            "coreNodeSpecId":"cdc6035a249a40249312f5ef72a23cd7",
            "azName": "",
            "instanceId":"4ac46ca7-a488-4b91-82c2-e4d7aa9c40c2",
            "vnc":null,
            "tenantId":"3f99e3319a8943ceb15c584f3325d064",
            "volumeSize":600,
            "volumeType":"SATA",
            "subnetId": "6b96eec3-4f8d-4c83-93e2-6ec625001d7c",
            "subnetName":"subnet-ftest",
            "securityGroupsId":"930e34e2-195d-401f-af07-0b64ea6603f8",
            "slaveSecurityGroupsId":"2ef3343e-3477-4a0d-80fe-4d874e4f81b8",
            "stageDesc": "Installing MRS Manager",
            "mrsManagerFinish": false, 
            "safeMode":1,
            "clusterVersion":"MRS 3.1.0",
            "nodePublicCertName":"myp",
            "masterNodeIp":"192.XXX.XXX.XXX",
            "privateIpFirst":"192.XXX.XXX.XXX",
            "errorInfo":null,
            "tags":"k1=v1,k2=v2,k3=v3",
            "clusterType": 0,
            "logCollection": 1,
             "nodeGroups": [ 
                { 
                     "groupName": "master_node_default_group", 
                     "nodeNum": 1, 
                     "nodeSize": "", 
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", 
                     "vmProductId": "", 
                     "vmSpecCode": null, 
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": ""
                   },
                   { 
                     "groupName": "core_node_analysis_group", 
                     "nodeNum": 1, 
                     "nodeSize": "", 
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", 
                     "vmProductId": "", 
                     "vmSpecCode": null, 
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": ""
                   },
                   { 
                     "groupName": "task_node_analysis_group", 
                     "nodeNum": 1, 
                     "nodeSize": "", 
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", 
                     "vmProductId": "", 
                     "vmSpecCode": null, 
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": ""
                   } 
    
                ],
            "taskNodeGroups": [
                {
                   "groupName": "task_node_default_group",
                   "nodeNum": 1,
                   "nodeSize": "",
                   "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7",
                   "vmProductId": "",
                   "vmSpecCode": null,
                   "nodeProductId": "dc970349d128460e960a0c2b826c427c",
                   "rootVolumeSize": 480,
                   "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                   "rootVolumeType": "SATA",
                   "rootVolumeResourceSpecCode": "",
                   "rootVolumeResourceType": "",
                   "dataVolumeType": "SATA",
                   "dataVolumeCount": 1,
                   "dataVolumeSize": 600,
                   "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                   "dataVolumeResourceSpecCode": "",
                   "dataVolumeResourceType": "",
                   "AutoScalingPolicy": null
                   }
                ],
             "masterDataVolumeType": "SATA",
             "masterDataVolumeSize": 600,
             "masterDataVolumeCount": 1,
             "coreDataVolumeType": "SATA",
             "coreDataVolumeSize": 600,
             "coreDataVolumeCount": 1
          }
      }
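Once the response body is parsed as JSON, nested structures such as nodeGroups can be summarized programmatically. A minimal sketch using only fields that appear in the example above (the helper name is illustrative):

```python
def summarize_node_groups(body: dict) -> list:
    """Return (groupName, nodeNum, dataVolumeType, dataVolumeSize) per node group."""
    return [
        (g["groupName"], g["nodeNum"], g["dataVolumeType"], g["dataVolumeSize"])
        for g in body["cluster"].get("nodeGroups", [])
    ]

# Trimmed-down version of the example response above:
body = {"cluster": {"nodeGroups": [
    {"groupName": "master_node_default_group", "nodeNum": 1,
     "dataVolumeType": "SATA", "dataVolumeSize": 600},
]}}
summary = summarize_node_groups(body)
```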

Status Codes

Table 7 describes the status code.

Table 7 Status code

| Status Code | Description |
|---|---|
| 200 | Cluster details have been queried. |

For details, see Status Codes.

Error Codes

See Error Codes.