Querying a Cluster List

Updated on 2022-08-12 GMT+08:00

Function

This API queries the list of clusters created by a user. It is incompatible with Sahara.

URI

  • Format

    GET /v1.1/{project_id}/cluster_infos?pageSize={page_size}&currentPage={current_page}&clusterState={cluster_state}&tags={tags}

  • Parameter description

    Table 1 URI parameter description

    | Parameter | Mandatory | Description |
    |---|---|---|
    | project_id | Yes | Project ID. For details on how to obtain the project ID, see Obtaining a Project ID. |
    | pageSize | No | Maximum number of clusters displayed on a page. Value range: 1 to 2147483646. |
    | currentPage | No | Current page number. |
    | clusterState | No | Filters the cluster list by status. Valid values: starting, running, terminated, failed, abnormal, terminating, frozen, scaling-out, scaling-in. |
    | tags | No | Searches for clusters by tag. Multiple tags are combined with AND. Format: tags=k1*v1,k2*v2,k3*v3. If some tag values are null, the format is tags=k1,k2,k3*v3. |
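The query string above can be assembled programmatically. The sketch below is illustrative only (it is not part of any MRS SDK, and the endpoint host is a placeholder); it shows the pageSize/currentPage/clusterState parameters and the k*v tag-encoding rule, including the key-only form used when a tag value is null:

```python
from urllib.parse import urlencode

def build_cluster_list_url(endpoint, project_id, page_size=10,
                           current_page=1, cluster_state=None, tags=None):
    """Build the cluster-list request URL.

    `tags` is a dict: each pair is joined as key*value and pairs are
    comma-separated; a None value emits just the key, matching the
    tags=k1,k2,k3*v3 form described above.
    """
    params = {"pageSize": page_size, "currentPage": current_page}
    if cluster_state is not None:
        params["clusterState"] = cluster_state
    if tags:
        params["tags"] = ",".join(
            k if v is None else f"{k}*{v}" for k, v in tags.items()
        )
    return f"{endpoint}/v1.1/{project_id}/cluster_infos?{urlencode(params)}"
```

For example, `build_cluster_list_url("https://mrs.example.com", "0123", cluster_state="running", tags={"env": "prod"})` yields a URL filtered to running clusters tagged env*prod (urlencode percent-escapes the `*` and `,` separators).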

Request

None.

Response

Table 2 Response parameter description

| Parameter | Type | Description |
|---|---|---|
| clusterTotal | String | Total number of clusters in the list. |
| clusters | Array | Cluster parameters. For details, see Table 3. |
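Because clusterTotal reports the total match count while each call returns at most pageSize clusters, callers typically page through results with currentPage. A minimal sketch, assuming a hypothetical `fetch_page` callable standing in for the HTTP request (note that clusterTotal arrives as a string and must be parsed):

```python
def iter_clusters(fetch_page, page_size=100):
    """Yield every cluster, advancing currentPage until
    page_size * page covers clusterTotal.

    fetch_page(page_size, current_page) must return the parsed
    response body (a dict with "clusterTotal" and "clusters").
    """
    page = 1
    while True:
        body = fetch_page(page_size, page)
        yield from body["clusters"]
        if page * page_size >= int(body["clusterTotal"]):
            return
        page += 1
```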

Table 3 clusters parameter description

| Parameter | Type | Description |
|---|---|---|
| clusterId | String | Cluster ID. |
| clusterName | String | Cluster name. |
| masterNodeNum | String | Number of Master nodes deployed in a cluster. |
| coreNodeNum | String | Number of Core nodes deployed in a cluster. |
| totalNodeNum | String | Total number of nodes deployed in a cluster. |
| clusterState | String | Cluster status. Valid values: starting, running, terminated, failed, abnormal, terminating, frozen, scaling-out, scaling-in. |
| createAt | String | Cluster creation time, a 10-digit timestamp. |
| updateAt | String | Cluster update time, a 10-digit timestamp. |
| billingType | String | Cluster billing mode. |
| dataCenter | String | Region where the cluster works. |
| vpc | String | VPC name. |
| vpcId | String | VPC ID. |
| fee | String | Cluster creation fee, which is automatically calculated. |
| hadoopVersion | String | Hadoop version. |
| masterNodeSize | String | Instance specifications of a Master node. |
| coreNodeSize | String | Instance specifications of a Core node. |
| componentList | Array | Component list. For details, see Table 4. |
| externalIp | String | External IP address. |
| externalAlternateIp | String | Backup external IP address. |
| internalIp | String | Internal IP address. |
| deploymentId | String | Cluster deployment ID. |
| remark | String | Cluster remarks. |
| orderId | String | Cluster creation order ID. |
| azId | String | AZ ID. |
| masterNodeProductId | String | Product ID of a Master node. |
| masterNodeSpecId | String | Specification ID of a Master node. |
| coreNodeProductId | String | Product ID of a Core node. |
| coreNodeSpecId | String | Specification ID of a Core node. |
| azName | String | AZ name. |
| instanceId | String | Instance ID. |
| vnc | String | URI for remotely logging in to an ECS. |
| tenantId | String | Project ID. |
| volumeSize | Integer | Disk storage space. |
| volumeType | String | Disk type. |
| subnetId | String | Subnet ID. |
| clusterType | String | Cluster type. |
| subnetName | String | Subnet name. |
| securityGroupsId | String | Security group ID. |
| slaveSecurityGroupsId | String | Security group ID of non-Master nodes. Currently, one MRS cluster uses only one security group, so this field is deprecated. |
| stageDesc | String | Cluster operation progress description. For the possible values, see the stage lists below this table. |
| mrsManagerFinish | Boolean | Whether MRS Manager installation finished during cluster creation. true: finished; false: not finished. |
| safeMode | Integer | Running mode of an MRS cluster. 0: normal cluster; 1: security cluster. |
| clusterVersion | String | Cluster version. |
| nodePublicCertName | String | Name of the key file. |
| masterNodeIp | String | IP address of a Master node. |
| privateIpFirst | String | Preferred private IP address. |
| errorInfo | String | Error message. |
| chargingStartTime | String | Start time of billing. |
| logCollection | Integer | Whether to collect logs when cluster installation fails. 0: do not collect; 1: collect. |
| taskNodeGroups | List<NodeGroup> | List of Task nodes. For details, see Table 5. |
| nodeGroups | List<NodeGroup> | List of Master, Core, and Task nodes. For details, see Table 5. |
| masterDataVolumeType | String | Data disk storage type of the Master node. Currently, SATA, SAS, and SSD are supported. |
| masterDataVolumeSize | Integer | Data disk storage space of the Master node. Value range: 100 GB to 32,000 GB. To increase data storage capacity, you can add disks when creating a cluster. |
| masterDataVolumeCount | Integer | Number of data disks of the Master node. The value can only be 1. |
| coreDataVolumeType | String | Data disk storage type of the Core node. Currently, SATA, SAS, and SSD are supported. |
| coreDataVolumeSize | Integer | Data disk storage space of the Core node. Value range: 100 GB to 32,000 GB. To increase data storage capacity, you can add disks when creating a cluster. |
| coreDataVolumeCount | Integer | Number of data disks of the Core node. Value range: 1 to 10. |

stageDesc values during cluster installation:
  • Verifying cluster parameters
  • Applying for cluster resources
  • Creating VMs
  • Initializing VMs
  • Installing MRS Manager
  • Deploying the cluster
  • Cluster installation failed

stageDesc values during cluster scale-out:
  • Preparing for scale-out
  • Creating VMs
  • Initializing VMs
  • Adding nodes to the cluster
  • Scale-out failed

stageDesc values during cluster scale-in:
  • Preparing for scale-in
  • Decommissioning instance
  • Deleting VMs
  • Deleting nodes from the cluster
  • Scale-in failed

If the cluster installation, scale-out, or scale-in fails, stageDesc displays the failure cause. For details, see Table 8.

Table 4 componentList parameter description

| Parameter | Type | Description |
|---|---|---|
| componentId | String | Component ID. For example, the componentId of Hadoop is MRS 3.1.0_001 in MRS 3.1.0 and MRS 2.1.1_001 in MRS 2.1.1. |
| componentName | String | Component name. |
| componentVersion | String | Component version. |
| componentDesc | String | Component description. |

Table 5 NodeGroup parameter description

| Parameter | Type | Description |
|---|---|---|
| groupName | String | Node group name. |
| nodeNum | Integer | Number of nodes. Value range: 0 to 500. The minimum number of Master and Core nodes is 1, and the total number of Core and Task nodes cannot exceed 500. |
| nodeSize | String | Instance specifications of a node. |
| nodeSpecId | String | Instance specification ID of a node. |
| nodeProductId | String | Instance product ID of a node. |
| vmProductId | String | VM product ID of a node. |
| vmSpecCode | String | VM specifications of a node. |
| rootVolumeSize | Integer | System disk size of a node. This parameter is not configurable; the default value is 40 GB. |
| rootVolumeProductId | String | System disk product ID of a node. |
| rootVolumeType | String | System disk type of a node. |
| rootVolumeResourceSpecCode | String | System disk product specifications of a node. |
| rootVolumeResourceType | String | System disk product type of a node. |
| dataVolumeType | String | Data disk storage type of a node. Valid values: SATA (common I/O), SAS (high I/O), SSD (ultra-high I/O). |
| dataVolumeCount | Integer | Number of data disks of a node. |
| dataVolumeSize | Integer | Data disk storage space of a node. |
| dataVolumeProductId | String | Data disk product ID of a node. |
| dataVolumeResourceSpecCode | String | Data disk product specifications of a node. |
| dataVolumeResourceType | String | Data disk product type of a node. |

Example

  • Example request

    None.

  • Example response
    {
        "clusterTotal": 1,
        "clusters": [
            {
                "clusterId": "bc134369-294c-42b7-a707-b2036ba38524",
                "clusterName": "mrs_D0zW",
                "masterNodeNum": "2",
                "coreNodeNum": "3",
                "clusterState": "terminated",
                "createAt": "1498272043",
                "updateAt": "1498636753",
                "chargingStartTime": "1498273733",
                "logCollection": 1,
                "billingType": "Metered",
                "dataCenter": "my-kualalumpur-1",
                "vpc": null,
                "duration": "0",
                "fee": null,
                "hadoopVersion": null,
                "masterNodeSize": null,
                "coreNodeSize": null,
                "componentList": [{
    			"id": null,
    			"componentId": "MRS 3.1.0_001",
    			"componentName": "Hadoop",
    			"componentVersion": "3.1.1",
    			"external_datasources": null,
    			"componentDesc": "A distributed data processing framework for big data sets",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_002",
    			"componentName": "HBase",
    			"componentVersion": "2.2.3",
    			"external_datasources": null,
    			"componentDesc": "HBase is a column-based distributed storage system that features high reliability, performance, and scalability",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_003",
    			"componentName": "Hive",
    			"componentVersion": "3.1.0",
    			"external_datasources": null,
    			"componentDesc": "A data warehouse software that facilitates query and management of big data sets stored in distributed storage systems"
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_004",
    			"componentName": "Spark2x",
    			"componentVersion": "2.4.5",
    			"external_datasources": null,
    			"componentDesc": "Spark2x is a fast general-purpose engine for large-scale data processing. It is developed based on the open-source Spark2.x version.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_005",
    			"componentName": "Tez",
    			"componentVersion": "0.9.2",
    			"external_datasources": null,
    			"componentDesc": "An application framework which allows for a complex directed-acyclic-graph of tasks for processing data.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_006",
    			"componentName": "Flink",
    			"componentVersion": "1.12.0",
    			"external_datasources": null,
    			"componentDesc": "Flink is an open-source message processing system that integrates streams in batches.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_008",
    			"componentName": "Kafka",
    			"componentVersion": "2.11-2.4.0",
    			"external_datasources": null,
    			"componentDesc": "Kafka is a distributed message release and subscription system.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_009",
    			"componentName": "Flume",
    			"componentVersion": "1.9.0",
    			"external_datasources": null,
    			"componentDesc": "Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_014",
    			"componentName": "Hue",
    			"componentVersion": "4.7.0",
    			"external_datasources": null,
    			"componentDesc": "Apache Hadoop UI",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_015",
    			"componentName": "Oozie",
    			"componentVersion": "5.1.0",
    			"external_datasources": null,
    			"componentDesc": "A Hadoop job scheduling system",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_022",
    			"componentName": "Ranger",
    			"componentVersion": "2.0.0",
    			"external_datasources": null,
    			"componentDesc": "Ranger is a centralized framework based on the Hadoop platform. It provides permission control interfaces such as monitoring, operation, and management interfaces for complex data.",
    			"componentDescEn": null
    		}],
                "externalIp": null,
                "externalAlternateIp": null,
                "internalIp": null,
                "deploymentId": null,
                "remark": "",
                "orderId": null,
                "azId": null,
                "masterNodeProductId": null,
                "masterNodeSpecId": null,
                "coreNodeProductId": null,
                "coreNodeSpecId": null,
                "azName": "my-kualalumpur-1a",
                "instanceId": null,
                "vnc": "v2/5a3314075bfa49b9ae360f4ecd333695/servers/e2cda891-232e-4703-995e-3b1406add01d/action",
                "tenantId": null,
                "volumeSize": 0,
                "volumeType": null,
                "subnetId": null,
                "subnetName": null,
                "securityGroupsId": null,
                "slaveSecurityGroupsId": null,
                "mrsManagerFinish": false,
                "stageDesc": "Installing MRS Manager",
                "safeMode": 0,
                "clusterVersion": null,
                "nodePublicCertName": null,
                "masterNodeIp": "unknown",
                "privateIpFirst": null,
                "errorInfo": "",
                "clusterType": 0,
                "nodeGroups": [
                   {
                     "groupName": "master_node_default_group",
                     "nodeNum": 1,
                     "nodeSize": "s3.xlarge.2.linux.bigdata",
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7",
                     "vmProductId": "",
                     "vmSpecCode": null,
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c",
                     "rootVolumeSize": 480,
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                     "rootVolumeType": "SATA",
                     "rootVolumeResourceSpecCode": "",
                     "rootVolumeResourceType": "",
                     "dataVolumeType": "SATA",
                     "dataVolumeCount": 1,
                     "dataVolumeSize": 600,
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                     "dataVolumeResourceSpecCode": "",
                     "dataVolumeResourceType": "",
                   },
                   {
                     "groupName": "core_node_analysis_group",
                     "nodeNum": 1,
                     "nodeSize": "s3.xlarge.2.linux.bigdata",
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7",
                     "vmProductId": "",
                     "vmSpecCode": null,
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c",
                     "rootVolumeSize": 480,
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                     "rootVolumeType": "SATA",
                     "rootVolumeResourceSpecCode": "",
                     "rootVolumeResourceType": "",
                     "dataVolumeType": "SATA",
                     "dataVolumeCount": 1,
                     "dataVolumeSize": 600,
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                     "dataVolumeResourceSpecCode": "",
                     "dataVolumeResourceType": "",
                   },
                   {
                     "groupName": "task_node_analysis_group",
                     "nodeNum": 1,
                     "nodeSize": "s3.xlarge.2.linux.bigdata",
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7",
                     "vmProductId": "",
                     "vmSpecCode": null,
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": "", 
                   } 
    
                ],
                "taskNodeGroups": [
                   {
                     "groupName": "task_node_default_group",
                     "nodeNum": 1,
                     "nodeSize": "s3.xlarge.2.linux.bigdata",
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7",
                     "vmProductId": "",
                     "vmSpecCode": null,
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c",
                     "rootVolumeSize": 480,
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                     "rootVolumeType": "SATA",
                     "rootVolumeResourceSpecCode": "",
                     "rootVolumeResourceType": "",
                     "dataVolumeType": "SATA",
                     "dataVolumeCount": 1,
                     "dataVolumeSize": 600,
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                     "dataVolumeResourceSpecCode": "",
                     "dataVolumeResourceType": "",
                   }
                ],
             "masterDataVolumeType": "SATA",
             "masterDataVolumeSize": 600,
             "masterDataVolumeCount": 1,
             "coreDataVolumeType": "SATA",
             "coreDataVolumeSize": 600,
             "coreDataVolumeCount": 1,
            }
        ]
    }
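To show how the nodeGroups entries in a response compose, here is a small illustrative helper (not part of any MRS SDK) that totals nodes and data-disk capacity for one entry of the clusters array, where each group contributes nodeNum * dataVolumeCount * dataVolumeSize GB:

```python
def summarize_cluster(cluster):
    """Return name, state, node count, and total data-disk capacity
    (GB) for one parsed entry of the `clusters` array."""
    groups = cluster.get("nodeGroups", [])
    return {
        "clusterName": cluster["clusterName"],
        "clusterState": cluster["clusterState"],
        "nodes": sum(g["nodeNum"] for g in groups),
        "dataDiskGB": sum(
            g["nodeNum"] * g["dataVolumeCount"] * g["dataVolumeSize"]
            for g in groups
        ),
    }
```

Applied to the example response above (three node groups, each with one node carrying one 600 GB data disk), this reports 3 nodes and 1800 GB of data-disk capacity.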

Status Code

Table 6 describes the status code of this API.

Table 6 Status code

| Status Code | Description |
|---|---|
| 200 | The cluster list has been queried successfully. |

For the description about error status codes, see Status Codes.
