Querying a Cluster List
Function
This API is used to query a list of clusters created by a user. This API is incompatible with Sahara.
URI
- Format
GET /v1.1/{project_id}/cluster_infos?pageSize={page_size}&currentPage={current_page}&clusterState={cluster_state}&tags={tags}
- Parameter description
Table 1 URI parameter description

Parameter | Mandatory | Description
---|---|---
project_id | Yes | Project ID. For details on how to obtain the project ID, see Obtaining a Project ID.
pageSize | No | Maximum number of clusters displayed on a page. Value range: 1 to 2147483646.
currentPage | No | Current page number.
clusterState | No | Filters the cluster list by cluster status. Valid values: starting (clusters being started), running, terminated, failed, abnormal, terminating (clusters being terminated), frozen, scaling-out (clusters being scaled out), and scaling-in (clusters being scaled in).
tags | No | Filters clusters by tag. If you specify multiple tags, the relationship between them is AND. The format is tags=k1*v1,k2*v2,k3*v3; when the values of some tags are null, the format is tags=k1,k2,k3*v3. See the request sketch below.
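As an illustration only, the following Python sketch assembles a request using the URI format and query parameters above. The endpoint, project ID, and token values are placeholders (assumptions), not values defined by this API; only the path and query-string format come from Table 1.

```python
# Minimal sketch of calling this API from Python (standard library only).
# endpoint, project_id, and token are placeholder values, not part of the API.
import json
import urllib.request

endpoint = "https://mrs.example.com"                 # placeholder service endpoint
project_id = "0123456789abcdef0123456789abcdef"      # placeholder project ID
token = "<IAM token>"                                # placeholder X-Auth-Token value

# Multiple tags are ANDed; a tag whose value is null is written as just the key.
query = "pageSize=10&currentPage=1&clusterState=running&tags=k1*v1,k2"
url = f"{endpoint}/v1.1/{project_id}/cluster_infos?{query}"

request = urllib.request.Request(url, headers={"X-Auth-Token": token})
with urllib.request.urlopen(request) as response:
    body = json.load(response)                       # parsed response body, see Table 2
print(body["clusterTotal"])
```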
Request
None.
Response
Table 2 Response parameter description

Parameter | Type | Description
---|---|---
clusterTotal | String | Total number of clusters in the list.
clusters | Array | Cluster parameters. For details, see Table 3.
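The snippet below is an illustrative sketch of how a client might consume these two top-level fields; only the field names come from Table 2, and `body` is assumed to be the decoded JSON response from the request sketch above.

```python
# Illustrative handling of the top-level response fields (Table 2).
def summarize_clusters(body: dict) -> None:
    total = body.get("clusterTotal")       # total number of clusters in the list
    clusters = body.get("clusters", [])    # array of cluster objects, see Table 3
    print(f"clusterTotal={total}, returned on this page={len(clusters)}")
    for cluster in clusters:
        print(cluster["clusterId"], cluster["clusterName"], cluster["clusterState"])
```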
Table 3 Cluster parameter description

Parameter | Type | Description
---|---|---
clusterId | String | Cluster ID.
clusterName | String | Cluster name.
masterNodeNum | String | Number of Master nodes deployed in a cluster.
coreNodeNum | String | Number of Core nodes deployed in a cluster.
totalNodeNum | String | Total number of nodes deployed in a cluster.
clusterState | String | Cluster status. Valid values include: starting, running, terminated, failed, abnormal, terminating, frozen, scaling-out, and scaling-in.
createAt | String | Cluster creation time, which is a 10-digit Unix timestamp in seconds (see the conversion sketch after this table).
updateAt | String | Cluster update time, which is a 10-digit Unix timestamp in seconds.
dataCenter | String | Cluster work region.
vpc | String | VPC name.
vpcId | String | VPC ID.
hadoopVersion | String | Hadoop version.
masterNodeSize | String | Instance specifications of a Master node.
coreNodeSize | String | Instance specifications of a Core node.
componentList | Array | Component list. For details, see Table 4.
externalIp | String | External IP address.
externalAlternateIp | String | Backup external IP address.
internalIp | String | Internal IP address.
deploymentId | String | Cluster deployment ID.
remark | String | Cluster remarks.
orderId | String | Cluster creation order ID.
azId | String | AZ ID.
masterNodeProductId | String | Product ID of a Master node.
masterNodeSpecId | String | Specification ID of a Master node.
coreNodeProductId | String | Product ID of a Core node.
coreNodeSpecId | String | Specification ID of a Core node.
azName | String | AZ name.
instanceId | String | Instance ID.
vnc | String | URI for remotely logging in to an ECS.
tenantId | String | Project ID.
volumeSize | Integer | Disk storage space.
volumeType | String | Disk type.
subnetId | String | Subnet ID.
clusterType | String | Cluster type.
subnetName | String | Subnet name.
securityGroupsId | String | Security group ID.
slaveSecurityGroupsId | String | Security group ID of a non-Master node. Currently, an MRS cluster uses only one security group, so this field has been deprecated.
stageDesc | String | Description of the cluster operation (installation, scale-out, or scale-in) progress. If the installation, scale-out, or scale-in fails, stageDesc displays the failure cause. For details, see Table 8.
mrsManagerFinish | Boolean | Whether MRS Manager installation is finished during cluster creation.
safeMode | Integer | Running mode of an MRS cluster.
clusterVersion | String | Cluster version.
nodePublicCertName | String | Name of the key file.
masterNodeIp | String | IP address of a Master node.
privateIpFirst | String | Preferred private IP address.
errorInfo | String | Error message.
logCollection | Integer | Whether to collect logs when cluster installation fails.
taskNodeGroups | List<NodeGroup> | List of Task nodes. For details, see Table 5.
nodeGroups | List<NodeGroup> | List of Master, Core, and Task nodes. For details, see Table 5.
masterDataVolumeType | String | Data disk storage type of the Master node. Currently, SATA, SAS, and SSD are supported.
masterDataVolumeSize | Integer | Data disk storage space of the Master node. To increase data storage capacity, you can add disks when creating a cluster. Value range: 100 GB to 32,000 GB.
masterDataVolumeCount | Integer | Number of data disks of the Master node. The value can only be 1.
coreDataVolumeType | String | Data disk storage type of the Core node. Currently, SATA, SAS, and SSD are supported.
coreDataVolumeSize | Integer | Data disk storage space of the Core node. To increase data storage capacity, you can add disks when creating a cluster. Value range: 100 GB to 32,000 GB.
coreDataVolumeCount | Integer | Number of data disks of the Core node. Value range: 1 to 10.
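Because createAt and updateAt are returned as strings holding 10-digit Unix timestamps (seconds), a small conversion helper can make them easier to read. The helper below is purely illustrative; its name is ours, not part of the API.

```python
# Illustrative conversion of the 10-digit string timestamps (createAt, updateAt)
# into timezone-aware datetimes. The helper name is ours, not part of the API.
from datetime import datetime, timezone

def to_datetime(ten_digit_timestamp: str) -> datetime:
    return datetime.fromtimestamp(int(ten_digit_timestamp), tz=timezone.utc)

# For example, to_datetime("1498272043") -> 2017-06-24 02:40:43+00:00
```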
Table 4 componentList parameter description

Parameter | Type | Description
---|---|---
componentId | String | Component ID. For example, the component_id of Hadoop is MRS 3.1.2-LTS.3_001, MRS 3.1.0-LTS.1_001, MRS 2.0.1_001, or MRS 1.8.9_001, depending on the cluster version.
componentName | String | Component name.
componentVersion | String | Component version.
componentDesc | String | Component description.
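For illustration, the following sketch walks the componentList array of each returned cluster using the field names from Table 4; the function itself is not part of the API.

```python
# Illustrative walk over each cluster's componentList (Table 4).
def list_components(body: dict) -> None:
    for cluster in body.get("clusters", []):
        print(f"Components of {cluster['clusterName']}:")
        for component in cluster.get("componentList", []):
            print(f"  {component['componentName']} {component['componentVersion']}"
                  f" ({component['componentId']})")
```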
Table 5 NodeGroup parameter description

Parameter | Type | Description
---|---|---
groupName | String | Node group name.
nodeNum | Integer | Number of nodes. The value ranges from 0 to 500. The minimum number of Master and Core nodes is 1, and the total number of Core and Task nodes cannot exceed 500.
nodeSize | String | Instance specifications of a node.
nodeSpecId | String | Instance specification ID of a node.
nodeProductId | String | Instance product ID of a node.
vmProductId | String | VM product ID of a node.
vmSpecCode | String | VM specifications of a node.
rootVolumeSize | Integer | System disk size of a node. This parameter is not configurable, and its default value is 40 GB.
rootVolumeProductId | String | System disk product ID of a node.
rootVolumeType | String | System disk type of a node.
rootVolumeResourceSpecCode | String | System disk product specifications of a node.
rootVolumeResourceType | String | System disk product type of a node.
dataVolumeType | String | Data disk storage type of a node. Currently, SATA, SAS, and SSD are supported.
dataVolumeCount | Integer | Number of data disks of a node.
dataVolumeSize | Integer | Data disk storage space of a node.
dataVolumeProductId | String | Data disk product ID of a node.
dataVolumeResourceSpecCode | String | Data disk product specifications of a node.
dataVolumeResourceType | String | Data disk product type of a node.
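As a final sketch, the function below summarizes a cluster's nodeGroups array using the field names from Table 5 (group name, node count, and data-disk layout). Only those field names come from this API; the summary logic is illustrative.

```python
# Illustrative summary of a cluster's node groups (Table 5).
def summarize_node_groups(cluster: dict) -> None:
    for group in cluster.get("nodeGroups", []):
        disks = (f"{group['dataVolumeCount']} x {group['dataVolumeSize']} GB "
                 f"{group['dataVolumeType']}")
        print(f"{group['groupName']}: {group['nodeNum']} node(s), data disks: {disks}")
```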
Example
- Example request

  GET /v1.1/{project_id}/cluster_infos?pageSize=10&currentPage=1
- Example response
{ "clusterTotal": 1, "clusters": [ { "clusterId": "bc134369-294c-42b7-a707-b2036ba38524", "clusterName": "mrs_D0zW", "masterNodeNum": "2", "coreNodeNum": "3", "clusterState": "terminated", "createAt": "1498272043", "updateAt": "1498636753", "chargingStartTime": "1498273733", "logCollection": 1, "billingType": "Metered", "dataCenter": "eu-west-0", "vpc": null, "duration": "0", "fee": null, "hadoopVersion": null, "masterNodeSize": null, "coreNodeSize": null, "componentList": [{ "id": null, "componentId": "MRS 3.1.0-LTS.1_001", "componentName": "Hadoop", "componentVersion": "3.1.1", "external_datasources": null, "componentDesc": "A distributed data processing framework for big data sets", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_002", "componentName": "HBase", "componentVersion": "2.2.3", "external_datasources": null, "componentDesc": "HBase is a column-based distributed storage system that features high reliability, performance, and scalability", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_003", "componentName": "Hive", "componentVersion": "3.1.0", "external_datasources": null, "componentDesc": "A data warehouse software that facilitates query and management of big data sets stored in distributed storage systems" "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_004", "componentName": "Spark2x", "componentVersion": "2.4.5", "external_datasources": null, "componentDesc": "Spark2x is a fast general-purpose engine for large-scale data processing. It is developed based on the open-source Spark2.x version.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_005", "componentName": "Tez", "componentVersion": "0.9.2", "external_datasources": null, "componentDesc": "An application framework which allows for a complex directed-acyclic-graph of tasks for processing data.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_006", "componentName": "Flink", "componentVersion": "1.12.0", "external_datasources": null, "componentDesc": "Flink is an open-source message processing system that integrates streams in batches.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_008", "componentName": "Kafka", "componentVersion": "2.11-2.4.0", "external_datasources": null, "componentDesc": "Kafka is a distributed message release and subscription system.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_009", "componentName": "Flume", "componentVersion": "1.9.0", "external_datasources": null, "componentDesc": "Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_014", "componentName": "Hue", "componentVersion": "4.7.0", "external_datasources": null, "componentDesc": "Apache Hadoop UI", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_015", "componentName": "Oozie", "componentVersion": "5.1.0", "external_datasources": null, "componentDesc": "A Hadoop job scheduling system", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0-LTS.1_022", "componentName": "Ranger", "componentVersion": "2.0.0", "external_datasources": null, "componentDesc": "Ranger is a centralized framework based on the Hadoop platform. 
It provides permission control interfaces such as monitoring, operation, and management interfaces for complex data.", "componentDescEn": null }], "externalIp": null, "externalAlternateIp": null, "internalIp": null, "deploymentId": null, "remark": "", "orderId": null, "azId": null, "masterNodeProductId": null, "masterNodeSpecId": null, "coreNodeProductId": null, "coreNodeSpecId": null, "azName": "eu-west-0a", "instanceId": null, "vnc": "v2/5a3314075bfa49b9ae360f4ecd333695/servers/e2cda891-232e-4703-995e-3b1406add01d/action", "tenantId": null, "volumeSize": 0, "volumeType": null, "subnetId": null, "subnetName": null, "securityGroupsId": null, "slaveSecurityGroupsId": null, "mrsManagerFinish": false, "stageDesc": "Installing MRS Manager", "safeMode": 0, "clusterVersion": null, "nodePublicCertName": null, "masterNodeIp": "unknown", "privateIpFirst": null, "errorInfo": "", "clusterType": 0, "nodeGroups": [ { "groupName": "master_node_default_group", "nodeNum": 1, "nodeSize": "s1.xlarge.linux.mrs", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", }, { "groupName": "core_node_analysis_group", "nodeNum": 1, "nodeSize": "s1.xlarge.linux.mrs", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", }, { "groupName": "task_node_analysis_group", "nodeNum": 1, "nodeSize": "s1.xlarge.linux.mrs", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", } ], "taskNodeGroups": [ { "groupName": "task_node_default_group", "nodeNum": 1, "nodeSize": "s1.xlarge.linux.mrs", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", } ], "masterDataVolumeType": "SATA", "masterDataVolumeSize": 600, "masterDataVolumeCount": 1, "coreDataVolumeType": "SATA", "coreDataVolumeSize": 600, "coreDataVolumeCount": 1, } ] }
Status Code
Table 6 describes the status code of this API.
Table 6 Status code

Status Code | Description
---|---
200 | The cluster list information has been successfully queried.
For the description about error status codes, see Status Codes.