Querying a Cluster List
Function
This API is used to query a list of clusters created by a user. This API is incompatible with Sahara.
URI
- Format
GET /v1.1/{project_id}/cluster_infos?pageSize={page_size}&currentPage={current_page}&clusterState={cluster_state}&tags={tags}&enterpriseProjectId={enterpriseProjectId}
- Parameter description

Table 1 URI parameter description

| Parameter | Mandatory | Description |
|---|---|---|
| project_id | Yes | Project ID. For details on how to obtain the project ID, see Obtaining a Project ID. |
| pageSize | No | Maximum number of clusters displayed on a page. Value range: 1 to 2147483646. |
| currentPage | No | Current page number. |
| clusterState | No | Filters the cluster list by cluster status. Valid values: starting (clusters being started), running, terminated, failed, abnormal, terminating (clusters being terminated), frozen, scaling-out (clusters being scaled out), and scaling-in (clusters being scaled in). |
| tags | No | Searches for clusters by tag. If multiple tags are specified, the relationship between them is AND. The format is tags=k1*v1,k2*v2,k3*v3. When the values of some tags are null, the format is tags=k1,k2,k3*v3. |
| enterpriseProjectId | No | Enterprise project ID. |
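Putting the URI parameters together, the query string can be assembled with the Python standard library. This is a minimal sketch; the project ID and tag values are hypothetical, and the resulting path still has to be sent to the service endpoint with valid authentication:

```python
from urllib.parse import urlencode

# Hypothetical values for illustration only
project_id = "0123456789abcdef0123456789abcdef"
params = {
    "pageSize": 10,            # 1 to 2147483646
    "currentPage": 1,
    "clusterState": "running",
    "tags": "env*prod,owner*data-team",  # k*v pairs, combined with AND
}

# urlencode percent-encodes '*' and ',' in the tags value; the server
# decodes these back to the documented tags=k1*v1,k2*v2 format.
path = "/v1.1/{}/cluster_infos?{}".format(project_id, urlencode(params))
print(path)
```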
Request
None.
Response

Table 2 Response parameter description
| Parameter | Type | Description |
|---|---|---|
| clusterTotal | Integer | Total number of clusters in the list |
| clusters | Array | Cluster parameters. For details, see Table 3. |
Table 3 Cluster parameters

| Parameter | Type | Description |
|---|---|---|
| clusterId | String | Cluster ID. |
| clusterName | String | Cluster name. |
| masterNodeNum | String | Number of Master nodes deployed in a cluster. |
| coreNodeNum | String | Number of Core nodes deployed in a cluster. |
| totalNodeNum | String | Total number of nodes deployed in a cluster. |
| clusterState | String | Cluster status. Valid values: starting, running, terminated, failed, abnormal, terminating, frozen, scaling-out, and scaling-in. |
| createAt | String | Cluster creation time, expressed as a 10-digit Unix timestamp in seconds. |
| updateAt | String | Cluster update time, expressed as a 10-digit Unix timestamp in seconds. |
| billingType | String | Cluster billing mode. |
| dataCenter | String | Cluster work region. |
| vpc | String | VPC name. |
| duration | String | Cluster subscription duration. |
| fee | String | Cluster creation fee, which is automatically calculated. |
| hadoopVersion | String | Hadoop version. |
| masterNodeSize | String | Instance specifications of a Master node. |
| coreNodeSize | String | Instance specifications of a Core node. |
| componentList | Array | Component list. For details, see Table 4. |
| externalIp | String | External IP address. |
| externalAlternateIp | String | Backup external IP address. |
| internalIp | String | Internal IP address. |
| deploymentId | String | Cluster deployment ID. |
| remark | String | Cluster remarks. |
| orderId | String | Cluster creation order ID. |
| azId | String | AZ ID. |
| masterNodeProductId | String | Product ID of a Master node. |
| masterNodeSpecId | String | Specification ID of a Master node. |
| coreNodeProductId | String | Product ID of a Core node. |
| coreNodeSpecId | String | Specification ID of a Core node. |
| azName | String | AZ name. |
| instanceId | String | Instance ID. |
| vnc | String | URI for remotely logging in to an ECS. |
| tenantId | String | Project ID. |
| volumeSize | Integer | Disk storage space. |
| volumeType | String | Disk type. |
| subnetId | String | Subnet ID. |
| enterpriseProjectId | String | Enterprise project ID. |
| clusterType | String | Cluster type. |
| subnetName | String | Subnet name. |
| securityGroupsId | String | Security group ID. |
| slaveSecurityGroupsId | String | Security group ID of a non-Master node. Currently, one MRS cluster uses only one security group. Therefore, this field has been discarded. |
| stageDesc | String | Cluster operation progress description, covering cluster installation, scale-out, and scale-in progress. If the installation, scale-out, or scale-in fails, stageDesc displays the failure cause. For details, see Table 8. |
| mrsManagerFinish | Boolean | Whether MRS Manager installation is finished during cluster creation. Possible values: true (finished) and false (not finished). |
| safeMode | String | Running mode of an MRS cluster. 0: normal cluster; 1: security cluster. |
| clusterVersion | String | Cluster version. |
| nodePublicCertName | String | Name of the key file. |
| masterNodeIp | String | IP address of a Master node. |
| privateIpFirst | String | Preferred private IP address. |
| errorInfo | String | Error message. |
| chargingStartTime | String | Start time of billing. |
| logCollection | Integer | Whether to collect logs when cluster installation fails. 0: do not collect; 1: collect. |
| taskNodeGroups | List<NodeGroup> | List of Task nodes. For more parameter description, see Table 5. |
| nodeGroups | List<NodeGroup> | List of Master, Core and Task nodes. For more parameter description, see Table 5. |
| masterDataVolumeType | String | Data disk storage type of the Master node. Currently, SATA, SAS, and SSD are supported. |
| masterDataVolumeSize | Integer | Data disk storage space of the Master node. To increase data storage capacity, you can add disks at the same time when creating a cluster. Value range: 100 GB to 32,000 GB. |
| masterDataVolumeCount | Integer | Number of data disks of the Master node. The value can only be 1. |
| coreDataVolumeType | String | Data disk storage type of the Core node. Currently, SATA, SAS, and SSD are supported. |
| coreDataVolumeSize | Integer | Data disk storage space of the Core node. To increase data storage capacity, you can add disks at the same time when creating a cluster. Value range: 100 GB to 32,000 GB |
| coreDataVolumeCount | Integer | Number of data disks of the Core node. Value range: 1 to 10 |
| periodType | Integer | Whether the subscription type is yearly or monthly. 0: monthly; 1: yearly. |
Table 5 NodeGroup parameter description

| Parameter | Type | Description |
|---|---|---|
| groupName | String | Node group name |
| nodeNum | Integer | Number of nodes. The value ranges from 0 to 500. The minimum number of Master and Core nodes is 1 and the total number of Core and Task nodes cannot exceed 500. |
| nodeSize | String | Instance specifications of a node |
| nodeSpecId | String | Instance specification ID of a node |
| nodeProductId | String | Instance product ID of a node |
| vmProductId | String | VM product ID of a node |
| vmSpecCode | String | VM specifications of a node |
| rootVolumeSize | Integer | System disk size of a node. This parameter is not configurable and its default value is 40 GB. |
| rootVolumeProductId | String | System disk product ID of a node |
| rootVolumeType | String | System disk type of a node |
| rootVolumeResourceSpecCode | String | System disk product specifications of a node |
| rootVolumeResourceType | String | System disk product type of a node |
| dataVolumeType | String | Data disk storage type of a node. Currently, SATA, SAS, and SSD are supported. |
| dataVolumeCount | Integer | Number of data disks of a node |
| dataVolumeSize | String | Data disk storage space of a node |
| dataVolumeProductId | String | Data disk product ID of a node |
| dataVolumeResourceSpecCode | String | Data disk product specifications of a node |
| dataVolumeResourceType | String | Data disk product type of a node |
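As a worked example of the disk fields above, the raw data-disk capacity of a node group is nodeNum × dataVolumeCount × dataVolumeSize. A small sketch, with values taken from the example response later in this article:

```python
# Subset of a NodeGroup entry; values taken from the example response
group = {
    "groupName": "core_node_analysis_group",
    "nodeNum": 1,
    "dataVolumeType": "SATA",
    "dataVolumeCount": 1,   # 1 to 10 for Core nodes
    "dataVolumeSize": 100,  # GB per disk
}

# Total raw data-disk capacity across the group, in GB
total_gb = group["nodeNum"] * group["dataVolumeCount"] * group["dataVolumeSize"]
print(group["groupName"], total_gb, "GB")  # → core_node_analysis_group 100 GB
```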
Example
- Example request

  GET /v1.1/{project_id}/cluster_infos?pageSize={page_size}&currentPage={current_page}&clusterState={cluster_state}&tags={tags}&enterpriseProjectId={enterpriseProjectId}
- Example response
{ "clusterTotal": 1, "clusters": [ { "clusterId": "bc134369-294c-42b7-a707-b2036ba38524", "clusterName": "mrs_D0zW", "masterNodeNum": "2", "coreNodeNum": "3", "clusterState": "terminated", "createAt": "1498272043", "updateAt": "1498636753", "chargingStartTime": "1498273733", "logCollection": 1, "billingType": "Metered", "dataCenter": "cn-north-1", "vpc": null, "duration": "0", "fee": null, "hadoopVersion": null, "masterNodeSize": null, "coreNodeSize": null, "componentList": [{ "id": null, "componentId": "MRS 3.0.5_001", "componentName": "Hadoop", "componentVersion": "3.1.1", "external_datasources": null, "componentDesc": "A distributed data processing framework for big data sets", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_002", "componentName": "HBase", "componentVersion": "2.2.3", "external_datasources": null, "componentDesc": "HBase is a column-based distributed storage system that features high reliability, performance, and scalability", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_003", "componentName": "Hive", "componentVersion": "3.1.0", "external_datasources": null, "componentDesc": "A data warehouse software that facilitates query and management of big data sets stored in distributed storage systems", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_004", "componentName": "Spark2x", "componentVersion": "2.4.5", "external_datasources": null, "componentDesc": "Spark2x is a fast general-purpose engine for large-scale data processing. It is developed based on the open-source Spark2.x version.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_005", "componentName": "Tez", "componentVersion": "0.9.2", "external_datasources": null, "componentDesc": "An application framework which allows for a complex directed-acyclic-graph of tasks for processing data.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_006", "componentName": "Flink", "componentVersion": "1.12.0", "external_datasources": null, "componentDesc": "Flink is an open-source message processing system that integrates streams in batches.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_008", "componentName": "Kafka", "componentVersion": "2.11-2.4.0", "external_datasources": null, "componentDesc": "Kafka is a distributed message release and subscription system.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_009", "componentName": "Flume", "componentVersion": "1.9.0", "external_datasources": null, "componentDesc": "Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_013", "componentName": "Loader", "componentVersion": "1.99.3", "external_datasources": null, "componentDesc": "Loader is a tool designed for efficiently transmitting a large amount of data between Apache Hadoop and structured databases (such as relational databases).", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_014", "componentName": "Hue", "componentVersion": "4.7.0", "external_datasources": null, "componentDesc": "Apache Hadoop UI", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_015", "componentName": "Oozie", "componentVersion": "5.1.0", "external_datasources": null, "componentDesc": "A Hadoop job scheduling system", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.0.5_022",
"componentName": "Ranger", "componentVersion": "2.0.0", "external_datasources": null, "componentDesc": "Ranger is a centralized framework based on the Hadoop platform. It provides permission control interfaces such as monitoring, operation, and management interfaces for complex data.", "componentDescEn": null }], "externalIp": null, "externalAlternateIp": null, "internalIp": null, "deploymentId": null, "remark": "", "orderId": null, "azId": null, "masterNodeProductId": null, "masterNodeSpecId": null, "coreNodeProductId": null, "coreNodeSpecId": null, "azName": "az1.dc1", "instanceId": null, "vnc": "v2/5a3314075bfa49b9ae360f4ecd333695/servers/e2cda891-232e-4703-995e-3b1406add01d/action", "tenantId": null, "volumeSize": 0, "volumeType": null, "subnetId": null, "subnetName": null, "securityGroupsId": null, "slaveSecurityGroupsId": null, "mrsManagerFinish": false, "stageDesc": "Installing MRS Manager", "safeMode": 0, "clusterVersion": null, "nodePublicCertName": null, "masterNodeIp": "unknown", "privateIpFirst": null, "errorInfo": "", "clusterType": 0, "enterpriseProjectId": "0", "nodeGroups": [ { "groupName": "master_node_default_group", "nodeNum": 1, "nodeSize": "s3.xlarge.2.linux.bigdata", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 40, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 100, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "" }, { "groupName": "core_node_analysis_group", "nodeNum": 1, "nodeSize": "s3.xlarge.2.linux.bigdata", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 40,
"rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 100, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "" }, { "groupName": "task_node_analysis_group", "nodeNum": 1, "nodeSize": "s3.xlarge.2.linux.bigdata", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 40, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 100, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "" } ], "taskNodeGroups": [ { "groupName": "task_node_default_group", "nodeNum": 1, "nodeSize": "s3.xlarge.2.linux.bigdata", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 40, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 100, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "" } ], "masterDataVolumeType": "SATA", "masterDataVolumeSize": 200, "masterDataVolumeCount": 1, "coreDataVolumeType": "SATA", "coreDataVolumeSize": 100, "coreDataVolumeCount": 1, "periodType": 0 } ] }
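The response body is plain JSON. The sketch below consumes a trimmed-down copy of the example response above, extracting the cluster name, state, and creation time (createAt holds a Unix timestamp in seconds):

```python
import json
from datetime import datetime, timezone

# Trimmed-down copy of the example response above
body = """
{
  "clusterTotal": 1,
  "clusters": [
    {
      "clusterId": "bc134369-294c-42b7-a707-b2036ba38524",
      "clusterName": "mrs_D0zW",
      "clusterState": "terminated",
      "createAt": "1498272043"
    }
  ]
}
"""

data = json.loads(body)
print("total clusters:", data["clusterTotal"])
for c in data["clusters"]:
    # createAt is a string holding a Unix timestamp in seconds
    created = datetime.fromtimestamp(int(c["createAt"]), tz=timezone.utc)
    print(c["clusterName"], c["clusterState"],
          created.strftime("%Y-%m-%d %H:%M:%S UTC"))
```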
Status Code
Table 6 describes the status code of this API.
| Status Code | Description |
|---|---|
| 200 | The cluster list information has been successfully queried. |
For the description about error status codes, see Status Codes.