Querying a Cluster List
Function
This API is used to query the list of clusters created by a user. It is incompatible with Sahara.
URI
- Format

  GET /v1.1/{project_id}/cluster_infos?pageSize={page_size}&currentPage={current_page}&clusterState={cluster_state}&tags={tags}
- Parameter description
Table 1 URI parameter

Parameter | Mandatory | Type | Description
---|---|---|---
project_id | Yes | String | The project ID. For details about how to obtain the project ID, see Obtaining a Project ID.
Table 2 Query parameters

Parameter | Mandatory | Type | Description
---|---|---|---
pageSize | No | String | Maximum number of clusters displayed on a page. Value range: 1 to 2147483646.
currentPage | No | String | Current page number.
clusterName | No | String | The cluster name.
clusterState | No | String | Filters the cluster list by status. Possible values: existing (existing clusters, excluding terminated ones), history (historical clusters, including terminated clusters and clusters that failed to be terminated, failed to delete VMs, or failed to terminate a database update), starting, running, terminated, failed, abnormal, terminating, frozen, scaling-out, and scaling-in.
tags | No | String | Searches for clusters by tag. If you specify multiple tags, the relationship between them is AND. The format is tags=k1*v1,k2*v2,k3*v3; when the value of a tag is empty, the format is tags=k1,k2,k3*v3. See the request sketch after this table.
enterpriseProjectId | No | String | The enterprise project ID used to query clusters in a specified enterprise project. The default value is 0, indicating the default enterprise project.
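The following sketch shows how such a request might be assembled and sent from Python. It is illustrative only: the endpoint, project ID, token value, and the X-Auth-Token header are assumptions that are not defined on this page; the tags value follows the tags=k1*v1,k2*v2 format described in Table 2.

```python
# Illustrative only: endpoint, project ID, and token are placeholders, and the
# X-Auth-Token header is an assumption not documented on this page.
import requests

ENDPOINT = "https://mrs.example-region.example.com"    # hypothetical endpoint
PROJECT_ID = "0ba3c9a0f58c4e7fbd3f2e3c0d1a2b3c"         # hypothetical project ID
TOKEN = "<token>"

# Query string per Table 2: pagination plus a tag filter. Multiple tags are
# ANDed; a tag with an empty value is written without "*value" (e.g. "k2").
query = (
    "pageSize=10"
    "&currentPage=1"
    "&clusterState=existing"
    "&tags=env*prod,owner*data-team"
)

url = f"{ENDPOINT}/v1.1/{PROJECT_ID}/cluster_infos?{query}"
response = requests.get(url, headers={"X-Auth-Token": TOKEN})
response.raise_for_status()
print(response.json()["clusterTotal"])
```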
Request Parameters
None
Response Parameters
Table 3 Response parameters

Parameter | Type | Description
---|---|---
clusterTotal | String | Total number of clusters in the list.
clusters | Array of Cluster objects | Cluster parameters. For details, see Table 4.
Table 4 Cluster parameters

Parameter | Type | Description
---|---|---
clusterId | String | Cluster ID.
clusterName | String | Cluster name.
masterNodeNum | String | Number of Master nodes deployed in a cluster.
coreNodeNum | String | Number of Core nodes deployed in a cluster.
totalNodeNum | String | Total number of nodes deployed in a cluster.
clusterState | String | Cluster status. Valid values include starting, running, terminated, failed, abnormal, terminating, frozen, scaling-out, and scaling-in (see the clusterState query parameter in Table 2).
createAt | String | Cluster creation time, which is a 10-digit timestamp.
updateAt | String | Cluster update time, which is a 10-digit timestamp.
dataCenter | String | Cluster work region.
vpc | String | VPC name.
vpcId | String | VPC ID.
hadoopVersion | String | Hadoop version.
masterNodeSize | String | Instance specifications of a Master node.
coreNodeSize | String | Instance specifications of a Core node.
componentList | Array | Component list. For details, see Table 5.
externalIp | String | External IP address.
externalAlternateIp | String | Backup external IP address.
internalIp | String | Internal IP address.
deploymentId | String | Cluster deployment ID.
remark | String | Cluster remarks.
orderId | String | Cluster creation order ID.
azId | String | AZ ID.
masterNodeProductId | String | Product ID of a Master node.
masterNodeSpecId | String | Specification ID of a Master node.
coreNodeProductId | String | Product ID of a Core node.
coreNodeSpecId | String | Specification ID of a Core node.
azName | String | AZ name.
azCode | String | AZ name (en).
availabilityZoneId | String | The AZ.
instanceId | String | Instance ID.
vnc | String | URI for remotely logging in to an ECS.
tenantId | String | Project ID.
volumeSize | Integer | Disk storage space.
volumeType | String | Disk type.
subnetId | String | Subnet ID.
clusterType | String | Cluster type.
subnetName | String | Subnet name.
securityGroupsId | String | Security group ID.
slaveSecurityGroupsId | String | Security group ID of non-Master nodes. Currently, an MRS cluster uses only one security group, so this field is deprecated.
bootstrapScripts | Array of BootstrapScript objects | The bootstrap action script information. For details, see Table 6.
stageDesc | String | Cluster operation progress description, reported during cluster installation, scale-out, and scale-in. If the installation, scale-out, or scale-in fails, stageDesc shows the failure cause.
ismrsManagerFinish | Boolean | Whether MRS Manager installation is finished during cluster creation.
safeMode | Integer | Running mode of an MRS cluster.
clusterVersion | String | Cluster version.
nodePublicCertName | String | Name of the key file.
masterNodeIp | String | IP address of a Master node.
privateIpFirst | String | Preferred private IP address.
errorInfo | String | Error message.
tags | String | The tag information.
logCollection | Integer | Whether to collect logs when cluster installation fails.
taskNodeGroups | List<NodeGroup> | List of Task nodes. For details, see Table 7.
nodeGroups | List<NodeGroup> | List of Master, Core, and Task nodes. For details, see Table 7.
masterDataVolumeType | String | Data disk storage type of the Master node. Currently, SATA, SAS, and SSD are supported.
masterDataVolumeSize | Integer | Data disk storage space of the Master node. To increase data storage capacity, you can add disks when creating the cluster. Value range: 100 GB to 32,000 GB.
masterDataVolumeCount | Integer | Number of data disks of the Master node. The value can only be 1.
coreDataVolumeType | String | Data disk storage type of the Core node. Currently, SATA, SAS, and SSD are supported.
coreDataVolumeSize | Integer | Data disk storage space of the Core node. To increase data storage capacity, you can add disks when creating the cluster. Value range: 100 GB to 32,000 GB.
coreDataVolumeCount | Integer | Number of data disks of the Core node. Value range: 1 to 10.
scale | String | The node change status. If this parameter is left blank, the cluster nodes are not changed.
Table 5 componentList parameters

Parameter | Type | Description
---|---|---
componentId | String | Component ID. For example, the componentId of Hadoop is MRS 3.1.0_001.
componentName | String | Component name.
componentVersion | String | Component version.
componentDesc | String | Component description.
Table 6 BootstrapScript parameters

Parameter | Type | Description
---|---|---
name | String | The name of a bootstrap action script, which must be unique in a cluster. The value can contain 1 to 64 characters, including only digits, letters, spaces, hyphens (-), and underscores (_), and cannot start with a space.
uri | String | The path of a bootstrap action script. Set this parameter to an OBS bucket path or a local VM path.
parameters | String | The bootstrap action script parameters.
nodes | Array of strings | Type of the node where the bootstrap action script is executed. The value can be master, core, or task, in lowercase letters.
active_master | Boolean | Whether the bootstrap action script runs only on active Master nodes. The default value is false, indicating that the script can run on all Master nodes.
fail_action | String | Whether to continue executing subsequent scripts and creating the cluster after the bootstrap action script fails. The value can be continue (continue to execute subsequent scripts) or errorout (stop the action); the default value is errorout. Note: You are advised to set this parameter to continue during commissioning so that the cluster continues to be installed and started regardless of whether the bootstrap action succeeds.
before_component_start | Boolean | Time when the bootstrap action script is executed: before or after component start. The default value is false, indicating that the script is executed after the components are started.
action_stages | Array of strings | Stages at which the bootstrap action script is executed.
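To make the Table 6 field descriptions concrete, here is a purely hypothetical bootstrapScripts element sketched as a Python dictionary. The script name, path, and parameter values are invented, and action_stages is omitted because its allowed values are not listed on this page.

```python
# Hypothetical bootstrapScripts element, assembled from the Table 6 field
# descriptions. All values are invented for illustration only.
bootstrap_script = {
    "name": "install-monitoring-agent",    # unique per cluster, 1-64 characters
    "uri": "obs://example-bucket/bootstrap/install-agent.sh",  # OBS bucket or local VM path (hypothetical)
    "parameters": "--mode quiet",
    "nodes": ["master", "core"],           # node types, lowercase
    "active_master": False,                # False: may run on all Master nodes
    "fail_action": "continue",             # "errorout" (default) stops on failure
    "before_component_start": False,       # False: run after components start
}
```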
Table 7 NodeGroup parameters

Parameter | Type | Description
---|---|---
GroupName | String | Node group name.
NodeNum | Integer | Number of nodes. The value ranges from 0 to 500. The minimum number of Master and Core nodes is 1, and the total number of Core and Task nodes cannot exceed 500.
NodeSize | String | Instance specifications of a node.
NodeSpecId | String | Instance specification ID of a node.
NodeProductId | String | Instance product ID of a node.
VmProductId | String | VM product ID of a node.
VmSpecCode | String | VM specifications of a node.
RootVolumeSize | Integer | System disk size of a node. This parameter is not configurable; the default value is 40 GB.
RootVolumeProductId | String | System disk product ID of a node.
RootVolumeType | String | System disk type of a node.
RootVolumeResourceSpecCode | String | System disk product specifications of a node.
RootVolumeResourceType | String | System disk product type of a node.
DataVolumeType | String | Data disk storage type of a node. Currently, SATA, SAS, and SSD are supported.
DataVolumeCount | Integer | Number of data disks of a node.
DataVolumeSize | Integer | Data disk storage space of a node.
DataVolumeProductId | String | Data disk product ID of a node.
DataVolumeResourceSpecCode | String | Data disk product specifications of a node.
DataVolumeResourceType | String | Data disk product type of a node.
Example
- Example request
GET /v1.1/{project_id}/cluster_infos?pageSize={page_size}&currentPage={current_page}&clusterState={cluster_state}&tags={tags}
- Example response
{ "clusterTotal": 1, "clusters": [ { "clusterId": "bc134369-294c-42b7-a707-b2036ba38524", "clusterName": "mrs_D0zW", "masterNodeNum": "2", "coreNodeNum": "3", "clusterState": "terminated", "createAt": "1498272043", "updateAt": "1498636753", "chargingStartTime": "1498273733", "logCollection": 1, "billingType": "Metered", "dataCenter": , "vpc": null, "duration": "0", "fee": null, "hadoopVersion": null, "masterNodeSize": null, "coreNodeSize": null, "componentList": [{ "id": null, "componentId": "MRS 3.1.0_001", "componentName": "Hadoop", "componentVersion": "3.1.1", "external_datasources": null, "componentDesc": "A distributed data processing framework for big data sets", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_002", "componentName": "HBase", "componentVersion": "2.2.3", "external_datasources": null, "componentDesc": "HBase is a column-based distributed storage system that features high reliability, performance, and scalability", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_003", "componentName": "Hive", "componentVersion": "3.1.0", "external_datasources": null, "componentDesc": "A data warehouse software that facilitates query and management of big data sets stored in distributed storage systems" "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_004", "componentName": "Spark2x", "componentVersion": "2.4.5", "external_datasources": null, "componentDesc": "Spark2x is a fast general-purpose engine for large-scale data processing. It is developed based on the open-source Spark2.x version.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_005", "componentName": "Tez", "componentVersion": "0.9.2", "external_datasources": null, "componentDesc": "An application framework which allows for a complex directed-acyclic-graph of tasks for processing data.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_006", "componentName": "Flink", "componentVersion": "1.12.0", "external_datasources": null, "componentDesc": "Flink is an open-source message processing system that integrates streams in batches.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_008", "componentName": "Kafka", "componentVersion": "2.11-2.4.0", "external_datasources": null, "componentDesc": "Kafka is a distributed message release and subscription system.", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_009", "componentName": "Flume", "componentVersion": "1.9.0", "external_datasources": null, "componentDesc": "Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_014", "componentName": "Hue", "componentVersion": "4.7.0", "external_datasources": null, "componentDesc": "Apache Hadoop UI", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_015", "componentName": "Oozie", "componentVersion": "5.1.0", "external_datasources": null, "componentDesc": "A Hadoop job scheduling system", "componentDescEn": null }, { "id": null, "componentId": "MRS 3.1.0_022", "componentName": "Ranger", "componentVersion": "2.0.0", "external_datasources": null, "componentDesc": "Ranger is a centralized framework based on the Hadoop platform. 
It provides permission control interfaces such as monitoring, operation, and management interfaces for complex data.", "componentDescEn": null }], "externalIp": null, "externalAlternateIp": null, "internalIp": null, "deploymentId": null, "remark": "", "orderId": null, "azId": null, "masterNodeProductId": null, "masterNodeSpecId": null, "coreNodeProductId": null, "coreNodeSpecId": null, "azName": , "instanceId": null, "vnc": "v2/5a3314075bfa49b9ae360f4ecd333695/servers/e2cda891-232e-4703-995e-3b1406add01d/action", "tenantId": null, "volumeSize": 0, "volumeType": null, "subnetId": null, "subnetName": null, "securityGroupsId": null, "slaveSecurityGroupsId": null, "mrsManagerFinish": false, "stageDesc": "Installing MRS Manager", "safeMode": 0, "clusterVersion": null, "nodePublicCertName": null, "masterNodeIp": "unknown", "privateIpFirst": null, "errorInfo": "", "clusterType": 0, "nodeGroups": [ { "groupName": "master_node_default_group", "nodeNum": 1, "nodeSize": "", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", }, { "groupName": "core_node_analysis_group", "nodeNum": 1, "nodeSize": "", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", }, { "groupName": "task_node_analysis_group", "nodeNum": 1, "nodeSize": "", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", } ], "taskNodeGroups": [ { "groupName": "task_node_default_group", "nodeNum": 1, "nodeSize": "", "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", "vmProductId": "", "vmSpecCode": null, "nodeProductId": "dc970349d128460e960a0c2b826c427c", "rootVolumeSize": 480, "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "rootVolumeType": "SATA", "rootVolumeResourceSpecCode": "", "rootVolumeResourceType": "", "dataVolumeType": "SATA", "dataVolumeCount": 1, "dataVolumeSize": 600, "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", "dataVolumeResourceSpecCode": "", "dataVolumeResourceType": "", } ], "masterDataVolumeType": "SATA", "masterDataVolumeSize": 600, "masterDataVolumeCount": 1, "coreDataVolumeType": "SATA", "coreDataVolumeSize": 600, "coreDataVolumeCount": 1, } ] }
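As a usage sketch (not part of the original example), the snippet below pages through the cluster list using pageSize, currentPage, and the clusterTotal value shown in the response above. The endpoint, project ID, and X-Auth-Token header are assumptions for illustration.

```python
# Illustrative pagination sketch; endpoint, project ID, and auth header are
# placeholders, not values defined on this page.
import requests

ENDPOINT = "https://mrs.example-region.example.com"    # hypothetical endpoint
PROJECT_ID = "0ba3c9a0f58c4e7fbd3f2e3c0d1a2b3c"         # hypothetical project ID
HEADERS = {"X-Auth-Token": "<token>"}
PAGE_SIZE = 20

def list_existing_clusters():
    """Yield every cluster object, requesting one page at a time."""
    page = 1
    while True:
        url = (f"{ENDPOINT}/v1.1/{PROJECT_ID}/cluster_infos"
               f"?pageSize={PAGE_SIZE}&currentPage={page}&clusterState=existing")
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        body = resp.json()
        clusters = body.get("clusters", [])
        yield from clusters
        # Stop once all pages have been fetched or an empty page comes back.
        total = int(body.get("clusterTotal", 0))
        if not clusters or page * PAGE_SIZE >= total:
            return
        page += 1

for cluster in list_existing_clusters():
    # createAt/updateAt are 10-digit epoch timestamps returned as strings.
    print(cluster["clusterId"], cluster["clusterName"], cluster["clusterState"])
```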
Error Codes
See Error Codes.