Querying Cluster Details

Updated on 2022-12-08 GMT+08:00

Function

This API is used to query details about a specified cluster. This API is incompatible with Sahara.

URI

  • Format

    GET /v1.1/{project_id}/cluster_infos/{cluster_id}

  • Parameter description
    Table 1 URI parameter description

    | Parameter  | Mandatory | Description |
    |------------|-----------|-------------|
    | project_id | Yes       | Project ID. For details on how to obtain the project ID, see Obtaining a Project ID. |
    | cluster_id | Yes       | Cluster ID |
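The request can be issued with any HTTP client. A minimal sketch using the Python standard library is shown below; the endpoint host and token value are placeholders (not part of this document), while the project and cluster IDs are taken from the example response later in this section.

```python
# Build a GET request for /v1.1/{project_id}/cluster_infos/{cluster_id}.
# The endpoint URL and token below are placeholders for illustration only.
import urllib.request

def build_cluster_details_request(endpoint, project_id, cluster_id, token):
    """Construct the request object for querying cluster details."""
    url = f"{endpoint}/v1.1/{project_id}/cluster_infos/{cluster_id}"
    return urllib.request.Request(
        url,
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        method="GET",
    )

req = build_cluster_details_request(
    "https://mrs.example.com",           # placeholder endpoint
    "3f99e3319a8943ceb15c584f3325d064",  # project ID from the sample response
    "bdb064ff-2855-4624-90d5-e9a6376abd6e",
    "<your-token>",
)
print(req.full_url)
```

Calling `urllib.request.urlopen(req)` would then return the JSON body described in the Response section.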

Request

Request parameters

None.

Response

Table 2 Response parameter description

Parameter

Type

Description

clusterId

String

Cluster ID

clusterName

String

Cluster name

masterNodeNum

String

Number of Master nodes deployed in a cluster

coreNodeNum

String

Number of Core nodes deployed in a cluster

totalNodeNum

String

Total number of nodes deployed in a cluster

clusterState

String

Cluster status. Valid values include:
  • starting: The cluster is being started.
  • running: The cluster is running.
  • terminated: The cluster has been terminated.
  • failed: The cluster failed.
  • abnormal: The cluster is abnormal.
  • terminating: The cluster is being terminated.
  • frozen: The cluster has been frozen.
  • scaling-out: The cluster is being scaled out.
  • scaling-in: The cluster is being scaled in.
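Callers typically poll this API until the cluster leaves a transitional state. A hedged sketch of a helper that classifies the clusterState values listed above (the grouping into transitional and settled states is an assumption for illustration, not defined by the API):

```python
# Classify clusterState values from the table above, e.g. to decide
# whether a status-polling loop should keep waiting.
TRANSIENT_STATES = {"starting", "terminating", "scaling-out", "scaling-in"}
SETTLED_STATES = {"running", "terminated", "failed", "abnormal", "frozen"}

def is_settled(cluster_state: str) -> bool:
    """Return True once the cluster is no longer in a transitional state."""
    if cluster_state in TRANSIENT_STATES:
        return False
    if cluster_state in SETTLED_STATES:
        return True
    raise ValueError(f"unknown clusterState: {cluster_state}")
```

A poller would sleep and re-query while `is_settled(...)` returns False.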

createAt

String

Cluster creation time, which is a 10-digit timestamp

updateAt

String

Cluster update time, which is a 10-digit timestamp

billingType

String

Cluster billing mode

dataCenter

String

Cluster work region

vpc

String

VPC name

vpcId

String

VPC ID

fee

String

Cluster creation fee, which is automatically calculated

hadoopVersion

String

Hadoop version

masterNodeSize

String

Instance specifications of a Master node

coreNodeSize

String

Instance specifications of a Core node

componentList

Array

Component list. For details, see Table 3.

externalIp

String

External IP address

externalAlternateIp

String

Backup external IP address

internalIp

String

Internal IP address

deploymentId

String

Cluster deployment ID

remark

String

Cluster remarks

orderId

String

Cluster creation order ID

azId

String

AZ ID

masterNodeProductId

String

Product ID of a Master node

masterNodeSpecId

String

Specification ID of a Master node

coreNodeProductId

String

Product ID of a Core node

coreNodeSpecId

String

Specification ID of a Core node

azName

String

AZ name

instanceId

String

Instance ID

vnc

String

URI for remotely logging in to an ECS

tenantId

String

Project ID

volumeSize

Integer

Disk storage space

subnetId

String

Subnet ID

subnetName

String

Subnet name

securityGroupsId

String

Security group ID

slaveSecurityGroupsId

String

Security group ID of a non-Master node. Currently, an MRS cluster uses only one security group, so this field is deprecated. For compatibility, it returns the same value as securityGroupsId.

bootstrap_scripts

Array

Bootstrap action script information. For more parameter description, see Table 5.

stageDesc

String

Cluster operation progress description.

The cluster installation progress includes:
  • Verifying cluster parameters: Cluster parameters are being verified.
  • Applying for cluster resources: Cluster resources are being applied for.
  • Creating VMs: The VMs are being created.
  • Initializing VMs: The VMs are being initialized.
  • Installing MRS Manager: MRS Manager is being installed.
  • Deploying the cluster: The cluster is being deployed.
  • Cluster installation failed: Failed to install the cluster.
The cluster scale-out progress includes:
  • Preparing for scale-out: Cluster scale-out is being prepared.
  • Creating VMs: The VMs are being created.
  • Initializing VMs: The VMs are being initialized.
  • Adding nodes to the cluster: The nodes are being added to the cluster.
  • Scale-out failed: Failed to scale out the cluster.
The cluster scale-in progress includes:
  • Preparing for scale-in: Cluster scale-in is being prepared.
  • Decommissioning instance: The instance is being decommissioned.
  • Deleting VMs: The VMs are being deleted.
  • Deleting nodes from the cluster: The nodes are being deleted from the cluster.
  • Scale-in failed: Failed to scale in the cluster.

If the cluster installation, scale-out, or scale-in fails, stageDesc will display the failure cause. For details, see Table 8.

isMrsManagerFinish

Boolean

Whether MRS Manager installation is finished during cluster creation.

  • true: MRS Manager installation is finished.
  • false: MRS Manager installation is not finished.

safeMode

Integer

Running mode of an MRS cluster

  • 0: Normal cluster
  • 1: Security cluster

clusterVersion

String

Cluster version

nodePublicCertName

String

Name of the public key file

masterNodeIp

String

IP address of a Master node

privateIpFirst

String

Preferred private IP address

errorInfo

String

Error message

tags

String

Tag information
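As the example response shows, tags is returned as a single string such as "k1=v1,k2=v2,k3=v3". A small parser, assuming no "=" or "," appears inside keys or values:

```python
# Parse the comma-separated "key=value" tag string into a dict.
# Assumes keys and values themselves contain no ',' separator.
def parse_tags(tags: str) -> dict:
    """Split a tags string like 'k1=v1,k2=v2' into {'k1': 'v1', 'k2': 'v2'}."""
    if not tags:
        return {}
    return dict(item.split("=", 1) for item in tags.split(","))
```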

chargingStartTime

String

Start time of billing

clusterType

Integer

Cluster type

logCollection

Integer

Whether to collect logs when cluster installation fails

  • 0: Do not collect.
  • 1: Collect.

taskNodeGroups

List<NodeGroup>

List of Task nodes. For more parameter description, see Table 4.

nodeGroups

List<NodeGroup>

List of Master, Core, and Task nodes. For more parameter description, see Table 4.

masterDataVolumeType

String

Data disk storage type of the Master node. Currently, SATA, SAS, and SSD are supported.

masterDataVolumeSize

Integer

Data disk storage space of the Master node. To increase data storage capacity, you can add disks at the same time when creating a cluster.

Value range: 100 GB to 32,000 GB

masterDataVolumeCount

Integer

Number of data disks of the Master node.

The value can be set to 1 only.

coreDataVolumeType

String

Data disk storage type of the Core node. Currently, SATA, SAS, and SSD are supported.

coreDataVolumeSize

Integer

Data disk storage space of the Core node. To increase data storage capacity, you can add disks at the same time when creating a cluster.

Value range: 100 GB to 32,000 GB

coreDataVolumeCount

Integer

Number of data disks of the Core node.

Value range: 1 to 10

scale

String

Node change status. If this parameter is left blank, the cluster nodes are not changed.

Possible values are as follows:

  • scaling-out: The cluster is being scaled out.
  • scaling-in: The cluster is being scaled in.
  • scaling-error: The cluster is in the running state, but the last scale-in, scale-out, or specification upgrade failed.
  • scaling-up: The Master node specifications are being scaled up.
  • scaling_up_first: The standby Master node specifications are being scaled up.
  • scaled_up_first: The standby Master node specifications have been scaled up successfully.
  • scaled-up-success: The Master node specifications have been scaled up successfully.
Table 3 componentList parameter description

Parameter

Type

Description

componentId

String

Component ID

For example, the componentId of Hadoop is MRS 3.1.0_001 in MRS 3.1.0, or MRS 2.1.1_001 in MRS 2.1.1.

componentName

String

Component name

componentVersion

String

Component version

componentDesc

String

Component description

Table 4 NodeGroup parameter description

Parameter

Type

Description

groupName

String

Node group name.

nodeNum

Integer

Number of nodes. The value ranges from 0 to 500. The minimum number of Master and Core nodes is 1 and the total number of Core and Task nodes cannot exceed 500.

nodeSize

String

Instance specifications of a node.

nodeSpecId

String

Instance specification ID of a node

nodeProductId

String

Instance product ID of a node

vmProductId

String

VM product ID of a node

vmSpecCode

String

VM specifications of a node

rootVolumeSize

Integer

System disk size of a node. This parameter is not configurable and its default value is 40 GB.

rootVolumeProductId

String

System disk product ID of a node

rootVolumeType

String

System disk type of a node

rootVolumeResourceSpecCode

String

System disk product specifications of a node

rootVolumeResourceType

String

System disk product type of a node

dataVolumeType

String

Data disk storage type of a node. Currently, SATA, SAS, and SSD are supported.

  • SATA: Common I/O
  • SAS: High I/O
  • SSD: Ultra-high I/O

dataVolumeCount

Integer

Number of data disks of a node.

dataVolumeSize

Integer

Data disk storage space of a node.

dataVolumeProductId

String

Data disk product ID of a node

dataVolumeResourceSpecCode

String

Data disk product specifications of a node

dataVolumeResourceType

String

Data disk product type of a node

Table 5 bootstrap_scripts parameter description

Parameter

Type

Description

name

String

Name of a bootstrap action script. It must be unique in a cluster.

The value can contain only digits, letters, spaces, hyphens (-), and underscores (_) and cannot start with a space.

The value can contain 1 to 64 characters.

uri

String

Path of the shell script. Set this parameter to an OBS bucket path or a local VM path.

  • OBS bucket path: Enter a script path manually, for example, the path of the public sample script provided by MRS, such as s3a://bootstrap/presto/presto-install.sh. To install dualroles, set the presto-install.sh script parameter to dualroles; to install worker, set it to worker. In line with common Presto practice, you are advised to install dualroles on the active Master nodes and worker on the Core nodes.
  • Local VM path: Enter a script path. The script path must start with a slash (/) and end with .sh.

parameters

String

Bootstrap action script parameters

nodes

Array of String

Type of a node where the bootstrap action script is executed. The value can be Master, Core, or Task.

active_master

Boolean

Whether the bootstrap action script runs only on active Master nodes.

The default value is false, indicating that the bootstrap action script can run on all Master nodes.

before_component_start

Boolean

Time when the bootstrap action script is executed. Currently, the following two options are available: Before component start and After component start

The default value is false, indicating that the bootstrap action script is executed after the component is started.

fail_action

String

Whether to continue executing subsequent scripts and creating a cluster after the bootstrap action script fails to be executed.

  • continue: Continue to execute subsequent scripts.
  • errorout: Stop the action.
The default value is errorout, indicating that the action is stopped.
NOTE:

You are advised to set this parameter to continue in the commissioning phase so that the cluster can continue to be installed and started no matter whether the bootstrap action is successful.

start_time

Long

Execution time of one bootstrap action script.

state

String

Running state of one bootstrap action script

  • PENDING
  • IN_PROGRESS
  • SUCCESS
  • FAILURE

Example

  • Example request

    None.

  • Example response
    {
        "cluster":{
            "clusterId":"bdb064ff-2855-4624-90d5-e9a6376abd6e",
            "clusterName":"c17022001",
            "masterNodeNum":"2",
            "coreNodeNum":"3",
            "clusterState":"scaling-in",
            "createAt":"1487570757",
            "updateAt":"1487668974",
            "billingType":"Metered",
            "dataCenter":"my-kualalumpur-1",
            "vpc": "vpc-autotest",        
            "vpcId": "e2978efd-ca12-4058-9332-1ca0bfbab592",        
            "duration":"0",
            "fee":"0",
            "hadoopVersion":"",
            "masterNodeSize":"s3.2xlarge.2.linux.bigdata",
            "coreNodeSize":"s3.2xlarge.2.linux.bigdata",
              "componentList": [{
    			"id": null,
    			"componentId": "MRS 3.1.0_001",
    			"componentName": "Hadoop",
    			"componentVersion": "3.1.1",
    			"external_datasources": null,
    			"componentDesc": "A distributed data processing framework for big data sets",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_002",
    			"componentName": "HBase",
    			"componentVersion": "2.2.3",
    			"external_datasources": null,
    			"componentDesc": "HBase is a column-based distributed storage system that features high reliability, performance, and scalability",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_003",
    			"componentName": "Hive",
    			"componentVersion": "3.1.0",
    			"external_datasources": null,
    			"componentDesc": "A data warehouse software that facilitates query and management of big data sets stored in distributed storage systems",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_004",
    			"componentName": "Spark2x",
    			"componentVersion": "2.4.5",
    			"external_datasources": null,
    			"componentDesc": "Spark2x is a fast general-purpose engine for large-scale data processing. It is developed based on the open-source Spark2.x version.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_005",
    			"componentName": "Tez",
    			"componentVersion": "0.9.2",
    			"external_datasources": null,
    			"componentDesc": "An application framework which allows for a complex directed-acyclic-graph of tasks for processing data.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_006",
    			"componentName": "Flink",
    			"componentVersion": "1.12.0",
    			"external_datasources": null,
    			"componentDesc": "Flink is an open-source message processing system that integrates streams in batches.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_008",
    			"componentName": "Kafka",
    			"componentVersion": "2.11-2.4.0",
    			"external_datasources": null,
    			"componentDesc": "Kafka is a distributed message release and subscription system.",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_009",
    			"componentName": "Flume",
    			"componentVersion": "1.9.0",
    			"external_datasources": null,
    			"componentDesc": "Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_013",
    			"componentName": "Loader",
    			"componentVersion": "1.99.3",
    			"external_datasources": null,
    			"componentDesc": "Loader is a tool designed for efficiently transmitting a large amount of data between Apache Hadoop and structured databases (such as relational databases).",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_014",
    			"componentName": "Hue",
    			"componentVersion": "4.7.0",
    			"external_datasources": null,
    			"componentDesc": "Apache Hadoop UI",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_015",
    			"componentName": "Oozie",
    			"componentVersion": "5.1.0",
    			"external_datasources": null,
    			"componentDesc": "A Hadoop job scheduling system",
    			"componentDescEn": null
    		},
    		{
    			"id": null,
    			"componentId": "MRS 3.1.0_022",
    			"componentName": "Ranger",
    			"componentVersion": "2.0.0",
    			"external_datasources": null,
    			"componentDesc": "Ranger is a centralized framework based on the Hadoop platform. It provides permission control interfaces such as monitoring, operation, and management interfaces for complex data.",
    			"componentDescEn": null
    		}],
            "externalIp":"100.XXX.XXX.XXX",
            "externalAlternateIp":"100.XXX.XXX.XXX",
            "internalIp":"192.XXX.XXX.XXX",
            "deploymentId":"4ac46ca7-a488-4b91-82c2-e4d7aa9c40c2",
            "remark":"",
            "orderId":"null",
            "azId":"null",
            "masterNodeProductId":"b35cf2d2348a445ca74b32289a160882",
            "masterNodeSpecId":"8ab05e503b4c42abb304e2489560063b",
            "coreNodeProductId":"dc970349d128460e960a0c2b826c427c",
            "coreNodeSpecId":"cdc6035a249a40249312f5ef72a23cd7",
            "azName":"my-kualalumpur-1a",
            "instanceId":"4ac46ca7-a488-4b91-82c2-e4d7aa9c40c2",
            "vnc":null,
            "tenantId":"3f99e3319a8943ceb15c584f3325d064",
            "volumeSize":600,
            "volumeType":"SATA",
            "subnetId": "6b96eec3-4f8d-4c83-93e2-6ec625001d7c",
            "subnetName":"subnet-ftest",
            "securityGroupsId":"930e34e2-195d-401f-af07-0b64ea6603f8",
            "slaveSecurityGroupsId":"2ef3343e-3477-4a0d-80fe-4d874e4f81b8",
            "stageDesc": "Installing MRS Manager",
            "mrsManagerFinish": false, 
            "safeMode":1,
            "clusterVersion":"MRS 3.1.0",
            "nodePublicCertName":"myp",
            "masterNodeIp":"192.XXX.XXX.XXX",
            "privateIpFirst":"192.XXX.XXX.XXX",
            "errorInfo":null,
            "tags":"k1=v1,k2=v2,k3=v3",
            "clusterType": 0,
            "logCollection": 1,
             "nodeGroups": [ 
                { 
                     "groupName": "master_node_default_group", 
                     "nodeNum": 1, 
                     "nodeSize": "s3.xlarge.2.linux.bigdata", 
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", 
                     "vmProductId": "", 
                     "vmSpecCode": null, 
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": ""
                   },
                   { 
                     "groupName": "core_node_analysis_group", 
                     "nodeNum": 1, 
                     "nodeSize": "s3.xlarge.2.linux.bigdata", 
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", 
                     "vmProductId": "", 
                     "vmSpecCode": null, 
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": ""
                   },
                   { 
                     "groupName": "task_node_analysis_group", 
                     "nodeNum": 1, 
                     "nodeSize": "s3.xlarge.2.linux.bigdata", 
                     "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7", 
                     "vmProductId": "", 
                     "vmSpecCode": null, 
                     "nodeProductId": "dc970349d128460e960a0c2b826c427c", 
                     "rootVolumeSize": 480, 
                     "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "rootVolumeType": "SATA", 
                     "rootVolumeResourceSpecCode": "", 
                     "rootVolumeResourceType": "", 
                     "dataVolumeType": "SATA", 
                     "dataVolumeCount": 1, 
                     "dataVolumeSize": 600, 
                     "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572", 
                     "dataVolumeResourceSpecCode": "", 
                     "dataVolumeResourceType": ""
                   } 
    
                ],
            "taskNodeGroups": [
                {
                   "groupName": "task_node_default_group",
                   "nodeNum": 1,
                   "nodeSize": "s3.xlarge.2.linux.bigdata",
                   "nodeSpecId": "cdc6035a249a40249312f5ef72a23cd7",
                   "vmProductId": "",
                   "vmSpecCode": null,
                   "nodeProductId": "dc970349d128460e960a0c2b826c427c",
                   "rootVolumeSize": 480,
                   "rootVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                   "rootVolumeType": "SATA",
                   "rootVolumeResourceSpecCode": "",
                   "rootVolumeResourceType": "",
                   "dataVolumeType": "SATA",
                   "dataVolumeCount": 1,
                   "dataVolumeSize": 600,
                   "dataVolumeProductId": "16c1dcf0897249758b1ec276d06e0572",
                   "dataVolumeResourceSpecCode": "",
                   "dataVolumeResourceType": "",
                   "AutoScalingPolicy": null
                   }
                ],
             "masterDataVolumeType": "SATA",
             "masterDataVolumeSize": 600,
             "masterDataVolumeCount": 1,
             "coreDataVolumeType": "SATA",
             "coreDataVolumeSize": 600,
             "coreDataVolumeCount": 1
          }
      }
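A short sketch of reading fields out of a response body like the one above, using a trimmed copy of the sample payload:

```python
# Parse a (trimmed) sample of the example response and extract the
# cluster state and the installed component versions.
import json

body = json.loads("""
{
  "cluster": {
    "clusterId": "bdb064ff-2855-4624-90d5-e9a6376abd6e",
    "clusterName": "c17022001",
    "clusterState": "scaling-in",
    "componentList": [
      {"componentName": "Hadoop", "componentVersion": "3.1.1"},
      {"componentName": "HBase", "componentVersion": "2.2.3"}
    ]
  }
}
""")

cluster = body["cluster"]
components = {c["componentName"]: c["componentVersion"]
              for c in cluster["componentList"]}
print(cluster["clusterState"])   # scaling-in
print(components["Hadoop"])      # 3.1.1
```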

Status Code

Table 6 describes the status code of this API.

Table 6 Status code

| Status Code | Description |
|-------------|-------------|
| 200         | Cluster details have been queried successfully. |

For the description about error status codes, see Status Codes.
