Updated on 2024-09-23 GMT+08:00

MRS Cluster Node Specifications

MRS Node Specifications

MRS supports host specifications determined by CPU, memory, and disk space.

Tenants share physical resources of ECSs, but can exclusively use resources of BMSs. BMSs can better meet your requirements for deploying key applications and services that require high performance (such as big data clusters and enterprise middleware systems) and a secure and reliable running environment. If BMS specifications are used, Master node specifications cannot be scaled up.

MRS supports BMS specifications only when the billing mode of a cluster is Yearly/Monthly.

MRS supports the following deployment combinations of ECSs and BMSs (a minimal node-group sketch follows this list):

  • Master, Core, and Task nodes are deployed on ECSs.
  • Master and Core nodes are deployed on BMSs, and Task nodes are deployed on ECSs.
  • Master and Core nodes are each deployed on either ECSs or BMSs, and Task nodes are deployed on ECSs.
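
As a minimal sketch of how such a node layout might be described programmatically, the following Python example assembles node-group definitions for the second mode (Master and Core nodes on BMSs, Task nodes on ECSs). The build_node_groups helper, the field names, and the flavor IDs are illustrative assumptions and do not reflect the actual MRS API schema.

```python
# Illustrative sketch only: the helper name, field names, and flavor IDs are
# assumptions and do not reflect the actual MRS API schema.

def build_node_groups(master_flavor, core_flavor, task_flavor,
                      core_count, task_count):
    """Describe a hybrid layout: Master and Core on BMSs, Task on ECSs."""
    return [
        {"group_name": "master_node_group",   # hypothetical group name
         "node_num": 2,
         "node_size": master_flavor},         # BMS specification
        {"group_name": "core_node_group",     # hypothetical group name
         "node_num": core_count,
         "node_size": core_flavor},           # BMS specification
        {"group_name": "task_node_group",     # hypothetical group name
         "node_num": task_count,
         "node_size": task_flavor},           # ECS specification
    ]

# Example: 2 Masters and 3 Cores on a BMS flavor, 2 Tasks on an ECS flavor.
node_groups = build_node_groups(
    master_flavor="physical.bms.example",    # hypothetical BMS flavor ID
    core_flavor="physical.bms.example",      # hypothetical BMS flavor ID
    task_flavor="ecs.general.example",       # hypothetical ECS flavor ID
    core_count=3,
    task_count=2,
)
print(node_groups)
```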

  • More advanced instance specifications provide better data processing performance but also increase the cluster cost.
  • Instance specifications may vary between AZs. If no instance specification in the current AZ meets your requirements, switch to another AZ.
  • If you select HDDs for Core nodes, data disks are not billed separately; their fees are included in the ECS fees.
  • If you select non-HDD disks for Core nodes, the disk types of Master and Core nodes are determined by Data Disk.
  • If Sold out is displayed next to an instance specification, nodes of that specification cannot be purchased. Select a different specification instead.
  • The Master node specification of 4 vCPUs and 8 GB memory is outside the SLA after-sales scope. It is suitable only for test environments and is not recommended for production environments.
  • For MRS 3.x or later, the Master node memory must be greater than 64 GB. A minimal check of these constraints is sketched after this list.
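
The following minimal Python sketch checks a planned Master node against the constraints above: memory greater than 64 GB for MRS 3.x or later, and a warning for the test-only 4 vCPUs | 8 GB specification. The function name and the way a specification is represented are assumptions for illustration.

```python
# Minimal sketch: checks a planned Master node specification against the
# constraints documented above. The spec representation is an assumption.

def check_master_spec(mrs_version: str, vcpus: int, memory_gb: int) -> list:
    """Return a list of issues found for the planned Master node spec."""
    issues = []
    major = int(mrs_version.split(".")[0])
    # For MRS 3.x or later, Master node memory must be greater than 64 GB.
    if major >= 3 and memory_gb <= 64:
        issues.append("MRS 3.x or later requires Master node memory > 64 GB.")
    # The 4 vCPUs | 8 GB specification is test-only and outside the SLA scope.
    if vcpus == 4 and memory_gb == 8:
        issues.append("4 vCPUs | 8 GB is for test environments only (no SLA).")
    return issues

# Example: a 16 vCPUs | 32 GB Master node fails the MRS 3.x memory check.
print(check_master_spec("3.1.5", vcpus=16, memory_gb=32))
```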

Disk Roles

Table 1 Disk types of MRS cluster nodes

Disk role: System disk

Description: Storage type and space of the system disk on a node.

The storage type can be any of the following:

  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD

Disk role: Data disk

Description: Data disk storage space of a node. For more data storage, you can add disks when creating a cluster. A maximum of 10 disks can be added to each Core or Task node.

  • Data storage and computing are separated. Data is stored in OBS, which features low cost and unlimited storage capacity, and clusters can be deleted at any time because the data remains in OBS. Computing performance depends on OBS access performance and is lower than that of HDFS. This configuration is recommended when data computing is infrequent.
  • Data storage and computing are not separated. Data is stored in HDFS, which features high computing performance but higher cost and limited storage capacity. Before deleting a cluster, you must export and store the data. This configuration is recommended when data computing is frequent.

The storage type can be any of the following:

  • SAS: high I/O
  • SSD: ultra-high I/O
  • GPSSD: general-purpose SSD
NOTE:

As nodes are added to an MRS cluster, the disk capacity of the management node (Master node) must be increased accordingly. To ensure stable cluster running, set the Master node disk capacity to more than 600 GB when the cluster has 300 nodes, and increase it to more than 1 TB when the cluster reaches 500 nodes.
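
As a rough illustration of the sizing guidance in the note above, the following Python sketch maps a planned node count to a minimum Master node disk capacity. The thresholds (over 600 GB at 300 nodes, over 1 TB at 500 nodes) come from this section; treating them as step boundaries and the default value for smaller clusters are assumptions for illustration.

```python
# Rough sizing sketch: treats the documented thresholds as step boundaries,
# which is an assumption; the 480 GB default is hypothetical.

def min_master_disk_gb(node_count: int) -> int:
    """Suggest a minimum Master node disk capacity (GB) for a node count."""
    if node_count >= 500:
        return 1024   # over 1 TB once the cluster reaches 500 nodes
    if node_count >= 300:
        return 600    # over 600 GB for a cluster of around 300 nodes
    return 480        # hypothetical default for smaller clusters

# Example: scaling out to 350 nodes suggests at least 600 GB on the Master node.
print(min_master_disk_gb(350))
```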