Disk-intensive ECSs

Updated on 2025-01-27 GMT+08:00

Overview

Disk-intensive ECSs are delivered with local disks for high storage bandwidth and IOPS. Disk-intensive ECSs have the following features:

  • They use local disks to provide high sequential read/write performance and low latency, improving file read/write performance.
  • They provide powerful and stable computing capabilities, ensuring efficient data processing.
  • They provide high intranet performance, including high intranet bandwidth and packets per second (PPS), meeting requirements for data exchange between ECSs during peak hours.

D6 ECSs, with a vCPU/memory ratio of 1:4, use 2nd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance. Equipped with proprietary 25GE high-speed intelligent NICs and local SATA disks, D6 ECSs offer ultra-high network bandwidth, PPS, and local storage. The capacity of a single SATA disk is up to 7.4 TB, and an ECS can have up to 36 such disks attached.

D3 ECSs use Intel® Xeon® Scalable processors to offer powerful and stable computing performance. Equipped with proprietary 25GE high-speed intelligent NICs and local SAS disks, D3 ECSs offer ultra-high network bandwidth, PPS, and local storage.

D2 ECSs are KVM-based. They use local storage for high storage performance and intranet bandwidth.

Application Scenario

  • Applications: Massively parallel processing (MPP) database, MapReduce and Hadoop distributed computing, and big data computing
  • Features: Suitable for applications that process large volumes of data and require high I/O performance and rapid data switching and processing.
  • Application scenarios: Distributed file systems, network file systems, and logs and data processing applications

Specifications

Table 1 D6 ECS specifications

Flavor          | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks (TB) | Virtualization
d6.xlarge.4     | 4     | 16           | 5/2                             | 60                | 2               | 3         | 2 × 7.4          | KVM
d6.2xlarge.4    | 8     | 32           | 10/4                            | 120               | 4               | 4         | 4 × 7.4          | KVM
d6.4xlarge.4    | 16    | 64           | 20/7.5                          | 240               | 8               | 8         | 8 × 7.4          | KVM
d6.6xlarge.4    | 24    | 96           | 25/11                           | 350               | 8               | 8         | 12 × 7.4         | KVM
d6.8xlarge.4    | 32    | 128          | 30/15                           | 450               | 16              | 8         | 16 × 7.4         | KVM
d6.12xlarge.4   | 48    | 192          | 40/22                           | 650               | 16              | 8         | 24 × 7.4         | KVM
d6.16xlarge.4   | 64    | 256          | 42/30                           | 850               | 32              | 8         | 32 × 7.4         | KVM
d6.18xlarge.4   | 72    | 288          | 44/34                           | 900               | 32              | 8         | 36 × 7.4         | KVM

Table 2 D3 ECS specifications

Flavor          | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks (GiB) | Virtualization
d3.xlarge.8     | 4     | 32           | 2.5/2.5                         | 50                | 2               | 3         | 2 × 1,675         | KVM
d3.2xlarge.8    | 8     | 64           | 5/5                             | 100               | 2               | 4         | 4 × 1,675         | KVM
d3.4xlarge.8    | 16    | 128          | 10/10                           | 120               | 4               | 8         | 8 × 1,675         | KVM
d3.6xlarge.8    | 24    | 192          | 15/15                           | 160               | 6               | 8         | 12 × 1,675        | KVM
d3.8xlarge.8    | 32    | 256          | 20/20                           | 200               | 8               | 8         | 16 × 1,675        | KVM
d3.12xlarge.8   | 48    | 384          | 32/32                           | 220               | 16              | 8         | 24 × 1,675        | KVM
d3.14xlarge.10  | 56    | 560          | 40/40                           | 500               | 16              | 8         | 28 × 1,675        | KVM

Table 3 D2 ECS specifications

Flavor          | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Local Disks (GiB) | Max. NIC Queues | Virtualization
d2.xlarge.8     | 4     | 32           | 1/1                             | 15                | 2 × 1,675         | 2               | KVM
d2.2xlarge.8    | 8     | 64           | 2/2                             | 30                | 4 × 1,675         | 2               | KVM
d2.4xlarge.8    | 16    | 128          | 4/4                             | 40                | 8 × 1,675         | 4               | KVM
d2.6xlarge.8    | 24    | 192          | 6/6                             | 50                | 12 × 1,675        | 6               | KVM
d2.8xlarge.8    | 32    | 256          | 8/8                             | 60                | 16 × 1,675        | 8               | KVM
d2.12xlarge.8   | 48    | 384          | 12/12                           | 90                | 24 × 1,675        | 8               | KVM

Performance of a Single SATA HDD Disk Attached to a D6 ECS

Table 4 Performance of a single SATA HDD disk attached to a D6 ECS

Metric             | Performance
Disk capacity      | 3,600 GiB
Maximum throughput | 198 MB/s
Access latency     | Within milliseconds

Specifications of a Single SAS HDD Disk Attached to a D3 ECS

Table 5 Specifications of a single SAS HDD disk attached to a D3 ECS

Metric             | Performance
Disk capacity      | 1,675 GiB
Maximum throughput | 247 MB/s
Access latency     | Within milliseconds

Specifications of a Single SAS HDD Disk Attached to a D2 ECS

Table 6 Specifications of a single SAS HDD disk attached to a D2 ECS

Metric             | Performance
Disk capacity      | 1,675 GiB
Maximum throughput | 230 MB/s
Access latency     | Within milliseconds

Notes on Using D6 ECSs

  • Currently, the following operating systems are supported (subject to the information displayed on the console):
    • CentOS 6.8/6.9/7.2/7.3/7.4/7.5/7.6/7.7/7.8/7.9 64bit
    • SUSE Enterprise Linux Server 11 SP3/SP4 64bit
    • SUSE Enterprise Linux Server 12 SP1/SP2/SP3/SP4 64bit
    • Red Hat Enterprise Linux 6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
    • Windows Server 2008 R2 Enterprise 64bit
    • Windows Server 2012 R2 Standard 64bit
    • Windows Server 2016 Standard 64bit
    • Debian 8.1.0/8.2.0/8.4.0/8.5.0/8.6.0/8.7.0/8.8.0/9.0.0 64bit
    • EulerOS 2.2/2.3/2.5/2.9 64bit
    • Fedora 22/23/24/25/26/27/28 64bit
    • OpenSUSE 13.2/15.0/15.1/42.2/42.3 64bit
    • Ubuntu 20.04 64bit
  • If the host where a D6 ECS is deployed is faulty, the ECS cannot be restored through live migration.
    • If the host is faulty or subhealthy and needs to be repaired, you need to stop the ECS.
    • In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
  • D6 ECSs do not support specifications modification.
  • D6 ECSs do not support local disk snapshots or backups.
  • D6 ECSs can use both local disks and EVS disks to store data. Note the following when using the two types of storage media:
    • Only an EVS disk, not a local disk, can be used as the system disk of a D6 ECS.
    • Both EVS disks and local disks can be used as data disks of a D6 ECS.
    • A maximum of 60 disks (including VBD, SCSI, and local disks) can be attached to a D6 ECS. Among the 60 disks, a maximum of 30 can be SCSI disks and a maximum of 24 can be VBD disks (including the system disk). For details, see Can I Attach Multiple Disks to an ECS?
      NOTE:

      The maximum number of disks attached to an existing D6 ECS remains unchanged.

  • You can modify the fstab file to set automatic disk mounting at ECS startup (see the example after this list).
  • The local disk data of a D6 ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your application does not provide its own data reliability mechanism, you are advised to use EVS disks to build your ECS.
  • When a D6 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D6 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
  • Do not store service data in local disks for a long time. Instead, store it in EVS disks. To improve data security, use a high availability architecture and back up data in a timely manner.
  • Local disks can only be purchased during D6 ECS creation. They cannot be separately purchased after the ECS has been created. The quantity and capacity of your local disks are determined according to the specifications of your ECS.
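
The following is a minimal sketch of preparing a local data disk and configuring automatic mounting through fstab. The mount point /mnt/localdisk01, the ext4 file system, and the nofail option are example choices, and the WWN is the example value used later in this topic; replace them with your own values.

  # List the local disks and their WWNs.
  ll /dev/disk/by-id | grep wwn-
  # Create a file system on the local disk (this erases any existing data on the disk).
  mkfs.ext4 /dev/disk/by-id/wwn-0x50014ee2b14249f6
  # Mount the disk by WWN rather than by drive letter.
  mkdir -p /mnt/localdisk01
  mount /dev/disk/by-id/wwn-0x50014ee2b14249f6 /mnt/localdisk01
  # Add an fstab entry so that the disk is mounted automatically at ECS startup.
  # The nofail option keeps the ECS bootable even if the local disk is missing or replaced.
  echo "/dev/disk/by-id/wwn-0x50014ee2b14249f6 /mnt/localdisk01 ext4 defaults,nofail 0 2" >> /etc/fstab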

Notes on Using D3 ECSs

  • Currently, the following operating systems are supported (subject to the information displayed on the console):
    • CentOS 6.8/6.9/7.2/7.3/7.4/7.5/7.6/7.7/7.8/7.9 64bit
    • Red Hat Enterprise Linux 6.9/7.4/7.5/7.7/7.8/8.2 64bit
    • Windows Server 2008 R2 Enterprise 64bit
    • Windows Server 2012 R2 Standard 64bit
    • Windows Server 2016 Standard 64bit
    • Windows 10 Enterprise 64bit
    • Windows Server 1709 64bit
    • Windows Server 1909/2004 SAC 64bit
    • Windows Server 2012 R2 Datacenter 64bit
    • Windows Server 2016/2019 Standard 64bit
    • SUSE Enterprise Linux Server 12/15 64bit
    • Debian 8.6/8.10/9.0/10 64bit
    • EulerOS 2.5 64bit
    • Fedora 27/28 64bit
    • CoreOS 1298.6 64bit
    • Oracle Linux 6.9/7.3/7.4/7.5 64bit
    • Ubuntu 14.04/16.04/18.04/20.04 64bit
  • If the host where a D3 ECS resides becomes faulty, the ECS cannot be restored through live migration.
    • If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
    • In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
  • D3 ECSs do not support specifications modification.
  • D3 ECSs do not support local disk snapshots or backups.
  • D3 ECSs can use both local disks and EVS disks to store data. In addition, they can have EVS disks attached to provide a larger storage size. Note the following when using the two types of storage media:
    • Only an EVS disk, not a local disk, can be used as the system disk of a D3 ECS.
    • Both EVS disks and local disks can be used as data disks of a D3 ECS.
    • A maximum of 60 disks (including VBD, SCSI, and local disks) can be attached to a D3 ECS. Among the 60 disks, a maximum of 30 can be SCSI disks and a maximum of 24 can be VBD disks (including the system disk). For details, see Can I Attach Multiple Disks to an ECS?
      NOTE:

      The maximum number of disks attached to an existing D3 ECS remains unchanged.

  • You can modify the fstab file to set automatic disk mounting at ECS start.
  • The local disk data of a D3 ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your application does not provide its own data reliability mechanism, you are advised to use EVS disks to build your ECS.
  • When a D3 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D3 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
  • Do not store service data in local disks for a long time. Instead, store it in EVS disks. To improve data security, use a high availability architecture and back up data in a timely manner.
  • Local disks can only be purchased during D3 ECS creation. The quantity and capacity of your local disks are determined according to the specifications of your ECS.

Notes on Using D2 ECSs

  • Currently, the following operating systems are supported (subject to the information displayed on the console):
    • CentOS 6.8/6.9/7.2/7.3/7.4/7.5/7.6/7.7/7.8/7.9 64bit
    • Red Hat Enterprise Linux 6.9/7.4/7.5/7.7/7.8/8.2 64bit
    • Windows 10 Enterprise 64bit
    • Windows Server 1709 64bit
    • Windows Server 1909/2004 SAC 64bit
    • Windows Server 2012 R2 Datacenter 64bit
    • Windows Server 2012 R2 Standard 64bit
    • Windows Server 2016/2019 Standard 64bit
    • SUSE Enterprise Linux Server 12/15 64bit
    • Debian 8.6/8.10/9.0/10 64bit
    • Fedora 27/28 64bit
    • CoreOS 1298.6 64bit
    • Oracle Linux 6.9/7.3/7.4/7.5 64bit
    • Ubuntu 14.04/16.04/18.04/20.04 64bit
  • If the host where a D2 ECS is deployed becomes faulty, the ECS cannot be migrated.
  • To improve network performance, you can set the NIC MTU of a D2 ECS to 8888 (see the example after this list).
  • D2 ECSs do not support specifications modification.
  • D2 ECSs do not support local disk snapshots or backups.
  • D2 ECSs can use both local disks and EVS disks to store data. In addition, they can have EVS disks attached to provide a larger storage size. Note the following when using the two types of storage media:
    • Only an EVS disk, not a local disk, can be used as the system disk of a D2 ECS.
    • Both EVS disks and local disks can be used as data disks of a D2 ECS.
    • A maximum of 60 disks (including VBD, SCSI, and local disks) can be attached to a D2 ECS. Among the 60 disks, a maximum of 30 can be SCSI disks and a maximum of 24 can be VBD disks (including the system disk). For details, see Can I Attach Multiple Disks to an ECS?
      NOTE:

      The maximum number of disks attached to an existing D2 ECS remains unchanged.

    • You are advised to use World Wide Names (WWNs) instead of drive letters in applications when performing operations on local disks. This prevents drive letter drift, which occurs on Linux with low probability. Take local disk attachment as an example:

      If the local disk WWN is wwn-0x50014ee2b14249f6, run the mount /dev/disk/by-id/wwn-0x50014ee2b14249f6 <mount point> command.

      NOTE:

      How can I view the local disk WWN?

      1. Log in to the ECS.
      2. Run the following command:

        ll /dev/disk/by-id

  • The local disk data of a D2 ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your application does not provide its own data reliability mechanism, you are advised to use EVS disks to build your ECS.
  • When a D2 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D2 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
  • Do not store long-term service data in local disks. Instead, back up data in a timely manner and use a high availability data architecture. Use EVS disks to store service data that needs to be stored for a long time.
  • Local disks can only be purchased during D2 ECS creation. The quantity and capacity of your local disks are determined according to the specifications of your ECS.
  • The basic resources (vCPUs, memory, and image) of a stopped D2 ECS will continue to be billed. To stop the ECS from being billed, delete it and its associated resources. For details, see Will I Be Billed After ECSs Are Stopped?
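
The following is a minimal sketch of setting the NIC MTU to 8888 on a Linux D2 ECS. The interface name eth0 is an example, and the ip command changes the MTU only for the current session; add the setting to your distribution's network configuration if it must persist across restarts.

  # Check the current MTU of the NIC (eth0 is an example interface name).
  ip link show eth0
  # Set the MTU to 8888 to improve network performance.
  ip link set dev eth0 mtu 8888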

Handling Damaged Local Disks Attached to a D-Series ECS

If a local disk attached to an ECS is damaged, perform the following operations to handle this issue:

For a Linux ECS:
  1. Detach the faulty local disk.
    1. Run the following command to query the mount point of the faulty disk:

      df -Th

      Figure 1 Querying the mount point
    2. Run the following command to detach the faulty local disk:

      umount <Mount point>

      In the example shown in Figure 1, the mount point of /dev/sda1 is /mnt/sda1. Run the following command:

      umount /mnt/sda1

  2. Check whether the mount point of the faulty disk is configured in /etc/fstab of the ECS. If yes, comment out the mount point to prevent the ECS from entering the maintenance mode upon ECS startup after the faulty disk is replaced.
    1. Run the following command to obtain the partition UUID:

      blkid <Disk partition>

      In this example, run the following command to obtain the UUID of the /dev/sda1 partition:

      blkid /dev/sda1

      Information similar to the following is displayed:

      /dev/sda1: UUID="b9a07b7b-9322-4e05-ab9b-14b8050cd8cc" TYPE="ext4"
    2. Run the following command to check whether /etc/fstab contains the automatic mounting information about the disk partition:

      cat /etc/fstab

      Information similar to the following is displayed:

      UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc    /mnt   ext4    defaults        0 0
    3. If the mounting information exists, perform the following steps to delete or comment it out (a non-interactive alternative using sed is shown after this procedure).
      1. Run the following command to edit /etc/fstab:

        vi /etc/fstab

        Use the UUID obtained in 2.a to check whether the mounting information of the local disk is contained in /etc/fstab. If yes, comment out the information. This prevents the ECS from entering the maintenance mode upon ECS startup after the local disk is replaced.

      2. Press i to enter editing mode.
      3. Delete or comment out the automatic mounting information of the disk partition.

        For example, add a pound sign (#) at the beginning of the following command line to comment out the automatic mounting information:

        # UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc    /mnt   ext4    defaults        0 0
      4. Press Esc to exit editing mode. Enter :wq and press Enter to save the settings and exit.
  3. Run the following command to obtain the WWN of the local disk:

    For example, if the sdc disk is faulty, obtain the WWN of the sdc disk.

    ll /dev/disk/by-id/ | grep wwn-

    Figure 2 Querying the WWN of the faulty local disk
  4. Stop the ECS and provide the WWN of the faulty disk to technical support personnel to replace the local disk.

    After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.
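
As a non-interactive alternative to editing /etc/fstab with vi in step 2, the entry can be commented out with sed. The UUID below is the example value obtained earlier; replace it with the UUID returned by blkid.

  # Comment out the fstab entry that references the faulty disk (a backup is written to /etc/fstab.bak).
  sed -i.bak '/b9a07b7b-9322-4e05-ab9b-14b8050cd8cc/ s/^/#/' /etc/fstab
  # Confirm that the entry is now commented out.
  cat /etc/fstab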

For a Windows ECS:

  1. Open Computer Management, choose Computer Management (Local) > Storage > Disk Management, and view the disk ID, for example, Disk 1.
  2. Open Windows PowerShell as an administrator and obtain the serial number of the faulty disk according to the mapping between the disk ID and serial number.

    Get-Disk | select Number, SerialNumber

    Figure 3 Querying the mapping between the disk ID and serial number
  3. Stop the ECS and provide the serial number of the faulty disk to technical support personnel to replace the local disk.

    After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.
