Disk-intensive ECSs
Overview
- They use local disks to provide high sequential read/write performance and low latency, improving file read/write performance.
- They provide powerful and stable computing capabilities, ensuring efficient data processing.
- They provide high intranet performance, including robust intranet bandwidth and packets per second (PPS), for data exchange between ECSs during peak hours.
Hyper-threading is enabled for this type of ECS by default. Each vCPU is a thread of a CPU core.
Available flavors
Available now: D7, D6, D3, and D2

| Series | Compute | Disk Type | Network |
|---|---|---|---|
| D7 | vCPU/memory ratio 1:4; 3rd Generation Intel® Xeon® Scalable processors | Local SATA HDD | 25GE high-speed intelligent NICs; ultra-high bandwidth and PPS |
| D6 | vCPU/memory ratio 1:4; 2nd Generation Intel® Xeon® Scalable processors | Local SATA HDD | 25GE high-speed intelligent NICs; ultra-high bandwidth and PPS |
| D3 | Intel® Xeon® Scalable processors | Local SAS HDD | 25GE high-speed intelligent NICs; ultra-high bandwidth and PPS |
| D2 | KVM-based; powerful and stable computing | Local SAS HDD | High intranet bandwidth and PPS |
Disk-intensive D7
Overview
D7 ECSs, with a vCPU/memory ratio of 1:4, use 3rd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance. Equipped with 25GE high-speed intelligent NICs and local SATA disks, D7 ECSs offer ultra-high network bandwidth, PPS, and local storage. The capacity of a single SATA disk is up to 3,600 GiB, and an ECS can have up to 32 such disks attached.
Notes
For details, see Notes on Using D7 ECSs.
Scenarios
Disk-intensive D7 ECSs are suitable for applications that need to process large volumes of data and require high I/O performance and rapid data switching and processing, including massively parallel processing (MPP) databases, MapReduce and Hadoop distributed computing, big data computing, distributed file systems, network file systems, and logs and data processing applications.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| d7.xlarge.4 | 4 | 16 | 5/1.7 | 60 | 50 | 2 | 3 | 32 | 2 × 3,600 | KVM |
| d7.2xlarge.4 | 8 | 32 | 10/3.5 | 120 | 100 | 4 | 4 | 64 | 4 × 3,600 | KVM |
| d7.4xlarge.4 | 16 | 64 | 20/6.7 | 240 | 150 | 4 | 6 | 96 | 8 × 3,600 | KVM |
| d7.6xlarge.4 | 24 | 96 | 25/10 | 350 | 200 | 8 | 8 | 128 | 12 × 3,600 | KVM |
| d7.8xlarge.4 | 32 | 128 | 30/13.5 | 450 | 300 | 8 | 8 | 192 | 16 × 3,600 | KVM |
| d7.12xlarge.4 | 48 | 192 | 40/20 | 650 | 400 | 16 | 8 | 256 | 24 × 3,600 | KVM |
| d7.16xlarge.4 | 64 | 256 | 42/27 | 850 | 500 | 16 | 8 | 256 | 32 × 3,600 | KVM |
Disk-intensive D6
Overview
D6 ECSs, with a vCPU/memory ratio of 1:4, use 2nd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance. Equipped with 25GE high-speed intelligent NICs and local SATA disks, D6 ECSs offer ultra-high network bandwidth, PPS, and local storage. The capacity of a single SATA disk is up to 3,600 GiB, and an ECS can have up to 36 such disks attached.
Notes
For details, see Notes on Using D6 ECSs.
Scenarios
Disk-intensive D6 ECSs are suitable for applications that need to process large volumes of data and require high I/O performance and rapid data switching and processing, including massively parallel processing (MPP) databases, MapReduce and Hadoop distributed computing, big data computing, distributed file systems, network file systems, and logs and data processing applications.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| d6.xlarge.4 | 4 | 16 | 5/2 | 60 | 50 | 2 | 3 | 2 × 3,600 | KVM |
| d6.2xlarge.4 | 8 | 32 | 10/4 | 120 | 100 | 4 | 4 | 4 × 3,600 | KVM |
| d6.4xlarge.4 | 16 | 64 | 20/7.5 | 240 | 150 | 8 | 8 | 8 × 3,600 | KVM |
| d6.6xlarge.4 | 24 | 96 | 25/11 | 350 | 200 | 8 | 8 | 12 × 3,600 | KVM |
| d6.8xlarge.4 | 32 | 128 | 30/15 | 450 | 300 | 16 | 8 | 16 × 3,600 | KVM |
| d6.12xlarge.4 | 48 | 192 | 40/22 | 650 | 400 | 16 | 8 | 24 × 3,600 | KVM |
| d6.16xlarge.4 | 64 | 256 | 42/30 | 850 | 500 | 32 | 8 | 32 × 3,600 | KVM |
| d6.18xlarge.4 | 72 | 288 | 44/34 | 900 | 700 | 32 | 8 | 36 × 3,600 | KVM |
Disk-intensive D3
Overview
D3 ECSs use Intel® Xeon® Scalable processors to offer powerful and stable computing performance. Equipped with proprietary 25GE high-speed intelligent NICs and local SAS disks, D3 ECSs offer ultra-high network bandwidth, PPS, and local storage.
Notes
For details, see Notes on Using D3 ECSs.
Scenarios
Disk-intensive D3 ECSs are suitable for applications that need to process large volumes of data and require high I/O performance and rapid data switching and processing, including massively parallel processing (MPP) databases, MapReduce and Hadoop distributed computing, big data computing, distributed file systems, network file systems, and logs and data processing applications.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|
| d3.xlarge.8 | 4 | 32 | 2.5/2.5 | 50 | 2 | 3 | 2 × 1,675 | KVM |
| d3.2xlarge.8 | 8 | 64 | 5/5 | 100 | 2 | 4 | 4 × 1,675 | KVM |
| d3.4xlarge.8 | 16 | 128 | 10/10 | 120 | 4 | 8 | 8 × 1,675 | KVM |
| d3.6xlarge.8 | 24 | 192 | 15/15 | 160 | 6 | 8 | 12 × 1,675 | KVM |
| d3.8xlarge.8 | 32 | 256 | 20/20 | 200 | 8 | 8 | 16 × 1,675 | KVM |
| d3.12xlarge.8 | 48 | 384 | 32/32 | 220 | 16 | 8 | 24 × 1,675 | KVM |
| d3.14xlarge.10 | 56 | 560 | 40/40 | 500 | 16 | 8 | 28 × 1,675 | KVM |
Disk-intensive D2
Overview
D2 ECSs are KVM-based. They use local storage for high storage performance and intranet bandwidth.
Notes
For details, see Notes on Using D2 ECSs.
Scenarios
Disk-intensive D2 ECSs are suitable for applications that need to process large volumes of data and require high I/O performance and rapid data switching and processing, including massively parallel processing (MPP) databases, MapReduce and Hadoop distributed computing, big data computing, distributed file systems, network file systems, and logs and data processing applications.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|
| d2.xlarge.8 | 4 | 32 | 3/1 | 15 | 2 | 2 × 1,675 | KVM |
| d2.2xlarge.8 | 8 | 64 | 5/2 | 30 | 2 | 4 × 1,675 | KVM |
| d2.4xlarge.8 | 16 | 128 | 8/4 | 40 | 4 | 8 × 1,675 | KVM |
| d2.6xlarge.8 | 24 | 192 | 10/6 | 50 | 6 | 12 × 1,675 | KVM |
| d2.8xlarge.8 | 32 | 256 | 13/8 | 60 | 8 | 16 × 1,675 | KVM |
| d2.12xlarge.8 | 48 | 384 | 13/13 | 90 | 8 | 24 × 1,675 | KVM |
Performance of a Single SATA HDD Disk Attached to a D7 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 3,600 GiB |
| Maximum throughput | 210 MB/s |
| Access latency | Millisecond-level |

Performance of a Single SATA HDD Disk Attached to a D6 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 3,600 GiB |
| Maximum throughput | 198 MB/s |
| Access latency | Millisecond-level |

Performance of a Single SAS HDD Disk Attached to a D3 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1,675 GiB |
| Maximum throughput | 247 MB/s |
| Access latency | Millisecond-level |

Performance of a Single SAS HDD Disk Attached to a D2 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1,675 GiB |
| Maximum throughput | 230 MB/s |
| Access latency | Millisecond-level |
Notes on Using D7 ECSs
- Currently, the following operating systems are supported (subject to the information displayed on the console):
- CentOS 6.3/6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
- SUSE Enterprise Linux Server 11 SP3/SP4 64bit
- SUSE Enterprise Linux Server 12 SP1/SP2/SP3/SP4 64bit
- Red Hat Enterprise Linux 6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
- Windows Server 2008 R2 Enterprise 64bit
- Windows Server 2012 R2 Standard 64bit
- Windows Server 2016 Standard 64bit
- Debian 8.1.0/8.2.0/8.4.0/8.5.0/8.6.0/8.7.0/8.8.0/9.0.0 64bit
- EulerOS 2.2/2.3/2.5 64bit
- Fedora 22/23/24/25/26/27/28 64bit
- OpenSUSE 13.2/15.0/15.1/42.2/42.3 64bit
- If the host where a D7 ECS is deployed is faulty, the ECS cannot be restored through live migration.
- If the host is faulty or subhealthy and needs to be repaired, you need to stop the ECS.
- In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
- D7 ECSs do not support specifications modification.
- D7 ECSs do not support local disk snapshots or backups.
- D7 ECSs can use both local disks and EVS disks to store data. In addition, they can have EVS disks attached to provide a larger storage size. Note the following when using the two types of storage media (local disks and EVS disks):
- Only an EVS disk can be used as the system disk of a D7 ECS.
- Both EVS disks and local disks can be used as data disks of a D7 ECS.
- A maximum of 24 disks (including VBD and local disks) can be attached to a D7 ECS. All 24, including the system disk, can be VBD disks. For details, see Can I Attach Multiple Disks to an ECS?
The maximum number of disks attached to an existing D7 ECS remains unchanged.
- Modify the fstab file to set automatic disk mounting at ECS start. For details, see Configuring Automatic Mounting at System Start.
- The local disk data of a D7 ECS may be lost if an exception occurs, such as physical server breakdown or local disk damage. If your application does not use the data reliability architecture, it is a good practice to use EVS disks to build your ECS.
- When a D7 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D7 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
- Do not store service data in local disks for a long time. Instead, store it in EVS disks. To improve data security, use a high availability architecture and back up data in a timely manner.
- Local disks can only be purchased during ECS creation. They cannot be separately purchased after the ECS has been created. The quantity and capacity of your local disks are determined according to the specifications of your ECS.
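The automatic-mounting step in the notes above can be sketched as a small shell helper that builds an /etc/fstab entry. This is an illustration, not the page's referenced procedure: the UUID, mount point, and filesystem are assumptions. Mounting by UUID keeps the entry valid if device names change, and "nofail" prevents a missing local disk from blocking boot.

```shell
# Build an /etc/fstab line for a local data disk (sketch; all names are examples).
fstab_line() {
    uuid="$1"   # partition UUID, e.g. from: blkid -s UUID -o value /dev/sdb1
    mnt="$2"    # mount point, e.g. /mnt/disk1
    printf 'UUID=%s %s ext4 defaults,nofail 0 2\n' "$uuid" "$mnt"
}

# Append the entry and verify it mounts cleanly before rebooting:
#   fstab_line "$(blkid -s UUID -o value /dev/sdb1)" /mnt/disk1 >> /etc/fstab
#   mount -a
fstab_line b9a07b7b-9322-4e05-ab9b-14b8050cd8cc /mnt/disk1
```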
Notes on Using D6 ECSs
- Currently, the following operating systems are supported (subject to the information displayed on the console):
- CentOS 6.3/6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
- SUSE Enterprise Linux Server 11 SP3/SP4 64bit
- SUSE Enterprise Linux Server 12 SP1/SP2/SP3/SP4 64bit
- Red Hat Enterprise Linux 6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
- Windows Server 2008 R2 Enterprise 64bit
- Windows Server 2012 R2 Standard 64bit
- Windows Server 2016 Standard 64bit
- Debian 8.1.0/8.2.0/8.4.0/8.5.0/8.6.0/8.7.0/8.8.0/9.0.0 64bit
- EulerOS 2.2/2.3/2.5/2.9 64bit
- Fedora 22/23/24/25/26/27/28 64bit
- OpenSUSE 13.2/15.0/15.1/42.2/42.3 64bit
- If the host where a D6 ECS is deployed is faulty, the ECS cannot be restored through live migration.
- If the host is faulty or subhealthy and needs to be repaired, you need to stop the ECS.
- In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
- D6 ECSs do not support specifications modification.
- D6 ECSs do not support local disk snapshots or backups.
- D6 ECSs can use both local disks and EVS disks to store data. Restrictions on using the two types of storage media are as follows:
- Only an EVS disk can be used as the system disk of a D6 ECS.
- Both EVS disks and local disks can be used as data disks of a D6 ECS.
- A maximum of 60 disks (including VBD, SCSI, and local disks) can be attached to a D6 ECS. Among the 60 disks, the maximum number of SCSI disks is 30, and the maximum number of VBD disks (including the system disk) is 24. For details, see Can I Attach Multiple Disks to an ECS?
The maximum number of disks attached to an existing D6 ECS remains unchanged.
- You can modify the fstab file to set automatic disk mounting at ECS start. For details, see Configuring Automatic Mounting at System Start.
- The local disk data of a D6 ECS may be lost if an exception occurs, such as physical server breakdown or local disk damage. If your application does not use the data reliability architecture, it is a good practice to use EVS disks to build your ECS.
- When a D6 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D6 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
- Do not store service data in local disks for a long time. Instead, store it in EVS disks. To improve data security, use a high availability architecture and back up data in a timely manner.
- Local disks can only be purchased during ECS creation. They cannot be separately purchased after the ECS has been created. The quantity and capacity of your local disks are determined according to the specifications of your ECS.
Notes on Using D3 ECSs
- Currently, the following operating systems are supported (subject to the information displayed on the console):
- CentOS 6.3/6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
- Red Hat Enterprise Linux 6.4/6.5/6.6/6.7/6.8/6.9/6.10/7.0/7.1/7.2/7.3/7.4/7.5/7.6/8.0 64bit
- Windows Server 2008 R2 Enterprise 64bit
- Windows Server 2012 R2 Standard 64bit
- Windows Server 2016 Standard 64bit
- SUSE Enterprise Linux Server 11 SP3/SP4 64bit
- SUSE Enterprise Linux Server 12 SP1/SP2/SP3/SP4 64bit
- Debian 8.1.0/8.2.0/8.4.0/8.5.0/8.6.0/8.7.0/8.8.0/9.0.0 64bit
- EulerOS 2.2/2.3/2.5 64bit
- Fedora 22/23/24/25/26/27/28 64bit
- OpenSUSE 13.2/15.0/15.1/42.2/42.3 64bit
- If the host where a D3 ECS resides becomes faulty, the ECS cannot be restored through live migration.
- If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
- In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
- D3 ECSs do not support specifications modification.
- D3 ECSs do not support local disk snapshots or backups.
- D3 ECSs can use both local disks and EVS disks to store data. In addition, they can have EVS disks attached to provide a larger storage size. Restrictions on using the two types of storage media are as follows:
- Only an EVS disk, not a local disk, can be used as the system disk of a D3 ECS.
- Both EVS disks and local disks can be used as data disks of a D3 ECS.
- A maximum of 60 disks (including VBD, SCSI, and local disks) can be attached to a D3 ECS. Among the 60 disks, the maximum number of SCSI disks is 30, and the maximum number of VBD disks (including the system disk) is 24. For details, see Can I Attach Multiple Disks to an ECS?
The maximum number of disks attached to an existing D3 ECS remains unchanged.
- You can modify the fstab file to set automatic disk mounting at ECS start. For details, see Setting Automatic Mounting at System Start.
- The local disk data of a D3 ECS may be lost if an exception occurs, such as physical server breakdown or local disk damage. If your application does not use the data reliability architecture, it is a good practice to use EVS disks to build your ECS.
- When a D3 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D3 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
- Do not store service data in local disks for a long time. Instead, store it in EVS disks. To improve data security, use a high availability architecture and back up data in a timely manner.
- Local disks can only be purchased during ECS creation. The quantity and capacity of your local disks are determined according to the specifications of your ECS.
Notes on Using D2 ECSs
- Currently, the following operating systems are supported (subject to the information displayed on the console):
- CentOS 6.7/6.8/7.2/7.3/7.4 64bit
- SUSE Enterprise Linux Server 11 SP3/SP4 64bit
- SUSE Enterprise Linux Server 12 SP1/SP2 64bit
- Red Hat Enterprise Linux 6.8/7.3 64bit
- Windows Server 2008 R2 Enterprise 64bit
- Windows Server 2012 R2 Standard 64bit
- Windows Server 2016 Standard 64bit
- Debian 8.7/9.0.0 64bit
- EulerOS 2.2 64bit
- Fedora 25/26 64bit
- OpenSUSE 42.2/42.3 64bit
- If the host where a D2 ECS resides becomes faulty, the ECS cannot be restored through live migration.
- If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
- In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
- To improve network performance, you can set the NIC MTU of a D2 ECS to 8888.
- D2 ECSs do not support specifications modification.
- D2 ECSs do not support local disk snapshots or backups.
- D2 ECSs do not support automatic recovery.
- D2 ECSs can use both local disks and EVS disks to store data. In addition, they can have EVS disks attached to provide a larger storage size. Restrictions on using the two types of storage media are as follows:
- Only an EVS disk, not a local disk, can be used as the system disk of a D2 ECS.
- Both EVS disks and local disks can be used as data disks of a D2 ECS.
- A D2 ECS can have a maximum of 60 attached disks (including VBD, SCSI, and local disks). Among the 60 disks, the maximum number of SCSI disks is 30, and the maximum number of VBD disks is 24 (including the system disk). For details about constraints, see Can I Attach Multiple Disks to an ECS?
- You can modify the fstab file to set automatic disk mounting at ECS start. For details, see Setting Automatic Mounting at System Start.
- The basic resources (vCPUs, memory, and image) of a stopped D2 ECS continue to be billed. To stop the billing, delete the ECS and its associated resources.
- The local disk data of a D2 ECS may be lost if an exception occurs, such as physical server breakdown or local disk damage. If your application does not use the data reliability architecture, it is a good practice to use EVS disks to build your ECS.
- When a D2 ECS is deleted, its local disk data will also be automatically deleted, which can take some time. As a result, a D2 ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
- Do not store service data in local disks for a long time. Instead, store it in EVS disks. To improve data security, use a high availability architecture and back up data in a timely manner.
- Local disks can only be purchased during ECS creation. The quantity and capacity of your local disks are determined according to the specifications of your ECS.
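As an illustration of the MTU note above, the value can be applied immediately and persisted across reboots. This is a sketch under assumptions: the interface name (eth0) and the ifcfg config path vary by image and distribution.

```shell
# Sketch: set the NIC MTU of a D2 ECS to 8888 (value from the note above).
# "eth0" and the ifcfg path are assumptions that vary by image.
#
# Apply immediately (as root):
#   ip link set dev eth0 mtu 8888
#
# Persist on RHEL/CentOS-style images in
# /etc/sysconfig/network-scripts/ifcfg-eth0 by adding the line:
#   MTU=8888
```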
Handling Damaged Local Disks Attached to a D-Series ECS
If a local disk attached to an ECS is damaged, perform the following operations to handle this issue:
- Detach the faulty local disk.
- Run the following command to query the mount point of the faulty disk:
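The query command itself is not shown on this page; one common option (an assumption, not necessarily the original command) is to list mounted filesystems and look for the faulty device in the output:

```shell
# List mounted filesystems with their mount points, then locate the faulty
# device (for example, /dev/sda1) in the output.
df -h
```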
- Run the following command to detach the faulty local disk:
In the example shown in Figure 1, the mount point of /dev/sda1 is /mnt/sda1. Run the following command:
umount /mnt/sda1
- Check whether the mount point of the faulty disk is configured in /etc/fstab of the ECS. If yes, comment out the mount point to prevent the ECS from entering the maintenance mode upon ECS startup after the faulty disk is replaced.
- Run the following command to obtain the partition UUID:
In this example, run the following command to obtain the UUID of the /dev/sda1 partition:
blkid /dev/sda1
Information similar to the following is displayed:
/dev/sda1: UUID="b9a07b7b-9322-4e05-ab9b-14b8050cd8cc" TYPE="ext4"
- Run the following command to check whether /etc/fstab contains the automatic mounting information about the disk partition:
cat /etc/fstab
Information similar to the following is displayed:
UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc /mnt ext4 defaults 0 0
- If the mounting information exists, perform the following steps to delete it.
- Run the following command to edit /etc/fstab:
vi /etc/fstab
Use the UUID obtained in 2.a to check whether the mounting information of the local disk is contained in /etc/fstab. If yes, comment out the information. This prevents the ECS from entering the maintenance mode upon ECS startup after the local disk is replaced.
- Press i to enter editing mode.
- Delete or comment out the automatic mounting information of the disk partition.
For example, add a pound sign (#) at the beginning of the following command line to comment out the automatic mounting information:
# UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc /mnt ext4 defaults 0 0
- Press Esc to exit editing mode. Enter :wq and press Enter to save the settings and exit.
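The commenting-out step above can also be scripted. A minimal sketch, assuming the UUID obtained from blkid; the function name and the fstab path parameter are hypothetical:

```shell
# Prefix the matching /etc/fstab line with "# " so a replaced or missing local
# disk cannot drop the ECS into maintenance mode at boot.
comment_out_fstab_entry() {
    uuid="$1"
    fstab="$2"   # rehearse on a copy of /etc/fstab first
    sed -i "s|^UUID=${uuid}|# &|" "$fstab"
}
```

For example, `comment_out_fstab_entry b9a07b7b-9322-4e05-ab9b-14b8050cd8cc /etc/fstab` comments out the entry shown earlier.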
- Run the following command to obtain the WWN of the local disk:
For example, if the sdc disk is faulty, obtain the WWN of the sdc disk.
ll /dev/disk/by-id/ | grep wwn-
Figure 2 Querying the WWN of the faulty local disk
- Stop the ECS and provide the WWN of the faulty disk to technical support personnel to replace the local disk.
After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.
For a Windows ECS:
- Open Computer Management, choose Computer Management (Local) > Storage > Disk Management, and view the disk ID, for example, Disk 1.
- Open Windows PowerShell as an administrator and obtain the serial number of the faulty disk according to the mapping between the disk ID and serial number.
Get-Disk | select Number, SerialNumber
Figure 3 Querying the mapping between the disk ID and serial number
If the serial number cannot be obtained by running the preceding command, see Using a Serial Number to Obtain the Disk Name (Windows).
- Stop the ECS and provide the serial number of the faulty disk to technical support personnel to replace the local disk.
After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.