Ultra-high I/O ECSs
Overview
Ultra-high I/O ECSs use high-performance local NVMe SSDs to provide high storage input/output operations per second (IOPS) and low read/write latency. You can create such ECSs on the management console.
Hyper-threading is enabled for this type of ECS by default. Each vCPU is a thread of a CPU core.
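For example, from inside a Linux ECS you can confirm that each vCPU maps to a hardware thread by checking the CPU topology reported by the guest OS. This is a generic check, not specific to any flavor:

```bash
# With hyper-threading enabled, "Thread(s) per core" is reported as 2,
# and "CPU(s)" equals the number of vCPUs of the flavor.
lscpu | grep -E "^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)"
```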
Available now: D7i, Ir7, I7, aI7, I7n, Ir7n, Ir3, and I3
| Series | Compute | Disk Type | Network |
|---|---|---|---|
| D7i | 3rd Generation Intel® Xeon® Scalable processor | Large-capacity local NVMe SSDs | - |
| Ir7 | 3rd Generation Intel® Xeon® Scalable processor | Two small-capacity local NVMe SSDs | - |
| I7 | 3rd Generation Intel® Xeon® Scalable processor | Large-capacity local NVMe SSDs | - |
| aI7 | Next-generation scalable processor | Large-capacity local NVMe SSDs | - |
| Ir7n | 3rd Generation Intel® Xeon® Scalable processor | Local NVMe SSDs | 25GE high-speed intelligent NICs |
| I7n | 3rd Generation Intel® Xeon® Scalable processor | Local NVMe SSDs | - |
| Ir3 | 2nd Generation Intel® Xeon® Scalable processor | Local NVMe SSDs | Huawei-proprietary 25GE high-speed intelligent NICs |
| I3 | Intel® Xeon® Scalable processor | Local NVMe SSDs | - |
Ultra-high I/O D7i
Overview
D7i ECSs use the 3rd Generation Intel® Xeon® Scalable processor and large-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB)
- Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| d7i.2xlarge.4 | 8 | 32 | 10/3 | 120 | 100 | 4 | 4 | 64 | 1 × 15,360 GiB NVMe | KVM |
| d7i.4xlarge.4 | 16 | 64 | 15/6 | 200 | 150 | 4 | 6 | 96 | 2 × 15,360 GiB NVMe | KVM |
| d7i.8xlarge.4 | 32 | 128 | 25/12 | 400 | 300 | 8 | 8 | 192 | 4 × 15,360 GiB NVMe | KVM |
| d7i.12xlarge.4 | 48 | 192 | 30/18 | 500 | 400 | 16 | 8 | 256 | 6 × 15,360 GiB NVMe | KVM |
| d7i.16xlarge.4 | 64 | 256 | 35/24 | 600 | 500 | 16 | 8 | 256 | 8 × 15,360 GiB NVMe | KVM |
| d7i.24xlarge.4 | 96 | 384 | 44/36 | 800 | 800 | 32 | 8 | 256 | 12 × 15,360 GiB NVMe | KVM |
Ultra-high I/O Ir7
Overview
Ir7 ECSs use the 3rd Generation Intel® Xeon® Scalable processor and two small-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
Ir7 ECSs are only sold in Chinese mainland regions.
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB)
- Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ir7.large.4 | 2 | 8 | 3/0.8 | 40 | 50 | 2 | 3 | 2 × 50 | KVM |
| ir7.xlarge.4 | 4 | 16 | 6/1.5 | 80 | 50 | 2 | 3 | 2 × 100 | KVM |
| ir7.2xlarge.4 | 8 | 32 | 15/3.1 | 150 | 100 | 4 | 4 | 2 × 200 | KVM |
| ir7.4xlarge.4 | 16 | 64 | 20/6.2 | 300 | 150 | 4 | 6 | 2 × 400 | KVM |
| ir7.8xlarge.4 | 32 | 128 | 30/12 | 400 | 300 | 8 | 8 | 2 × 800 | KVM |
| ir7.16xlarge.4 | 64 | 256 | 40/25 | 600 | 500 | 16 | 8 | 2 × 1,600 | KVM |
Ultra-high I/O I7
Overview
I7 ECSs use the 3rd Generation Intel® Xeon® Scalable processor and large-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
I7 ECSs are only sold in Chinese mainland regions.
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| i7.2xlarge.4 | 8 | 32 | 10/3 | 120 | 100 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7.4xlarge.4 | 16 | 64 | 15/6 | 200 | 150 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7.8xlarge.4 | 32 | 128 | 25/12 | 400 | 300 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7.12xlarge.4 | 48 | 192 | 30/18 | 500 | 400 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7.16xlarge.4 | 64 | 256 | 35/24 | 600 | 500 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7.24xlarge.4 | 96 | 384 | 44/36 | 800 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |
| i7.2xlarge.8 | 8 | 64 | 10/3 | 120 | 100 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7.4xlarge.8 | 16 | 128 | 15/6 | 200 | 150 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7.8xlarge.8 | 32 | 256 | 25/12 | 400 | 300 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7.12xlarge.8 | 48 | 384 | 30/18 | 500 | 400 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7.16xlarge.8 | 64 | 512 | 35/24 | 600 | 500 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7.24xlarge.8 | 96 | 768 | 44/36 | 800 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |
Ultra-high I/O aI7
Overview
aI7 ECSs use the next-generation scalable processor and large-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ai7.2xlarge.8 | 8 | 64 | 4/2.5 | 100 | 8 | 8 | 64 | 1 × 1,600 GiB NVMe | KVM |
| ai7.4xlarge.8 | 16 | 128 | 8/5 | 200 | 16 | 8 | 128 | 2 × 1,600 GiB NVMe | KVM |
| ai7.8xlarge.8 | 32 | 256 | 15/8 | 300 | 16 | 8 | 256 | 4 × 1,600 GiB NVMe | KVM |
| ai7.12xlarge.8 | 48 | 384 | 22/12 | 400 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| ai7.16xlarge.8 | 64 | 512 | 28/16 | 550 | 24 | 12 | 256 | 8 × 1,600 GiB NVMe | KVM |
| ai7.24xlarge.8 | 96 | 768 | 40/25 | 800 | 24 | 12 | 256 | 12 × 1,600 GiB NVMe | KVM |
Ultra-high I/O Ir7n
Overview
Ir7n ECSs use the 3rd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance, 25GE high-speed intelligent NICs to support ultra-high network bandwidth and PPS, and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB)
- Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| ir7n.large.4 | 2 | 8 | 3/0.9 | 40 | 50 | 2 | 3 | 32 | 2 × 50 | KVM |
| ir7n.xlarge.4 | 4 | 16 | 6/1.8 | 80 | 50 | 2 | 3 | 32 | 2 × 100 | KVM |
| ir7n.2xlarge.4 | 8 | 32 | 15/3.6 | 150 | 100 | 4 | 4 | 64 | 2 × 200 | KVM |
| ir7n.4xlarge.4 | 16 | 64 | 20/7.3 | 300 | 150 | 4 | 6 | 96 | 2 × 400 | KVM |
| ir7n.8xlarge.4 | 32 | 128 | 30/14.5 | 400 | 300 | 8 | 8 | 192 | 2 × 800 | KVM |
| ir7n.16xlarge.4 | 64 | 256 | 40/29 | 600 | 500 | 16 | 8 | 256 | 2 × 1,600 | KVM |
Ultra-high I/O I7n
Overview
I7n ECSs use the 3rd Generation Intel® Xeon® Scalable processors and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| i7n.2xlarge.4 | 8 | 32 | 10/3.4 | 120 | 100 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7n.4xlarge.4 | 16 | 64 | 15/6.7 | 200 | 150 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7n.8xlarge.4 | 32 | 128 | 25/13.5 | 400 | 300 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7n.12xlarge.4 | 48 | 192 | 30/20 | 500 | 400 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7n.16xlarge.4 | 64 | 256 | 35/27 | 600 | 500 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7n.24xlarge.4 | 96 | 420 | 44/40 | 800 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |
| i7n.2xlarge.8 | 8 | 64 | 10/3.4 | 120 | 100 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7n.4xlarge.8 | 16 | 128 | 15/6.7 | 200 | 150 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7n.8xlarge.8 | 32 | 256 | 25/13.5 | 400 | 300 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7n.12xlarge.8 | 48 | 384 | 30/20 | 500 | 400 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7n.16xlarge.8 | 64 | 512 | 35/27 | 600 | 500 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7n.24xlarge.8 | 96 | 768 | 44/40 | 800 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |
Ultra-high I/O Ir3
Overview
Ir3 ECSs use the 2nd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance, Huawei-proprietary 25GE high-speed intelligent NICs to support ultra-high network bandwidth and PPS, and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB)
- Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Network Connections (10,000) | Max. NIC Queues | Local Disks (GiB) | Max. NICs | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ir3.large.4 | 2 | 8 | 4/1.2 | 40 | 50 | 2 | 2 × 50 | 2 | KVM |
| ir3.xlarge.4 | 4 | 16 | 8/2.4 | 80 | 50 | 2 | 2 × 100 | 3 | KVM |
| ir3.2xlarge.4 | 8 | 32 | 15/4.5 | 140 | 100 | 4 | 2 × 200 | 4 | KVM |
| ir3.4xlarge.4 | 16 | 64 | 20/9 | 250 | 150 | 8 | 2 × 400 | 8 | KVM |
| ir3.8xlarge.4 | 32 | 128 | 30/18 | 450 | 300 | 16 | 2 × 800 | 8 | KVM |
Ultra-high I/O I3
Overview
I3 ECSs use Intel® Xeon® Scalable processors and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.
Notes
For details, see Notes.
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Local Disks (GiB) | Max. NICs | Virtualization |
|---|---|---|---|---|---|---|---|---|
| i3.2xlarge.8 | 8 | 64 | 2.5/2.5 | 100 | 4 | 1 × 1,600 GiB NVMe | 4 | KVM |
| i3.4xlarge.8 | 16 | 128 | 5/5 | 150 | 4 | 2 × 1,600 GiB NVMe | 8 | KVM |
| i3.8xlarge.8 | 32 | 256 | 10/10 | 200 | 8 | 4 × 1,600 GiB NVMe | 8 | KVM |
| i3.12xlarge.8 | 48 | 384 | 15/15 | 240 | 8 | 6 × 1,600 GiB NVMe | 8 | KVM |
| i3.15xlarge.8 | 60 | 512 | 25/25 | 500 | 16 | 7 × 1,600 GiB NVMe | 8 | KVM |
| i3.16xlarge.8 | 64 | 512 | 25/25 | 500 | 16 | 8 × 1,600 GiB NVMe | 8 | KVM |
Scenarios
- High-performance relational databases
- NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch
Local Disk Performance
Table 10 lists the local disk IOPS of D7i instance flavors. Table 11 lists the performance of a single local disk used by a D7i ECS.
Table 10: Local disk IOPS of D7i ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| d7i.2xlarge.4 | 960,000 |
| d7i.4xlarge.4 | 1,920,000 |
| d7i.8xlarge.4 | 3,840,000 |
| d7i.12xlarge.4 | 5,760,000 |
| d7i.16xlarge.4 | 7,680,000 |
| d7i.24xlarge.4 | 11,520,000 |

Table 11: Performance of a single local disk used by a D7i ECS

| Metric | Performance |
|---|---|
| Disk capacity | 15.36 TB |
| IOPS for random 4 KB read | 960,000 |
| IOPS for random 4 KB write | 75,000 |
| Read throughput | 4.3 GiB/s |
| Write throughput | 3.8 GiB/s |
| Access latency | Within microseconds |
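The per-flavor maximums in Table 10 are the number of local disks multiplied by the per-disk read IOPS in Table 11; for example, d7i.24xlarge.4 has 12 disks × 960,000 IOPS = 11,520,000 IOPS. If you want to verify read performance on your own ECS, a minimal fio sketch along the following lines can be used. The device path, queue depth, and job count are illustrative assumptions rather than values from this document, and you should only benchmark a disk that holds no data you need.

```bash
# Measure random 4 KB read IOPS on one local NVMe disk (illustrative parameters).
# /dev/nvme0n1 is an example device name; tune iodepth/numjobs for your flavor.
fio --name=local_nvme_randread \
    --filename=/dev/nvme0n1 \
    --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k \
    --iodepth=128 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```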
Table 12 lists the local disk IOPS of Ir7 instance flavors.
Table 12: Local disk IOPS of Ir7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir7.large.4 | 28,125 |
| ir7.xlarge.4 | 56,250 |
| ir7.2xlarge.4 | 112,500 |
| ir7.4xlarge.4 | 225,000 |
| ir7.8xlarge.4 | 450,000 |
| ir7.16xlarge.4 | 900,000 |
Table 13 lists the local disk IOPS of I7 instance flavors. Table 14 lists the performance of a single local disk used by an I7 ECS.
Table 13: Local disk IOPS of I7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i7.2xlarge.4 | 900,000 |
| i7.4xlarge.4 | 1,800,000 |
| i7.8xlarge.4 | 3,600,000 |
| i7.12xlarge.4 | 5,400,000 |
| i7.16xlarge.4 | 7,200,000 |
| i7.24xlarge.4 | 10,800,000 |

Table 14: Performance of a single local disk used by an I7 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 250,000 |
| Read throughput | 6.2 GiB/s |
| Write throughput | 2.1 GiB/s |
| Access latency | Within microseconds |
Table 15 lists the local disk IOPS of aI7 instance flavors. Table 16 lists the performance of a single local disk used by an aI7 ECS.
Table 15: Local disk IOPS of aI7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ai7.2xlarge.8 | 900,000 |
| ai7.16xlarge.8 | 7,200,000 |
| ai7.24xlarge.8 | 10,800,000 |

Table 16: Performance of a single local disk used by an aI7 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 200,000 |
| Read throughput | 6.6 GiB/s |
| Write throughput | 2 GiB/s |
| Access latency | Within microseconds |
Table 17 lists the local disk IOPS of Ir7n instance flavors.
Table 17: Local disk IOPS of Ir7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir7n.large.4 | 28,125 |
| ir7n.xlarge.4 | 56,250 |
| ir7n.2xlarge.4 | 112,500 |
| ir7n.4xlarge.4 | 225,000 |
| ir7n.8xlarge.4 | 450,000 |
| ir7n.16xlarge.4 | 900,000 |
Table 18 lists the local disk IOPS of I7n instance flavors. Table 19 lists the performance of a single local disk used by an I7n ECS.
Table 18: Local disk IOPS of I7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i7n.2xlarge.4 | 900,000 |
| i7n.8xlarge.4 | 3,600,000 |
| i7n.12xlarge.4 | 5,400,000 |
| i7n.16xlarge.4 | 7,200,000 |
| i7n.24xlarge.4 | 10,800,000 |

Table 19: Performance of a single local disk used by an I7n ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 250,000 |
| Read throughput | 6.2 GiB/s |
| Write throughput | 2.1 GiB/s |
| Access latency | Within microseconds |
Table 20 lists the local disk IOPS of Ir3 instance flavors.
Table 20: Local disk IOPS of Ir3 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir3.large.4 | 25,000 |
| ir3.xlarge.4 | 50,000 |
| ir3.2xlarge.4 | 100,000 |
| ir3.4xlarge.4 | 200,000 |
| ir3.8xlarge.4 | 400,000 |
Table 21 lists the local disk IOPS of I3 instance flavors. Table 22 lists the performance of a single local disk used by an I3 ECS.
Notes
- For details about the OSs supported by an ultra-high I/O ECS, see OSs Supported by Different Types of ECSs.
- If the host where an ultra-high I/O ECS is deployed is faulty, the ECS cannot be restored through live migration.
- If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
- In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
- Ultra-high I/O ECSs do not support specifications change.
- Ultra-high I/O ECSs do not support local disk snapshots or backups.
- Ultra-high I/O ECSs can use local disks, and can also have EVS disks attached to provide a larger storage size. Note the following when using the two types of storage media:
- Only an EVS disk, not a local disk, can be used as the system disk of an ultra-high I/O ECS.
- Both EVS disks and local disks can be used as data disks of an ultra-high I/O ECS.
- An ultra-high I/O ECS can have a maximum of 60 attached disks (including VBD, SCSI, and local disks).
- Modify the fstab file to set automatic disk mounting at ECS start (see the sketch after this list for an example). For details, see Initializing a Linux Data Disk (Less Than or Equal to 2 TiB).
- The local disk data of an ultra-high I/O ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your applications do not provide data reliability at the application layer, it is a good practice to use EVS disks to build your ECS.
- When an ultra-high I/O ECS is deleted, the data on local NVMe SSDs will also be automatically deleted, which can take some time. As a result, an ultra-high I/O ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
- The data reliability of local disks depends on the reliability of physical servers and hard disks, which are SPOF-prone. It is a good practice to use data redundancy mechanisms at the application layer to ensure data availability. Use EVS disks to store service data that needs to be stored for a long time.
- The device name of a local disk attached to an ultra-high I/O ECS is /dev/nvme0n1 or /dev/nvme0n2.
- Local disks used by Ir3 ECSs can be split for multiple ECSs to use. If a local disk is damaged, the ECSs that use this disk will be affected.
You are advised to add Ir3 ECSs to an ECS group during the creation process to prevent such failures. For details, see Managing ECS Groups.
- The basic resources, including vCPUs, memory, and image of an ultra-high I/O ECS will continue to be billed after the ECS is stopped. To stop the ECS from being billed, delete it and its associated resources.
- The %util value of a local disk indicates the percentage of CPU time during which I/O requests were issued to the device. For devices that serve requests serially, it is a good indicator of how busy the device is. For devices that process requests in parallel, such as NVMe SSD local disks, %util does not indicate how busy they really are.
- The logical volumes of Ir7, Ir7n, and Ir3 ECSs do not support using the blk flush command to flush cached data to the storage device.
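As referenced in the notes above, local disks appear as /dev/nvme* devices and are not mounted automatically. The following is a minimal sketch for formatting a local NVMe disk and adding an fstab entry; the device name and mount point are taken from the examples in this document, and the nofail option is an added suggestion (not part of the original procedure) to keep the ECS from failing to boot if the disk is unavailable.

```bash
# Format the local NVMe disk (this erases any data on it) and mount it.
mkfs.ext4 /dev/nvme0n1
mkdir -p /mnt/nvme0
mount /dev/nvme0n1 /mnt/nvme0

# Add a UUID-based entry to /etc/fstab so the disk is mounted at ECS start.
# "nofail" (an added suggestion) lets the ECS boot even if the disk is missing.
echo "UUID=$(blkid -s UUID -o value /dev/nvme0n1) /mnt/nvme0 ext4 defaults,nofail 0 2" >> /etc/fstab
```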
Handling Damaged Local Disks Used by I-Series ECSs
If a local disk attached to an ECS is damaged, perform the following operations to handle this issue:
For a Linux ECS:
- Detach the faulty local disk.
  - Query the mount point of the faulty disk, for example by running df -TH.
  - Run the following command to detach the faulty local disk. In this example, the mount point of /dev/nvme0n1 is /mnt/nvme0:
    umount /mnt/nvme0
- Check whether the mount point of the faulty disk is configured in /etc/fstab of the ECS. If it is, comment it out to prevent the ECS from entering maintenance mode when it starts up after the faulty disk is replaced.
  - Run the following command to obtain the partition UUID. In this example, obtain the UUID of the /dev/nvme0n1 partition:
    blkid /dev/nvme0n1
    Information similar to the following is displayed:
    /dev/nvme0n1: UUID="b9a07b7b-9322-4e05-ab9b-14b8050cd8cc" TYPE="ext4"
  - Run the following command to check whether /etc/fstab contains the automatic mounting information for the disk partition:
    cat /etc/fstab
    Information similar to the following is displayed:
    UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc /mnt ext4 defaults 0 0
  - If the mounting information exists, perform the following steps to delete or comment it out:
    - Run the following command to edit /etc/fstab:
      vi /etc/fstab
    - Press i to enter editing mode.
    - Delete or comment out the automatic mounting information of the disk partition. For example, add a pound sign (#) at the beginning of the line:
      # UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc /mnt ext4 defaults 0 0
    - Press Esc to exit editing mode. Enter :wq and press Enter to save the settings and exit.
- Run the following command to obtain the serial number of the local disk. For example, if the nvme0n1 disk is faulty, obtain the serial number of the nvme0n1 disk:
  ll /dev/disk/by-id/
  Figure 2: Querying the serial number of the faulty local disk
- Stop the ECS and provide the serial number of the faulty disk to technical support personnel to replace the local disk.
  After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.
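The unmount and fstab cleanup steps above can be combined into a small helper script. This is a sketch under the assumption that the faulty device path is passed as an argument; the script name and structure are illustrative and not part of the original procedure.

```bash
#!/bin/bash
# Sketch: prepare a faulty local NVMe disk for replacement.
# Usage: ./prepare_disk_replacement.sh /dev/nvme0n1   (device path is an example)
set -euo pipefail
DEV="$1"

# Unmount the disk if it is currently mounted.
MOUNT_POINT=$(lsblk -no MOUNTPOINT "$DEV")
if [ -n "$MOUNT_POINT" ]; then
    umount "$MOUNT_POINT"
fi

# Comment out any /etc/fstab entry that references the disk's UUID so the
# ECS does not enter maintenance mode on the next startup.
UUID=$(blkid -s UUID -o value "$DEV" || true)
if [ -n "$UUID" ]; then
    sed -i "s|^[^#]*UUID=${UUID}|#&|" /etc/fstab
fi

# List disk serial numbers so the faulty one can be reported to support.
ls -l /dev/disk/by-id/
```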
For a Windows ECS:
- Open Computer Management, choose Computer Management (Local) > Storage > Disk Management, and view the disk ID, for example, Disk 1.
- Open Windows PowerShell as an administrator and run the following command to query the disk on which the logical disk is created:
Get-CimInstance -ClassName Win32_LogicalDiskToPartition | select Antecedent, Dependent | fl
Figure 3: Querying the disk on which the logical disk is created
- Run the following command to obtain the serial number of the faulty disk according to the mapping between the disk ID and serial number:
Get-Disk | select Number, SerialNumber
Figure 4: Querying the mapping between the disk ID and serial number
If the serial number cannot be obtained by running the preceding command, see Using a Serial Number to Obtain the Disk Device Name (Windows).
- Stop the ECS and provide the serial number of the faulty disk to technical support personnel to replace the local disk.
After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.