Ultra-high I/O ECSs
Overview
Ultra-high I/O ECSs use high-performance local NVMe SSDs to provide high storage input/output operations per second (IOPS) and low read/write latency. You can create such ECSs, with the local NVMe SSDs already attached, on the management console.
Table 1 Ultra-high I/O ECS series

| Series | Compute | Disk Type | Network |
|---|---|---|---|
| I3nl | | | |
| I3 | | | |
| I7n | | | |
Scenarios
- Ultra-high I/O ECSs are suitable for high-performance relational databases.
- They are also suitable for NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch.
Specifications
Table 2 I7n ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| i7n.2xlarge.4 | 8 | 32 | 10/3.4 | 120 | 4 | 4 | 64 | 1 × 1.6 TB NVMe | KVM |
| i7n.8xlarge.4 | 32 | 128 | 25/13.5 | 400 | 8 | 8 | 192 | 4 × 1.6 TB NVMe | KVM |
| i7n.12xlarge.4 | 48 | 192 | 30/20 | 500 | 16 | 8 | 256 | 6 × 1.6 TB NVMe | KVM |
| i7n.16xlarge.4 | 64 | 256 | 35/27 | 600 | 16 | 8 | 256 | 8 × 1.6 TB NVMe | KVM |
| i7n.24xlarge.4 | 96 | 384 | 44/40 | 800 | 32 | 8 | 256 | 12 × 1.6 TB NVMe | KVM |
Table 3 I3 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|
| i3.2xlarge.8 | 8 | 64 | 2.5/2.5 | 100 | 4 | 4 | 1 × 1,600 GiB NVMe | KVM |
| i3.4xlarge.8 | 16 | 128 | 5/5 | 150 | 4 | 8 | 2 × 1,600 GiB NVMe | KVM |
| i3.8xlarge.8 | 32 | 256 | 10/10 | 200 | 8 | 8 | 4 × 1,600 GiB NVMe | KVM |
| i3.12xlarge.8 | 48 | 384 | 15/15 | 240 | 8 | 8 | 6 × 1,600 GiB NVMe | KVM |
| i3.16xlarge.8 | 64 | 512 | 25/25 | 500 | 16 | 8 | 8 × 1,600 GiB NVMe | KVM |
Table 4 I3nl ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|
| i3nl.2xlarge.8 | 8 | 64 | 15/4 | 120 | 4 | 4 | 1 × 1.6 TiB NVMe | KVM |
| i3nl.4xlarge.8 | 16 | 128 | 20/8 | 224 | 8 | 8 | 2 × 1.6 TiB NVMe | KVM |
| i3nl.8xlarge.8 | 32 | 256 | 30/16 | 440 | 16 | 8 | 4 × 1.6 TiB NVMe | KVM |
| i3nl.12xlarge.8 | 48 | 384 | 35/27 | 750 | 16 | 8 | 6 × 1.6 TiB NVMe | KVM |
| i3nl.16xlarge.8 | 64 | 512 | 40/32 | 800 | 32 | 8 | 8 × 1.6 TiB NVMe | KVM |
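The Max. NIC Queues column above caps how many queues a NIC on each flavor can use. On a Linux ECS you can check and adjust the enabled queue count with ethtool. The sketch below is illustrative only: eth0 is an example interface name and 8 is an example queue count, not values taken from the tables above.

```bash
# Show the supported ("Pre-set maximums") and currently enabled
# queue counts for the primary NIC (eth0 is an example name).
ethtool -l eth0

# Enable more combined queues, up to the flavor's "Max. NIC Queues"
# value (8 here is an illustrative value for an 8-queue flavor).
ethtool -L eth0 combined 8
```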
Local Disk Performance
Table 5 and Table 6 list the IOPS performance of local disks and specifications of a single local disk attached to an I7n ECS.
Table 5 IOPS performance of local disks used by I7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i7n.2xlarge.4 | 900,000 |
| i7n.8xlarge.4 | 3,600,000 |
| i7n.12xlarge.4 | 5,400,000 |
| i7n.16xlarge.4 | 7,200,000 |
| i7n.24xlarge.4 | 10,800,000 |
Table 6 Specifications of a single local disk attached to an I7n ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 250,000 |
| Read throughput | 6.2 GiB/s |
| Write throughput | 2.1 GiB/s |
| Access latency | Within microseconds |
Table 7 and Table 8 list the IOPS performance of local disks and specifications of a single local disk attached to an I3 ECS.
Table 7 IOPS performance of local disks used by I3 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i3.2xlarge.8 | 750,000 |
| i3.4xlarge.8 | 1,500,000 |
| i3.8xlarge.8 | 3,000,000 |
| i3.12xlarge.8 | 4,500,000 |
| i3.15xlarge.8 | 5,250,000 |
| i3.16xlarge.8 | 6,000,000 |
Table 8 Specifications of a single local disk attached to an I3 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 750,000 |
| IOPS for random 4 KB write | 200,000 |
| Read throughput | 2.9 GiB/s |
| Write throughput | 1.9 GiB/s |
| Access latency | Within microseconds |
Table 9 and Table 10 list the IOPS performance of local disks and specifications of a single local disk attached to an I3nl ECS.
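If you want to verify rated IOPS figures such as those above on your own ECS, a benchmarking tool such as fio can approximate the random 4 KB read test. This is a minimal sketch, assuming fio is installed and that /dev/nvme0n1 is an unused local disk; the queue depth and job count are illustrative and may need tuning to approach the rated maximums.

```bash
# Random 4 KB read benchmark against a raw local NVMe disk.
# Reads are non-destructive, but do not run this against a disk
# that is in use by a file system or an application.
fio --name=rand4k-read \
    --filename=/dev/nvme0n1 \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=128 --numjobs=8 \
    --runtime=60 --time_based --group_reporting
```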
Notes
- If the host where an ultra-high I/O ECS is deployed is faulty, the ECS cannot be restored through live migration.
- If the host is faulty or sub-healthy, you need to stop the ECS for hardware repair.
- In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
- Ultra-high I/O ECSs do not support specification (flavor) changes.
- Ultra-high I/O ECSs do not support local disk snapshots or backups.
- Ultra-high I/O ECSs can use local disks, and can also have EVS disks attached to provide larger storage. Note the following when using the two types of storage:
  - Only an EVS disk, not a local disk, can be used as the system disk of an ultra-high I/O ECS.
  - Both EVS disks and local disks can be used as data disks of an ultra-high I/O ECS.
  - An ultra-high I/O ECS can have a maximum of 60 attached disks (including VBD, SCSI, and local disks).
  - Modify the /etc/fstab file to mount disks automatically at ECS startup. For details, see Configuring Automatic Mounting at System Start; a minimal sketch is also shown after these notes.
- The local disk data of an ultra-high I/O ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your application does not have a data reliability architecture, it is a good practice to use EVS disks to build your ECS.
- When an ultra-high I/O ECS is deleted, the data on its local NVMe SSDs is automatically deleted as well, which can take some time. As a result, deleting an ultra-high I/O ECS takes longer than deleting other ECSs. Back up the data before deleting such an ECS.
- The data reliability of local disks depends on the reliability of the physical servers and hard disks, which are prone to single points of failure (SPOFs). It is a good practice to use data redundancy mechanisms at the application layer to ensure data availability. Use EVS disks to store service data that needs to be retained long term.
- The device name of a local disk attached to an ultra-high I/O ECS is /dev/nvme0n1 or /dev/nvme0n2.
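The following is a minimal sketch of initializing a local disk and configuring automatic mounting, as referenced in the notes above. It assumes a Linux ECS where /dev/nvme0n1 is an unformatted local disk and /mnt/nvme0 is an arbitrary mount point; adapt the device name, file system type, and paths to your environment.

```bash
# Identify the local NVMe disks and confirm they are unmounted.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Create a file system on the local disk (this erases any existing data).
mkfs.ext4 /dev/nvme0n1

# Mount the disk.
mkdir -p /mnt/nvme0
mount /dev/nvme0n1 /mnt/nvme0

# Append an fstab entry keyed by UUID so the disk is mounted automatically
# at ECS startup. "nofail" lets the ECS boot even if the disk is absent.
echo "UUID=$(blkid -s UUID -o value /dev/nvme0n1) /mnt/nvme0 ext4 defaults,nofail 0 2" >> /etc/fstab

# Verify that the new entry mounts cleanly before the next reboot.
umount /mnt/nvme0 && mount -a
```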