Updated on 2024-02-21 GMT+08:00

Ultra-high I/O ECSs

Overview

Ultra-high I/O ECSs use high-performance local NVMe SSDs to provide high storage input/output operations per second (IOPS) and low read/write latency. You can create such ECSs on the management console.

Available now: Ir3, Ir7, I7, aI7, I7n, Ir7n, and I3

Table 1 Ultra-high I/O ECS features

Ir7

  Compute:
  • vCPU to memory ratio: 1:4
  • Number of vCPUs: 2 to 64
  • 3rd Generation Intel® Xeon® Platinum Scalable Processor
  • Basic/Turbo frequency: 3.0 GHz/3.5 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD
  • General Purpose SSD V2

  Network:
  • Ultra-high packets per second (PPS) throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 6,000,000
  • Maximum intranet bandwidth: 40 Gbit/s

I7

  Compute:
  • vCPU to memory ratio: 1:4
  • Number of vCPUs: 8 to 96
  • 3rd Generation Intel® Xeon® Platinum Scalable Processor
  • Basic/Turbo frequency: 3.0 GHz/3.5 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD
  • General Purpose SSD V2

  Network:
  • Ultra-high PPS throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 8,000,000
  • Maximum intranet bandwidth: 40 Gbit/s

aI7

  Compute:
  • vCPU to memory ratio: 1:8
  • Number of vCPUs: 8 to 96
  • Basic/Turbo frequency: 2.45 GHz/3.5 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD
  • General Purpose SSD V2

  Network:
  • Ultra-high PPS throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 8,000,000
  • Maximum intranet bandwidth: 40 Gbit/s

Ir7n

  Compute:
  • vCPU to memory ratio: 1:4
  • Number of vCPUs: 2 to 64
  • 3rd Generation Intel® Xeon® Scalable Processor
  • Basic/Turbo frequency: 2.6 GHz/3.5 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD

  Network:
  • Ultra-high PPS throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 6,000,000
  • Maximum intranet bandwidth: 40 Gbit/s

I7n

  Compute:
  • vCPU to memory ratio: 1:4
  • Number of vCPUs: 8 to 96
  • 3rd Generation Intel® Xeon® Scalable Processor
  • Basic/Turbo frequency: 2.6 GHz/3.5 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD
  • General Purpose SSD V2

  Network:
  • Ultra-high PPS throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 8,000,000
  • Maximum intranet bandwidth: 40 Gbit/s

Ir3

  Compute:
  • vCPU to memory ratio: 1:4
  • Number of vCPUs: 2 to 32
  • 2nd Generation Intel® Xeon® Scalable Processor
  • Basic/Turbo frequency: 2.6 GHz/3.5 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD
  • General Purpose SSD V2

  Network:
  • Ultra-high PPS throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 4,500,000
  • Maximum intranet bandwidth: 30 Gbit/s

I3

  Compute:
  • vCPU to memory ratio: 1:8
  • Number of vCPUs: 8 to 64
  • Intel® Xeon® Scalable Processor
  • Basic/Turbo frequency: 3.0 GHz/3.4 GHz

  Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD
  • General Purpose SSD V2

  Network:
  • Ultra-high PPS throughput
  • An ECS with higher specifications has better network performance.
  • Maximum PPS: 5,000,000
  • Maximum intranet bandwidth: 25 Gbit/s

Ultra-high I/O Ir7

Overview

Each Ir7 ECS uses the third-generation Intel® Xeon® Scalable processor and two small-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB)
  • Elasticsearch

Specifications

Table 2 Ir7 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|
| ir7.large.4 | 2 | 8 | 3/0.8 | 40 | 2 | 3 | 2 × 50 | KVM |
| ir7.xlarge.4 | 4 | 16 | 6/1.5 | 80 | 2 | 3 | 2 × 100 | KVM |
| ir7.2xlarge.4 | 8 | 32 | 15/3.1 | 150 | 4 | 4 | 2 × 200 | KVM |
| ir7.4xlarge.4 | 16 | 64 | 20/6.2 | 300 | 4 | 6 | 2 × 400 | KVM |
| ir7.8xlarge.4 | 32 | 128 | 30/12 | 400 | 8 | 8 | 2 × 800 | KVM |
| ir7.16xlarge.4 | 64 | 256 | 40/25 | 600 | 16 | 8 | 2 × 1,600 | KVM |

Ultra-high I/O I7

Overview

Each I7 ECS uses the third-generation Intel® Xeon® Scalable processor and large-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch

Specifications

Table 3 I7 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| i7.2xlarge.4 | 8 | 32 | 10/3 | 120 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7.4xlarge.4 | 16 | 64 | 15/6 | 200 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7.8xlarge.4 | 32 | 128 | 25/12 | 400 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7.12xlarge.4 | 48 | 192 | 30/18 | 500 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7.16xlarge.4 | 64 | 256 | 35/24 | 600 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7.24xlarge.4 | 96 | 384 | 44/36 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |

Ultra-high I/O aI7

Overview

aI7 ECSs use the next-generation scalable processor and large-capacity high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch

Specifications

Table 4 aI7 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ai7.2xlarge.8 | 8 | 64 | 4/2.5 | 100 | 8 | 8 | 64 | 1 × 1,600 GiB NVMe | KVM |
| ai7.4xlarge.8 | 16 | 128 | 8/5 | 200 | 16 | 8 | 128 | 2 × 1,600 GiB NVMe | KVM |
| ai7.8xlarge.8 | 32 | 256 | 15/8 | 300 | 16 | 8 | 256 | 4 × 1,600 GiB NVMe | KVM |
| ai7.12xlarge.8 | 48 | 384 | 22/12 | 400 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| ai7.16xlarge.8 | 64 | 512 | 28/16 | 550 | 24 | 12 | 256 | 8 × 1,600 GiB NVMe | KVM |
| ai7.24xlarge.8 | 96 | 768 | 40/25 | 800 | 24 | 12 | 256 | 12 × 1,600 GiB NVMe | KVM |

Ultra-high I/O Ir7n

Overview

Ir7n ECSs use the 3rd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance, 25GE high-speed intelligent NICs to support ultra-high network bandwidth and PPS, and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB)
  • Elasticsearch

Specifications

Table 5 Ir7n ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ir7n.large.4 | 2 | 8 | 3/0.9 | 40 | 2 | 3 | 32 | 2 × 50 | KVM |
| ir7n.xlarge.4 | 4 | 16 | 6/1.8 | 80 | 2 | 3 | 32 | 2 × 100 | KVM |
| ir7n.2xlarge.4 | 8 | 32 | 15/3.6 | 150 | 4 | 4 | 64 | 2 × 200 | KVM |
| ir7n.4xlarge.4 | 16 | 64 | 20/7.3 | 300 | 4 | 6 | 96 | 2 × 400 | KVM |
| ir7n.8xlarge.4 | 32 | 128 | 30/14.5 | 400 | 8 | 8 | 192 | 2 × 800 | KVM |
| ir7n.16xlarge.4 | 64 | 256 | 40/29 | 600 | 16 | 8 | 256 | 2 × 1,600 | KVM |

Ultra-high I/O I7n

Overview

I7n ECSs use 3rd Generation Intel® Xeon® Scalable processors and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch

Specifications

Table 6 I7n ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| i7n.2xlarge.4 | 8 | 32 | 10/3.4 | 120 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7n.4xlarge.4 | 16 | 64 | 15/6.7 | 200 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7n.8xlarge.4 | 32 | 128 | 25/13.5 | 400 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7n.12xlarge.4 | 48 | 192 | 30/20 | 500 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7n.16xlarge.4 | 64 | 256 | 35/27 | 600 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7n.24xlarge.4 | 96 | 420 | 44/20 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |

Ultra-high I/O Ir3

Overview

Ir3 ECSs use 2nd Generation Intel® Xeon® Scalable processors to offer powerful and stable computing performance, 25GE high-speed intelligent NICs to support ultra-high network bandwidth and PPS, and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB)
  • Elasticsearch

Specifications

Table 7 Ir3 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Local Disks (GiB) | Max. NICs | Virtualization |
|---|---|---|---|---|---|---|---|---|
| ir3.large.4 | 2 | 8 | 4/1.2 | 40 | 2 | 2 × 50 | 2 | KVM |
| ir3.xlarge.4 | 4 | 16 | 8/2.4 | 80 | 2 | 2 × 100 | 3 | KVM |
| ir3.2xlarge.4 | 8 | 32 | 15/4.5 | 140 | 4 | 2 × 200 | 4 | KVM |
| ir3.4xlarge.4 | 16 | 64 | 20/9 | 250 | 8 | 2 × 400 | 8 | KVM |
| ir3.8xlarge.4 | 32 | 128 | 30/18 | 450 | 16 | 2 × 800 | 8 | KVM |

Ultra-high I/O I3

Overview

I3 ECSs use Intel® Xeon® Scalable processors and high-performance local NVMe SSDs to provide high storage IOPS and low read/write latency.

Notes

For details, see Notes.

Scenarios

  • High-performance relational databases
  • NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch

Specifications

Table 8 I3 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Local Disks | Max. NICs | Virtualization |
|---|---|---|---|---|---|---|---|---|
| i3.2xlarge.8 | 8 | 64 | 2.5/2.5 | 100 | 4 | 1 × 1,600 GiB NVMe | 4 | KVM |
| i3.4xlarge.8 | 16 | 128 | 5/5 | 150 | 4 | 2 × 1,600 GiB NVMe | 8 | KVM |
| i3.8xlarge.8 | 32 | 256 | 10/10 | 200 | 8 | 4 × 1,600 GiB NVMe | 8 | KVM |
| i3.12xlarge.8 | 48 | 384 | 15/15 | 240 | 8 | 6 × 1,600 GiB NVMe | 8 | KVM |
| i3.15xlarge.8 | 60 | 512 | 25/25 | 500 | 16 | 7 × 1,600 GiB NVMe | 8 | KVM |
| i3.16xlarge.8 | 64 | 512 | 25/25 | 500 | 16 | 8 × 1,600 GiB NVMe | 8 | KVM |

Scenarios

  • Ultra-high I/O ECSs are suitable for high-performance relational databases.
  • Ultra-high I/O ECSs are suitable for NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch.

Features

Table 9 lists the IOPS performance of local disks attached to an Ir7 ECS.

Table 9 IOPS performance of local disks used by Ir7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir7.large.4 | 28,125 |
| ir7.xlarge.4 | 56,250 |
| ir7.2xlarge.4 | 112,500 |
| ir7.4xlarge.4 | 225,000 |
| ir7.8xlarge.4 | 450,000 |
| ir7.16xlarge.4 | 900,000 |
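The Ir7 figures in Table 9 scale linearly with flavor size: each pair of vCPUs adds 28,125 random-read IOPS. The following sketch is illustrative only, derived from the table above rather than an official sizing formula:

```python
# Illustrative: Ir7 local-disk read IOPS from Table 9 scale linearly
# with vCPU count (28,125 IOPS per 2 vCPUs).
IOPS_PER_TWO_VCPUS = 28_125

def ir7_max_read_iops(vcpus: int) -> int:
    """Estimated maximum random 4 KB read IOPS for an Ir7 flavor."""
    return IOPS_PER_TWO_VCPUS * vcpus // 2

# Cross-check against every row of Table 9.
table9 = {2: 28_125, 4: 56_250, 8: 112_500, 16: 225_000, 32: 450_000, 64: 900_000}
for vcpus, iops in table9.items():
    assert ir7_max_read_iops(vcpus) == iops
```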

Table 10 and Table 11 list the IOPS performance of local disks and specifications of a single local disk attached to an I7 ECS.

Table 10 IOPS performance of local disks used by I7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i7.2xlarge.4 | 900,000 |
| i7.4xlarge.4 | 1,800,000 |
| i7.8xlarge.4 | 3,600,000 |
| i7.12xlarge.4 | 5,400,000 |
| i7.16xlarge.4 | 7,200,000 |
| i7.24xlarge.4 | 10,800,000 |

Table 11 Specifications of a single local disk attached to an I7 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 250,000 |
| Read throughput | 6.2 GiB/s |
| Write throughput | 2.1 GiB/s |
| Access latency | Within microseconds |
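The per-flavor maxima in Table 10 are simply the per-disk read IOPS from Table 11 multiplied by the local disk count for each flavor. A small sketch of that relationship, derived from the tables above (illustrative, not an official formula):

```python
# Illustrative: aggregate I7 read IOPS = per-disk IOPS x number of local disks.
PER_DISK_READ_IOPS = 900_000  # from Table 11

# Local disk counts per flavor, from the I7 specifications table.
disk_count = {"i7.2xlarge.4": 1, "i7.4xlarge.4": 2, "i7.8xlarge.4": 4,
              "i7.12xlarge.4": 6, "i7.16xlarge.4": 8, "i7.24xlarge.4": 12}

# Per-flavor maxima from Table 10.
table10 = {"i7.2xlarge.4": 900_000, "i7.4xlarge.4": 1_800_000,
           "i7.8xlarge.4": 3_600_000, "i7.12xlarge.4": 5_400_000,
           "i7.16xlarge.4": 7_200_000, "i7.24xlarge.4": 10_800_000}

for flavor, disks in disk_count.items():
    assert PER_DISK_READ_IOPS * disks == table10[flavor]
```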

Table 12 and Table 13 list the IOPS performance of local disks and specifications of a single local disk attached to an aI7 ECS.

Table 12 IOPS performance of local disks used by aI7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ai7.2xlarge.8 | 900,000 |
| ai7.16xlarge.8 | 7,200,000 |
| ai7.24xlarge.8 | 10,800,000 |

Table 13 Specifications of a single local disk attached to an aI7 ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 200,000 |
| Read throughput | 6.6 GiB/s |
| Write throughput | 2 GiB/s |
| Access latency | Within microseconds |

Table 14 lists the IOPS performance of local disks attached to an Ir7n ECS.

Table 14 IOPS performance of local disks used by Ir7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir7n.large.4 | 28,125 |
| ir7n.xlarge.4 | 56,250 |
| ir7n.2xlarge.4 | 112,500 |
| ir7n.4xlarge.4 | 225,000 |
| ir7n.8xlarge.4 | 450,000 |
| ir7n.16xlarge.4 | 900,000 |

Table 15 and Table 16 list the IOPS performance of local disks and specifications of a single local disk attached to an I7n ECS.

Table 15 IOPS performance of local disks used by I7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i7n.2xlarge.4 | 900,000 |
| i7n.8xlarge.4 | 3,600,000 |
| i7n.12xlarge.4 | 5,400,000 |
| i7n.16xlarge.4 | 7,200,000 |
| i7n.24xlarge.4 | 10,800,000 |

Table 16 Specifications of a single local disk attached to an I7n ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 250,000 |
| Read throughput | 6.2 GiB/s |
| Write throughput | 2.1 GiB/s |
| Access latency | Within microseconds |

Table 17 lists the IOPS performance of local disks attached to an Ir3 ECS.

Table 17 IOPS performance of local disks used by Ir3 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir3.large.4 | 25,000 |
| ir3.xlarge.4 | 50,000 |
| ir3.2xlarge.4 | 100,000 |
| ir3.4xlarge.4 | 200,000 |
| ir3.8xlarge.4 | 400,000 |

Table 18 and Table 19 list the IOPS performance of local disks and specifications of a single local disk attached to an I3 ECS.

Table 18 IOPS performance of local disks used by I3 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i3.2xlarge.8 | 750,000 |
| i3.4xlarge.8 | 1,500,000 |
| i3.8xlarge.8 | 3,000,000 |
| i3.12xlarge.8 | 4,500,000 |
| i3.15xlarge.8 | 5,250,000 |
| i3.16xlarge.8 | 6,000,000 |

Table 19 Specifications of a single I3 local disk

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 750,000 |
| IOPS for random 4 KB write | 200,000 |
| Read throughput | 2.9 GiB/s |
| Write throughput | 1.9 GiB/s |
| Access latency | Within microseconds |

Notes

  • For details about the OSs supported by an ultra-high I/O ECS, see OSs Supported by Different Types of ECSs.
  • If the host where an ultra-high I/O ECS is deployed is faulty, the ECS cannot be restored through live migration.
    • If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
    • In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
  • Ultra-high I/O ECSs do not support specifications change.
  • Ultra-high I/O ECSs do not support local disk snapshots or backups.
  • Ultra-high I/O ECSs can use local disks, and can also have EVS disks attached to provide a larger storage size. Note the following when using the two types of storage media:
    • Only an EVS disk, not a local disk, can be used as the system disk of an ultra-high I/O ECS.
    • Both EVS disks and local disks can be used as data disks of an ultra-high I/O ECS.
    • An ultra-high I/O ECS can have a maximum of 60 attached disks (including VBD, SCSI, and local disks).
  • Modify the fstab file to set automatic disk mounting at ECS start. For details, see Setting Automatic Mounting at System Start.
  • The local disk data of an ultra-high I/O ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your application does not provide a data reliability architecture, it is a good practice to use EVS disks to build your ECS.
  • When an ultra-high I/O ECS is deleted, the data on local NVMe SSDs will also be automatically deleted, which can take some time. As a result, an ultra-high I/O ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
  • The data reliability of local disks depends on the reliability of physical servers and hard disks, which are SPOF-prone. It is a good practice to use data redundancy mechanisms at the application layer to ensure data availability. Use EVS disks to store service data that needs to be stored for a long time.
  • The device name of a local disk attached to an ultra-high I/O ECS is /dev/nvme0n1 or /dev/nvme0n2.
  • Local disks attached to Ir3 ECSs can be split for multiple ECSs to use. If a local disk is damaged, the ECSs that use this disk will be affected.

    You are advised to add Ir3 ECSs to an ECS group during the creation process to prevent such failures. For details, see Managing ECS Groups.

  • The basic resources, including vCPUs, memory, and image of an ultra-high I/O ECS will continue to be billed after the ECS is stopped. To stop the ECS from being billed, delete it and its associated resources.
  • The %util parameter of a local disk indicates the percentage of CPU time during which I/O requests were issued to the device. For serial devices this is a good indication of how busy the device is, but for disks that process requests in parallel, such as NVMe SSD local disks, %util does not indicate how busy they really are.
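For the fstab note above, a typical entry for a local NVMe data disk might look like the following. This is an illustrative fragment: the UUID is the one from the example in this document, the mount point and filesystem are assumptions, and the `nofail` option is a suggested addition so the ECS can still boot if the local disk is absent after a replacement:

```shell
# Illustrative /etc/fstab entry for a local NVMe data disk.
# "nofail" lets the OS boot even if the disk is missing after replacement.
UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc  /mnt/nvme0  ext4  defaults,nofail  0  0
```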

Handling Damaged Local Disks Attached to an ECS of I Series

If a local disk attached to an ECS is damaged, perform the following operations to handle this issue:

For a Linux ECS:

  1. Detach the faulty local disk.
    1. Run the following command to query the mount point of the faulty disk:

      df -Th

      Figure 1 Querying the mount point
    2. Run the following command to detach the faulty local disk:

      umount Mount point

      In the example shown in Figure 1, the mount point of /dev/nvme0n1 is /mnt/nvme0. Run the following command:

      umount /mnt/nvme0

  2. Check whether the mount point of the faulty disk is configured in /etc/fstab of the ECS. If yes, comment out the mount point to prevent the ECS from entering the maintenance mode upon ECS startup after the faulty disk is replaced.
    1. Run the following command to obtain the partition UUID:

      blkid Disk partition

      In this example, run the following command to obtain the UUID of the /dev/nvme0n1 partition:

      blkid /dev/nvme0n1

      Information similar to the following is displayed:

      /dev/nvme0n1: UUID="b9a07b7b-9322-4e05-ab9b-14b8050cd8cc" TYPE="ext4"
    2. Run the following command to check whether /etc/fstab contains the automatic mounting information about the disk partition:

      cat /etc/fstab

      Information similar to the following is displayed:

      UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc    /mnt   ext4    defaults        0 0
    3. If the mounting information exists, perform the following steps to delete it.
      1. Run the following command to edit /etc/fstab:

        vi /etc/fstab

        Use the UUID obtained in 2.a to check whether the mounting information of the local disk is contained in /etc/fstab. If yes, comment out the information. This prevents the ECS from entering the maintenance mode upon ECS startup after the local disk is replaced.

      2. Press i to enter editing mode.
      3. Delete or comment out the automatic mounting information of the disk partition.

        For example, add a pound sign (#) at the beginning of the following command line to comment out the automatic mounting information:

        # UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc    /mnt   ext4    defaults        0 0
      4. Press Esc to exit editing mode. Enter :wq and press Enter to save the settings and exit.
  3. Run the following command to obtain the serial number of the local disk:

    ll /dev/disk/by-id/

    For example, if the nvme0n1 disk is faulty, obtain the serial number of the nvme0n1 disk.

    Figure 2 Querying the serial number of the faulty local disk
  4. Stop the ECS and provide the serial number of the faulty disk to technical support personnel to replace the local disk.

    After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.
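The fstab edit in step 2 can also be scripted instead of done interactively in vi. The following is a minimal sketch that assumes the partition UUID has already been obtained with blkid; the helper name is hypothetical, and it operates on fstab text rather than writing the file:

```python
# Sketch: comment out the /etc/fstab line that auto-mounts a given UUID,
# so the ECS does not enter maintenance mode on startup after the faulty
# local disk is replaced.
def comment_out_uuid(fstab_text: str, uuid: str) -> str:
    out = []
    for line in fstab_text.splitlines():
        # Disable only active (not already commented) entries for this UUID.
        if f"UUID={uuid}" in line and not line.lstrip().startswith("#"):
            out.append("# " + line)
        else:
            out.append(line)
    return "\n".join(out)

fstab = "UUID=b9a07b7b-9322-4e05-ab9b-14b8050cd8cc    /mnt   ext4    defaults        0 0"
print(comment_out_uuid(fstab, "b9a07b7b-9322-4e05-ab9b-14b8050cd8cc"))
```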

For a Windows ECS:

  1. Open Computer Management, choose Computer Management (Local) > Storage > Disk Management, and view the disk ID, for example, Disk 1.
  2. Open Windows PowerShell as an administrator and run the following command to query the disk on which the logical disk is created:

    Get-CimInstance -ClassName Win32_LogicalDiskToPartition | Select-Object Antecedent, Dependent | Format-List

    Figure 3 Querying the disk on which the logical disk is created
  3. Run the following command to obtain the serial number of the faulty disk according to the mapping between the disk ID and serial number:

    Get-Disk | select Number, SerialNumber

    Figure 4 Querying the mapping between the disk ID and serial number

    If the serial number cannot be obtained by running the preceding command, see Using a Serial Number to Obtain the Disk Name (Windows).

  4. Stop the ECS and provide the serial number of the faulty disk to technical support personnel to replace the local disk.

    After the local disk is replaced, restart the ECS to synchronize the new local disk information to the virtualization layer.