Updated on 2024-11-27 GMT+08:00

Ultra-high I/O ECSs

Overview

Ultra-high I/O ECSs use high-performance local NVMe SSDs to provide high storage input/output operations per second (IOPS) and low read/write latency. You can create such ECSs, with local NVMe SSDs attached, on the management console.

Scenarios

  • Ultra-high I/O ECSs are suitable for high-performance relational databases.
  • Ultra-high I/O ECSs are suitable for NoSQL databases (such as Cassandra and MongoDB) and Elasticsearch.

Specifications

Table 1 Ir7 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ir7.large.4 | 2 | 8 | 3/0.8 | 40 | 2 | 3 | 32 | 2 × 50 | KVM |
| ir7.xlarge.4 | 4 | 16 | 6/1.5 | 80 | 2 | 3 | 32 | 2 × 100 | KVM |
| ir7.2xlarge.4 | 8 | 32 | 15/3.1 | 150 | 4 | 4 | 64 | 2 × 200 | KVM |
| ir7.4xlarge.4 | 16 | 64 | 20/6.2 | 300 | 4 | 6 | 96 | 2 × 400 | KVM |
| ir7.8xlarge.4 | 32 | 128 | 30/12 | 400 | 8 | 8 | 192 | 2 × 800 | KVM |
| ir7.16xlarge.4 | 64 | 256 | 40/25 | 600 | 16 | 8 | 256 | 2 × 1,600 | KVM |

Table 2 Ir7n ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| ir7n.large.4 | 2 | 8 | 3/0.9 | 40 | 2 | 3 | 32 | 2 × 50 | KVM |
| ir7n.xlarge.4 | 4 | 16 | 6/1.8 | 80 | 2 | 3 | 32 | 2 × 100 | KVM |
| ir7n.2xlarge.4 | 8 | 32 | 15/3.6 | 150 | 4 | 4 | 64 | 2 × 200 | KVM |
| ir7n.4xlarge.4 | 16 | 64 | 20/7.3 | 300 | 4 | 6 | 96 | 2 × 400 | KVM |
| ir7n.8xlarge.4 | 32 | 128 | 30/14.5 | 400 | 8 | 8 | 192 | 2 × 800 | KVM |
| ir7n.16xlarge.4 | 64 | 256 | 40/29 | 600 | 16 | 8 | 256 | 2 × 1,600 | KVM |

Table 3 I7n ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Max. Supplementary NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| i7n.2xlarge.4 | 8 | 32 | 10/3.4 | 120 | 4 | 4 | 64 | 1 × 1,600 GiB NVMe | KVM |
| i7n.4xlarge.4 | 16 | 64 | 15/6.7 | 200 | 4 | 6 | 96 | 2 × 1,600 GiB NVMe | KVM |
| i7n.8xlarge.4 | 32 | 128 | 25/13.5 | 400 | 8 | 8 | 192 | 4 × 1,600 GiB NVMe | KVM |
| i7n.12xlarge.4 | 48 | 192 | 30/20 | 500 | 16 | 8 | 256 | 6 × 1,600 GiB NVMe | KVM |
| i7n.16xlarge.4 | 64 | 256 | 35/27 | 600 | 16 | 8 | 256 | 8 × 1,600 GiB NVMe | KVM |
| i7n.24xlarge.4 | 96 | 384 | 44/20 | 800 | 32 | 8 | 256 | 12 × 1,600 GiB NVMe | KVM |

Table 4 I3 ECS specifications

| Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|
| i3.2xlarge.8 | 8 | 64 | 2.5/2.5 | 100 | 4 | 4 | 1 × 1,600 GiB NVMe | KVM |
| i3.4xlarge.8 | 16 | 128 | 5/5 | 150 | 4 | 8 | 2 × 1,600 GiB NVMe | KVM |
| i3.8xlarge.8 | 32 | 256 | 10/10 | 200 | 8 | 8 | 4 × 1,600 GiB NVMe | KVM |
| i3.12xlarge.8 | 48 | 384 | 15/15 | 240 | 8 | 8 | 6 × 1,600 GiB NVMe | KVM |
| i3.15xlarge.8 | 60 | 512 | 25/25 | 500 | 16 | 8 | 7 × 1,600 GiB NVMe | KVM |
| i3.16xlarge.8 | 64 | 512 | 25/25 | 500 | 16 | 8 | 8 × 1,600 GiB NVMe | KVM |
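
From inside a Linux ECS, you can confirm that the local disks listed above are present. A minimal check, assuming a Linux guest with the standard util-linux tools (the example flavor and device names are illustrative):

```bash
# List block devices without partitions; local NVMe disks appear as
# /dev/nvme0n1, /dev/nvme0n2, and so on.
lsblk -d -o NAME,SIZE,TYPE,MODEL

# On an i3.4xlarge.8, for example, you would expect two ~1.6 TB NVMe
# disks in addition to the EVS system disk (typically vda or sda).
```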

Local Disk Performance

Table 5 lists the IOPS performance of local disks attached to an Ir7 ECS.

Table 5 IOPS performance of local disks used by Ir7 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir7.large.4 | 28,125 |
| ir7.xlarge.4 | 56,250 |
| ir7.2xlarge.4 | 112,500 |
| ir7.4xlarge.4 | 225,000 |
| ir7.8xlarge.4 | 450,000 |
| ir7.16xlarge.4 | 900,000 |

Table 6 lists the IOPS performance of local disks attached to an Ir7n ECS.

Table 6 IOPS performance of local disks used by Ir7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| ir7n.large.4 | 28,125 |
| ir7n.xlarge.4 | 56,250 |
| ir7n.2xlarge.4 | 112,500 |
| ir7n.4xlarge.4 | 225,000 |
| ir7n.8xlarge.4 | 450,000 |
| ir7n.16xlarge.4 | 900,000 |

Table 7 and Table 8 list the IOPS performance of local disks and specifications of a single local disk attached to an I7n ECS.

Table 7 IOPS performance of local disks used by I7n ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i7n.2xlarge.4 | 900,000 |
| i7n.4xlarge.4 | 1,800,000 |
| i7n.8xlarge.4 | 3,600,000 |
| i7n.12xlarge.4 | 5,400,000 |
| i7n.16xlarge.4 | 7,200,000 |
| i7n.24xlarge.4 | 10,800,000 |

Table 8 Specifications of a single local disk attached to an I7n ECS

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 900,000 |
| IOPS for random 4 KB write | 250,000 |
| Read throughput | 6.2 GiB/s |
| Write throughput | 2.1 GiB/s |
| Access latency | Within microseconds |

Table 9 and Table 10 list the IOPS performance of local disks and specifications of a single local disk attached to an I3 ECS.

Table 9 IOPS performance of local disks used by I3 ECSs

| Flavor | Maximum IOPS for Random 4 KB Read |
|---|---|
| i3.2xlarge.8 | 750,000 |
| i3.4xlarge.8 | 1,500,000 |
| i3.8xlarge.8 | 3,000,000 |
| i3.12xlarge.8 | 4,500,000 |
| i3.15xlarge.8 | 5,250,000 |
| i3.16xlarge.8 | 6,000,000 |

Table 10 Specifications of a single I3 local disk

| Metric | Performance |
|---|---|
| Disk capacity | 1.6 TB |
| IOPS for random 4 KB read | 750,000 |
| IOPS for random 4 KB write | 200,000 |
| Read throughput | 2.9 GiB/s |
| Write throughput | 1.9 GiB/s |
| Access latency | Within microseconds |
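
The IOPS figures above are for 4 KB random reads. If you want to verify them on your own ECS, the following fio sketch is one way to do it (fio may need to be installed first; the target device, queue depth, and job count are assumptions you should tune to your flavor):

```bash
# Random 4 KB read test against a local NVMe disk. This job only reads,
# but double-check the device name before running any fio job against a
# raw device.
fio --name=randread-4k --filename=/dev/nvme0n1 --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --numjobs=8 \
    --runtime=60 --time_based --group_reporting
```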

Notes

  • Ultra-high I/O ECSs support the following OSs:
    • EulerOS 2.2
    • CentOS 7.2
    • CentOS 7.3
    • Ubuntu Server 16.04
    • SUSE Linux Enterprise Server 12 SP2
    • Fedora 25 (64-bit)
    • openSUSE 42.2 (64-bit)

    EulerOS 2.2 and Ubuntu Server 16.04 are recommended.

  • If the host where an ultra-high I/O ECS is deployed is faulty, the ECS cannot be restored through live migration.
    • If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
    • In case of system maintenance or hardware faults, the ECS will be redeployed (to ensure HA) and cold migrated to another host. The local disk data of the ECS will not be retained.
  • Ultra-high I/O ECSs do not support specification changes.
  • Ultra-high I/O ECSs do not support local disk snapshots or backups.
  • Ultra-high I/O ECSs can use local disks, and can also have EVS disks attached to provide a larger storage size. Note the following when using the two types of storage media:
    • Only an EVS disk, not a local disk, can be used as the system disk of an ultra-high I/O ECS.
    • Both EVS disks and local disks can be used as data disks of an ultra-high I/O ECS.
    • An ultra-high I/O ECS can have a maximum of 60 attached disks (including VBD, SCSI, and local disks).
  • Modify the fstab file to set automatic disk mounting at ECS start, as shown in the sketch after this list. For details, see "Configuring Automatic Mounting at System Start" in the Elastic Cloud Server User Guide.
  • The local disk data of an ultra-high I/O ECS may be lost if an exception occurs, such as a physical server breakdown or local disk damage. If your application cannot ensure data reliability itself, it is a good practice to use EVS disks to build your ECS.
  • When an ultra-high I/O ECS is deleted, the data on local NVMe SSDs will also be automatically deleted, which can take some time. As a result, an ultra-high I/O ECS takes a longer time than other ECSs to be deleted. Back up the data before deleting such an ECS.
  • The data reliability of local disks depends on the reliability of physical servers and hard disks, which are SPOF-prone. It is a good practice to use data redundancy mechanisms at the application layer to ensure data availability. Use EVS disks to store service data that needs to be stored for a long time.
  • The device name of a local disk attached to an ultra-high I/O ECS is /dev/nvme0n1 or /dev/nvme0n2.
  • Local disks attached to Ir7 and Ir7n ECSs can be split for multiple ECSs to use. If a local disk is damaged, the ECSs that use this disk will be affected.

    You are advised to add Ir7 and Ir7n ECSs to an ECS group during the creation process to prevent such failures. For details, see "Managing ECS Groups" in the Elastic Cloud Server User Guide.

  • The basic resources of an ultra-high I/O ECS, including vCPUs, memory, and image, will continue to be billed after the ECS is stopped. To stop the ECS from being billed, delete it and its associated resources.
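
As referenced in the fstab note above, the following is a minimal sketch of initializing a local NVMe data disk and mounting it automatically at startup. The device name, filesystem, and mount point are illustrative; the nofail option keeps the ECS bootable if the local disk is unavailable, for example after a cold migration that does not retain local disk data:

```bash
# Create a filesystem on the local disk (this erases any existing data).
mkfs.ext4 /dev/nvme0n1

# Create a mount point and look up the filesystem UUID for a stable
# fstab entry.
mkdir -p /mnt/local-nvme
blkid /dev/nvme0n1

# Example /etc/fstab line (replace the UUID with the blkid output):
# UUID=<uuid-from-blkid>  /mnt/local-nvme  ext4  defaults,nofail  0  2

# Verify that the new entry mounts cleanly.
mount -a
```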