GPU-accelerated ECSs
GPU-accelerated ECSs provide outstanding floating-point computing capabilities. They are suitable for applications that require real-time, highly concurrent massive computing.
GPU-accelerated ECSs
- P series
| Type | Series | GPU | CUDA Cores per GPU | Single-GPU Performance | Application |
|---|---|---|---|---|---|
| Computing-accelerated | P2s | NVIDIA V100 | 5,120 | 14 TFLOPS FP32 / 7 TFLOPS FP64 | AI deep learning training, scientific computing, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, and genomics |
| Inference-accelerated | Pi2 | NVIDIA T4 (GPU passthrough) | 2,560 | 8.1 TFLOPS FP32 / 130 TOPS INT8 | Machine learning, deep learning, inference, training, scientific computing, seismic analysis, computational finance, rendering, multimedia encoding and decoding |
| Inference-accelerated | Pi2nl | NVIDIA T4 (GPU passthrough) | 2,560 | 8.1 TFLOPS FP32 / 130 TOPS INT8 | Machine learning, deep learning, inference, training, scientific computing, seismic analysis, computational finance, rendering, multimedia encoding and decoding |
Images Supported by GPU-accelerated ECSs
| Type | Series | Supported Image |
|---|---|---|
| Computing-accelerated | P3 | |
| Computing-accelerated | P2s | |
| Inference-accelerated | Pi2 | |
| Inference-accelerated | Pi2nl | |
Computing-accelerated P3
Overview
P3 ECSs use NVIDIA A100 GPUs and provide flexible, ultra-high-performance computing. P3 ECSs have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics. Theoretically, a single GPU delivers 19.5 TFLOPS of FP32 performance and 156 TFLOPS of TF32 Tensor Core performance (312 TFLOPS with sparsity enabled).
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| p3.2xlarge.8 | 8 | 64 | 10/4 | 100 | 4 | 4 | 1 × NVIDIA A100 80 GB | 80 | KVM |
| p3.4xlarge.8 | 16 | 128 | 15/8 | 200 | 8 | 8 | 2 × NVIDIA A100 80 GB | 160 | KVM |
| p3.8xlarge.8 | 32 | 256 | 25/15 | 350 | 16 | 8 | 4 × NVIDIA A100 80 GB | 320 | KVM |
| p3.16xlarge.8 | 64 | 512 | 36/30 | 700 | 32 | 8 | 8 × NVIDIA A100 80 GB | 640 | KVM |
P3 ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6248R processors with a base frequency of 3.0 GHz
- Up to eight NVIDIA A100 GPUs on an ECS
- NVIDIA CUDA parallel computing and common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
- 19.5 TFLOPS of single-precision computing and 9.7 TFLOPS of double-precision computing on a single GPU
- NVIDIA Tensor Cores with 156 TFLOPS of TF32 performance for deep learning
- Up to 40 Gbit/s of network bandwidth on a single ECS
- 80 GB of HBM2 GPU memory per GPU, with a bandwidth of 1,935 GB/s
- Comprehensive basic capabilities
- User-defined network with flexible subnet division and network access policy configuration
- Mass storage, elastic expansion, and backup and restoration
- Elastic scaling
- Flexibility
Similar to other types of ECSs, P3 ECSs can be provisioned in a few minutes.
- Excellent supercomputing ecosystem
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P3 ECSs.
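The theoretical aggregate FP32 throughput of each flavor follows directly from the per-GPU figure above (19.5 TFLOPS per A100) and the GPU counts in the specifications table. A minimal sketch, assuming linear scaling across GPUs:

```python
# Theoretical aggregate FP32 throughput per P3 flavor, assuming
# 19.5 TFLOPS per NVIDIA A100 GPU (the figure from the overview above).
FP32_TFLOPS_PER_A100 = 19.5

P3_GPU_COUNT = {
    "p3.2xlarge.8": 1,
    "p3.4xlarge.8": 2,
    "p3.8xlarge.8": 4,
    "p3.16xlarge.8": 8,
}

def aggregate_fp32_tflops(flavor: str) -> float:
    """Return the theoretical aggregate FP32 TFLOPS for a P3 flavor."""
    return P3_GPU_COUNT[flavor] * FP32_TFLOPS_PER_A100

for flavor in P3_GPU_COUNT:
    print(f"{flavor}: {aggregate_fp32_tflops(flavor)} TFLOPS")
```

Real workloads rarely scale perfectly linearly across GPUs, so treat these numbers as upper bounds.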
Supported Software
P3 ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If your software requires GPU CUDA support, use P3 ECSs. P3 ECSs support the following commonly used software:
- Common deep learning frameworks, such as TensorFlow, Spark, PyTorch, MXNet, and Caffe
- CUDA GPU rendering supported by RedShift for Autodesk 3ds Max and V-Ray for 3ds Max
- Agisoft PhotoScan
- MapD
- More than 2,000 GPU-accelerated applications such as Amber, NAMD, and VASP
Notes
- After a P3 ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is billed based on the disk capacity. If other products, such as EVS disks, EIP, and bandwidth are associated with the ECS, these products are billed separately.
Resources will be released after a P3 ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop the ECS.
- If a P3 ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing capabilities, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.
Computing-accelerated P2s
Overview
P2s ECSs use NVIDIA Tesla V100 GPUs to provide flexibility, high-performance computing, and cost-effectiveness. P2s ECSs provide outstanding general computing capabilities and have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Connection | GPU Memory (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| p2s.2xlarge.8 | 8 | 64 | 10/4 | 50 | 4 | 4 | 1 × V100 | PCIe Gen3 | 1 × 32 GiB | KVM |
| p2s.4xlarge.8 | 16 | 128 | 15/8 | 100 | 8 | 8 | 2 × V100 | PCIe Gen3 | 2 × 32 GiB | KVM |
| p2s.8xlarge.8 | 32 | 256 | 25/15 | 200 | 16 | 8 | 4 × V100 | PCIe Gen3 | 4 × 32 GiB | KVM |
| p2s.16xlarge.8 | 64 | 512 | 30/30 | 400 | 32 | 8 | 8 × V100 | PCIe Gen3 | 8 × 32 GiB | KVM |
P2s ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6278 processors (base frequency of 2.6 GHz, turbo frequency of 3.5 GHz) or Intel® Xeon® Scalable 6151 processors (base frequency of 3.0 GHz, turbo frequency of 3.4 GHz)
- Up to eight NVIDIA Tesla V100 GPUs on an ECS
- NVIDIA CUDA parallel computing and common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
- 14 TFLOPS of single-precision computing and 7 TFLOPS of double-precision computing on a single GPU
- NVIDIA Tensor Cores with 112 TFLOPS of mixed-precision computing for deep learning
- Up to 30 Gbit/s of network bandwidth on a single ECS
- 32 GiB of HBM2 GPU memory per GPU, with a bandwidth of 900 GB/s
- Comprehensive basic capabilities
- User-defined network with flexible subnet division and network access policy configuration
- Mass storage, elastic expansion, and backup and restoration
- Elastic scaling
- Flexibility
Similar to other types of ECSs, P2s ECSs can be provisioned in a few minutes.
- Excellent supercomputing ecosystem
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2s ECSs.
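The 900 GB/s HBM2 figure sets a hard floor on memory-bound workloads: a kernel that must touch all 32 GiB of a V100's memory cannot finish faster than capacity divided by bandwidth. A back-of-the-envelope sketch using the numbers from the feature list above:

```python
# Lower bound on the wall time of a kernel that streams a V100's entire
# HBM2 once; capacity and bandwidth figures are from the P2s feature list.
HBM2_CAPACITY_BYTES = 32 * 2**30       # 32 GiB
HBM2_BANDWIDTH_BYTES_PER_S = 900e9     # 900 GB/s

def min_sweep_time_ms(bytes_touched=HBM2_CAPACITY_BYTES):
    """Minimum milliseconds to stream `bytes_touched` through HBM2."""
    return bytes_touched / HBM2_BANDWIDTH_BYTES_PER_S * 1e3

print(f"{min_sweep_time_ms():.1f} ms")  # roughly 38 ms per full memory pass
```

This roofline-style bound is useful when sizing workloads: any kernel that reads the whole working set takes at least this long per pass, regardless of compute throughput.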
Supported Software
- Common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
- CUDA GPU rendering supported by RedShift for Autodesk 3ds Max and V-Ray for 3ds Max
- Agisoft PhotoScan
- MapD
Notes
- After a P2s ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is billed based on the disk capacity. If other products, such as EVS disks, EIP, and bandwidth are associated with the ECS, these products are billed separately.
Resources will be released after a P2s ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop the ECS.
- By default, P2s ECSs created using a public image have the Tesla driver installed.
- If a P2s ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing capabilities, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.
Computing-accelerated P3snl
Overview
P3snl ECSs use NVIDIA A100 GPUs and provide flexible, ultra-high-performance computing. P3snl ECSs have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics. Theoretically, a single GPU delivers 19.5 TFLOPS of FP32 performance and 156 TFLOPS of TF32 Tensor Core performance (312 TFLOPS with sparsity enabled).
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Connection | GPU Memory (GiB) | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| p3snl.2xlarge.8 | 8 | 64 | 10/4 | 100 | 4 | 4 | 1 × NVIDIA A100 40 GB | PCIe Gen3 | 1 × 40 GiB | KVM |
| p3snl.4xlarge.8 | 16 | 128 | 15/8 | 200 | 8 | 8 | 2 × NVIDIA A100 40 GB | PCIe Gen3 | 2 × 40 GiB | KVM |
| p3snl.8xlarge.8 | 32 | 256 | 25/15 | 350 | 16 | 8 | 4 × NVIDIA A100 40 GB | PCIe Gen3 | 4 × 40 GiB | KVM |
| p3snl.16xlarge.8 | 64 | 512 | 30/30 | 700 | 32 | 8 | 8 × NVIDIA A100 40 GB | PCIe Gen3 | 8 × 40 GiB | KVM |
P3snl ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6248R processors with a base frequency of 3.0 GHz
- Up to eight NVIDIA A100 GPUs on an ECS
- NVIDIA CUDA parallel computing and common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
- 19.5 TFLOPS of single-precision computing and 9.7 TFLOPS of double-precision computing on a single GPU
- NVIDIA Tensor Cores with 156 TFLOPS of TF32 performance for deep learning
- Up to 40 Gbit/s of network bandwidth on a single ECS
- 40 GiB of HBM2 GPU memory per GPU, with a bandwidth of 1,935 GB/s
- Comprehensive basic capabilities
- User-defined network with flexible subnet division and network access policy configuration
- Mass storage, elastic expansion, and backup and restoration
- Elastic scaling
- Flexibility
Similar to other types of ECSs, P3snl ECSs can be provisioned in a few minutes.
- Excellent supercomputing ecosystem
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P3snl ECSs.
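The jump from 156 to 312 TFLOPS with sparsity enabled relies on NVIDIA's 2:4 structured-sparsity format, in which at most two of every four consecutive weights are nonzero. A pure-Python illustration of the pruning pattern (a sketch only, not NVIDIA's pruning algorithm or kernel):

```python
def prune_2_of_4(weights):
    """Zero the two smallest-magnitude values in every group of four,
    producing the 2:4 sparsity pattern that A100 Tensor Cores exploit."""
    pruned = list(weights)
    for i in range(0, len(pruned) - len(pruned) % 4, 4):
        group = pruned[i:i + 4]
        # Indices of the two smallest-magnitude entries in this group.
        drop = sorted(range(4), key=lambda j: abs(group[j]))[:2]
        for j in drop:
            pruned[i + j] = 0.0
    return pruned

print(prune_2_of_4([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.03, 0.6]))
```

Because exactly half of the multiply-accumulates can then be skipped in hardware, the theoretical throughput doubles.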
Supported Software
- Common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
- CUDA GPU rendering supported by RedShift for Autodesk 3ds Max and V-Ray for 3ds Max
- Agisoft PhotoScan
- MapD
- More than 2,000 GPU-accelerated applications such as Amber, NAMD, and VASP
Notes
- After a P3snl ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is billed based on the disk capacity. If other products, such as EVS disks, EIP, and bandwidth are associated with the ECS, these products are billed separately.
Resources will be released after a P3snl ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop the ECS.
- By default, P3snl ECSs created using a public image have the Tesla driver installed.
- If a P3snl ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If the Tesla driver has not been installed, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing capabilities, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.
Inference-accelerated Pi2
Overview
Pi2 ECSs use NVIDIA Tesla T4 GPUs dedicated for real-time AI inference. These ECSs leverage the T4's INT8 precision to deliver up to 130 TOPS of INT8 computing. Pi2 ECSs can also be used for light-workload training.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | GPUs | GPU Memory (GiB) | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|
| pi2.2xlarge.4 | 8 | 32 | 10/4 | 50 | 4 | 1 × T4 | 1 × 16 | - | KVM |
| pi2.4xlarge.4 | 16 | 64 | 15/8 | 100 | 8 | 2 × T4 | 2 × 16 | - | KVM |
| pi2.8xlarge.4 | 32 | 128 | 25/15 | 200 | 16 | 4 × T4 | 4 × 16 | - | KVM |
| pi2.16xlarge.4 | 64 | 256 | 30/30 | 400 | 32 | 8 × T4 | 8 × 16 | - | KVM |
Pi2 ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6278 processors (base frequency of 2.6 GHz, turbo frequency of 3.5 GHz) or Intel® Xeon® Scalable 6151 processors (base frequency of 3.0 GHz, turbo frequency of 3.4 GHz)
- Up to eight NVIDIA Tesla T4 GPUs on an ECS
- GPU hardware passthrough
- Up to 8.1 TFLOPS of single-precision computing on a single GPU
- Up to 130 TOPS of INT8 computing on a single GPU
- 16 GiB of GDDR6 GPU memory with a bandwidth of 320 GB/s on a single GPU
- One built-in NVENC and two NVDECs per GPU
Supported Software
Pi2 ECSs are used in GPU-based inference computing scenarios, such as image recognition, speech recognition, and natural language processing. The Pi2 ECSs can also be used for light-load training.
Pi2 ECSs support the following commonly used software:
- Deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
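The 130 TOPS INT8 figure is reached by quantizing FP32 weights and activations to 8-bit integers before inference. A minimal pure-Python sketch of symmetric quantization (illustrative only; production stacks such as TensorRT use calibration and per-channel scales):

```python
def quantize_int8(values):
    """Symmetric INT8 quantization: map floats to integer codes in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from the integer codes."""
    return [c * scale for c in codes]

codes, scale = quantize_int8([0.4, -1.0, 0.2, 0.8])
print(codes)                     # integer codes, e.g. [51, -127, 25, 102]
print(dequantize(codes, scale))  # close to the original floats
```

The quantization error is bounded by half a scale step, which is why INT8 inference usually preserves accuracy while running on the much faster integer units.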
Notes
- After a Pi2 ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is billed based on the disk capacity. If other products, such as EVS disks, EIP, and bandwidth are associated with the ECS, these products are billed separately.
Resources will be released after a Pi2 ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop the ECS.
- Pi2 ECSs support automatic recovery when the hosts accommodating such ECSs become faulty.
- By default, Pi2 ECSs created using a public image have the Tesla driver installed.
- If a Pi2 ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing capabilities, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.
Inference-accelerated Pi2nl
Overview
Pi2nl ECSs use NVIDIA Tesla T4 GPUs dedicated for real-time AI inference. These ECSs leverage the T4's INT8 precision to deliver up to 130 TOPS of INT8 computing. Pi2nl ECSs can also be used for light-workload training.
Specifications
| Flavor | vCPUs | Memory (GiB) | Max./Assured Network Bandwidth (Gbit/s) | Max. Network PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Local Disks | Virtualization |
|---|---|---|---|---|---|---|---|---|---|---|
| pi2nl.2xlarge.4 | 8 | 32 | 10/4 | 50 | 4 | 4 | 1 × T4 | 1 × 16 | - | KVM |
| pi2nl.4xlarge.4 | 16 | 64 | 15/8 | 100 | 8 | 8 | 2 × T4 | 2 × 16 | - | KVM |
| pi2nl.8xlarge.4 | 32 | 128 | 25/15 | 200 | 16 | 8 | 4 × T4 | 4 × 16 | - | KVM |
| pi2nl.16xlarge.4 | 64 | 256 | 30/30 | 400 | 32 | 8 | 8 × T4 | 8 × 16 | - | KVM |
Pi2nl ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6278 processors (base frequency of 2.6 GHz, turbo frequency of 3.5 GHz) or Intel® Xeon® Scalable 6151 processors (base frequency of 3.0 GHz, turbo frequency of 3.4 GHz)
- Up to eight NVIDIA Tesla T4 GPUs on an ECS
- GPU hardware passthrough
- Up to 8.1 TFLOPS of single-precision computing on a single GPU
- Up to 130 TOPS of INT8 computing on a single GPU
- 16 GiB of GDDR6 GPU memory with a bandwidth of 320 GB/s on a single GPU
- One built-in NVENC and two NVDECs per GPU
Supported Software
Pi2nl ECSs are used in GPU-based inference computing scenarios, such as image recognition, speech recognition, and natural language processing. The Pi2nl ECSs can also be used for light-load training.
Pi2nl ECSs support the following commonly used software:
- Deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
Notes
- After a Pi2nl ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is billed based on the disk capacity. If other products, such as EVS disks, EIP, and bandwidth are associated with the ECS, these products are billed separately.
Resources will be released after a Pi2nl ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop the ECS.
- Pi2nl ECSs support automatic recovery when the hosts accommodating such ECSs become faulty.
- By default, Pi2nl ECSs created using a public image have the Tesla driver installed.
- If a Pi2nl ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If the Tesla driver has not been installed, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing capabilities, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.