CCE AI Suite (NVIDIA GPU)

Updated on 2025-02-18 GMT+08:00

Introduction

CCE AI Suite (NVIDIA GPU) is a device management add-on that enables containers to use NVIDIA GPUs. This add-on must be installed before GPU nodes in a cluster can be used.

Notes and Constraints

  • The driver to be downloaded must be a .run file.
  • Only NVIDIA Tesla drivers are supported, not GRID drivers.
  • When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the validity of the address.
  • The gpu-beta add-on only downloads the driver and executes the installation script. The add-on status indicates only how the add-on itself is running, not whether the driver was successfully installed.
  • CCE does not guarantee compatibility between the GPU driver version and the CUDA library version of your application. You need to verify the compatibility yourself.
  • If a custom OS image already has a GPU driver installed, CCE cannot ensure that this driver is compatible with other GPU components, such as the monitoring components used in CCE.
  • If the GPU driver version you use is not listed in Supported GPU Drivers, the driver may be incompatible with the OS, instance type, or container runtime. As a result, the driver installation may fail or the GPU add-on may be abnormal. If you use a custom GPU driver, verify its availability first.
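The first two constraints can be checked before configuring the add-on. Below is a minimal pre-flight sketch in Python (the `check_driver_link` helper is hypothetical, not part of CCE) that flags a link that is not a .run installer or that points to a GRID driver, since CCE itself does not validate the address:

```python
from urllib.parse import urlparse

def check_driver_link(url: str) -> list:
    """Return a list of problems found with a GPU driver download link.

    Hypothetical pre-flight check based on the constraints above: the file
    must be a .run installer, and GRID drivers are not supported. Checking
    up front avoids a failed installation later.
    """
    problems = []
    path = urlparse(url).path
    if not path.endswith(".run"):
        problems.append("driver file must be a .run installer")
    if "grid" in path.lower():
        problems.append("GRID drivers are not supported; use a Tesla driver")
    return problems

# Example using the public Tesla driver link shown later on this page:
url = "https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
print(check_driver_link(url))  # → []
```

An empty list means the link passes both checks; otherwise each entry names a constraint that the link violates.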

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Add-ons, locate CCE AI Suite (NVIDIA GPU) on the right, and click Install.
  2. Determine whether to enable Use DCGM-Exporter to Observe DCGM Metrics. After this function is enabled, DCGM-Exporter is deployed on the GPU node.

    After DCGM-Exporter is enabled, if the collected GPU monitoring data needs to be reported to AOM, install the Cloud Native Cluster Monitoring add-on and enable reporting of monitoring data to AOM. Then choose Settings in the navigation pane, click the Monitoring tab, and enable the ServiceMonitor of DCGM-Exporter. GPU metrics reported to AOM are custom metrics and are billed on a pay-per-use basis. For details, see Price Calculator.
    NOTICE:

    DCGM-Exporter can be deployed only if the add-on version is 2.7.40 or later. DCGM-Exporter retains the community edition's capabilities and does not support the sharing mode or GPU virtualization.

  3. Configure the add-on parameters.

    Table 1 Add-on parameters

    Parameter: Default Cluster Driver

    Description: All GPU nodes in a cluster use the same driver. Select a proper GPU driver version, or choose a custom driver and enter the download link of the NVIDIA driver.
    NOTICE:
    • If the download link is a public network address, for example, https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run, bind an EIP to each GPU node. For details about how to obtain the driver link, see Obtaining the Driver Link from Public Network.
    • If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. For details about how to obtain the driver link, see Obtaining the Driver Link from OBS.
    • Ensure that the NVIDIA driver version matches the GPU node. For details about the version mapping, see Supported GPU Drivers.
    • If the driver version is changed, restart the node for the change to take effect.
    • Use driver version 470 or later for Huawei Cloud EulerOS 2.0, which is built on Linux kernel 5.x, and driver version 515 or later for Ubuntu 22.04.
    NOTE:

    After the add-on is installed, you can configure GPU virtualization and node pool drivers on the Heterogeneous Resources tab in Settings.

  4. Click Install.

    NOTE:

    If the add-on is uninstalled, GPU pods newly scheduled to the nodes cannot run properly, but GPU pods already running on the nodes will not be affected.
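The per-OS minimum driver versions mentioned in the parameter notes (470 or later for Huawei Cloud EulerOS 2.0, 515 or later for Ubuntu 22.04) can be expressed as a small compatibility check. The sketch below assumes only these two documented constraints; other OSs follow the Supported GPU Drivers table, and the helper names are illustrative, not part of CCE:

```python
# Minimum NVIDIA driver major versions from the notes above (assumption:
# only these two OS constraints apply; other OSs default to no minimum).
MIN_DRIVER_MAJOR = {
    "Huawei Cloud EulerOS 2.0": 470,  # built on Linux kernel 5.x
    "Ubuntu 22.04": 515,
}

def driver_major(version: str) -> int:
    """Extract the major component of a driver version such as '535.54.03'."""
    return int(version.split(".")[0])

def meets_minimum(os_name: str, driver_version: str) -> bool:
    """Check a driver version against the documented per-OS minimum, if any."""
    minimum = MIN_DRIVER_MAJOR.get(os_name, 0)
    return driver_major(driver_version) >= minimum

print(meets_minimum("Ubuntu 22.04", "535.54.03"))   # → True
print(meets_minimum("Ubuntu 22.04", "470.141.03"))  # → False
```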

Verifying the Add-on

After the add-on is installed, run the nvidia-smi command on the GPU node and in a container that uses GPU resources to verify the availability of the GPU device and driver.

  • GPU node:
    # If the add-on version is earlier than 2.0.0, run the following command:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
    
    # If the add-on version is 2.0.0 or later and the driver installation path is changed, run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi
  • Container:
    cd /usr/local/nvidia/bin && ./nvidia-smi

If GPU information is returned, the device is available and the add-on has been installed.
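The path rule above (driver under /opt/cloud/cce/nvidia for add-on versions earlier than 2.0.0, under /usr/local/nvidia otherwise) can be sketched as a small version comparison; `nvidia_smi_dir` is an illustrative helper, not a CCE API:

```python
def nvidia_smi_dir(addon_version: str) -> str:
    """Return the directory containing nvidia-smi on a GPU node.

    Sketch of the rule above: add-on versions earlier than 2.0.0 install
    the driver under /opt/cloud/cce/nvidia, later versions under
    /usr/local/nvidia (the default installation path).
    """
    major, minor, patch = (int(x) for x in addon_version.split("."))
    if (major, minor, patch) < (2, 0, 0):
        return "/opt/cloud/cce/nvidia/bin"
    return "/usr/local/nvidia/bin"

print(nvidia_smi_dir("1.2.28"))  # → /opt/cloud/cce/nvidia/bin
print(nvidia_smi_dir("2.7.42"))  # → /usr/local/nvidia/bin
```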

Supported GPU Drivers

NOTICE:
  • The list of supported GPU drivers applies only to GPU add-on versions 1.2.28 and later.
  • If you want to use the latest GPU driver, upgrade your GPU add-on to the latest version.
Table 2 Supported GPU drivers

Tesla T4 (CCE standard cluster; specifications: g6, pi2)

  • Huawei Cloud EulerOS 2.0 (GPU virtualization supported): 535.54.03, 510.47.03, 470.57.02
  • Ubuntu 22.04.4: 535.161.08
  • Ubuntu 22.04.3: 535.54.03, 470.141.03
  • CentOS Linux release 7.6: 535.54.03, 470.141.03
  • EulerOS release 2.9: 535.54.03, 470.141.03
  • EulerOS release 2.5: 535.54.03, 470.141.03
  • Ubuntu 18.04 (end of maintenance): 470.141.03
  • EulerOS release 2.3 (end of maintenance): 470.141.03

Volta V100 (CCE standard cluster; specifications: p2s, p2vs, p2v)

  • Huawei Cloud EulerOS 2.0 (GPU virtualization supported): 535.54.03, 510.47.03, 470.57.02
  • Ubuntu 22.04.4: 535.161.08
  • Ubuntu 22.04.3: 535.54.03, 470.141.03
  • CentOS Linux release 7.6: 535.54.03, 470.141.03
  • EulerOS release 2.9: 535.54.03, 470.141.03
  • EulerOS release 2.5: 535.54.03, 470.141.03
  • Ubuntu 18.04 (end of maintenance): 470.141.03
  • EulerOS release 2.3 (end of maintenance): 470.141.03
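When configuring a custom driver link, the table above can be encoded as a lookup so the chosen version is checked programmatically before installation. This is an illustrative sketch, not a CCE API; it assumes both GPU models share the same per-OS version lists, as the table shows:

```python
# Supported driver versions from Table 2, keyed by OS. The Tesla T4 and
# Volta V100 rows list the same versions per OS, so one mapping covers both.
SUPPORTED_DRIVERS = {
    "Huawei Cloud EulerOS 2.0": ["535.54.03", "510.47.03", "470.57.02"],
    "Ubuntu 22.04.4": ["535.161.08"],
    "Ubuntu 22.04.3": ["535.54.03", "470.141.03"],
    "CentOS Linux release 7.6": ["535.54.03", "470.141.03"],
    "EulerOS release 2.9": ["535.54.03", "470.141.03"],
    "EulerOS release 2.5": ["535.54.03", "470.141.03"],
    "Ubuntu 18.04": ["470.141.03"],
    "EulerOS release 2.3": ["470.141.03"],
}

def is_supported(os_name: str, driver_version: str) -> bool:
    """Check whether a driver version is listed for the given node OS."""
    return driver_version in SUPPORTED_DRIVERS.get(os_name, [])

print(is_supported("Ubuntu 22.04.3", "535.54.03"))  # → True
```

A version outside this mapping is not necessarily broken, but per the constraints above you must verify its availability yourself.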

Obtaining the Driver Link from Public Network

  1. Log in to the CCE console.
  2. Create a node. In the Specifications area, select a GPU node flavor. The GPU card model is displayed in the lower part of the area.

    Figure 1 Viewing the GPU card model

  3. Log in to the NVIDIA driver download page and search for the driver information. The OS must be Linux 64-bit.

    Figure 2 Selecting parameters

  4. After confirming the driver information, click Find. On the displayed page, find the driver to be downloaded and click View.

    Figure 3 Viewing the driver information

  5. Click Download and copy the download link.

    Figure 4 Obtaining the link

Obtaining the Driver Link from OBS

  1. Upload the driver to OBS and set the driver file to public read. For details, see Uploading an Object.

    NOTE:

    When the node is restarted, the driver will be downloaded and installed again. Ensure that the OBS bucket link of the driver is valid.

  2. In the bucket list, click the bucket name. The Overview page of the bucket is displayed.
  3. In the navigation pane, choose Objects.
  4. Click the name of the target object and copy the driver link on the object details page.

    Figure 5 Copying an OBS link
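Whether a driver link requires an EIP on each GPU node (see Table 1) depends on whether it is an OBS URL or a public network address. The sketch below assumes OBS object URLs follow the `<bucket>.obs.<region>.myhuaweicloud.com` host pattern; the `needs_eip` helper is hypothetical and any other host is treated as a public address:

```python
from urllib.parse import urlparse

def needs_eip(driver_url: str) -> bool:
    """Guess whether GPU nodes need an EIP to download this driver.

    Assumption: OBS object URLs use the <bucket>.obs.<region>.myhuaweicloud.com
    host pattern; anything else is treated as a public-network address that
    requires an EIP on each GPU node.
    """
    host = urlparse(driver_url).hostname or ""
    return ".obs." not in host or not host.endswith(".myhuaweicloud.com")

print(needs_eip("https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"))  # → True
```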

Components

Table 3 Add-on components

nvidia-driver-installer (DaemonSet)
  A workload for installing the NVIDIA GPU driver on a node. It uses resources only during the installation; once the installation is finished, no resources are used.

nvidia-gpu-device-plugin (DaemonSet)
  A Kubernetes device plugin that provides NVIDIA GPU heterogeneous compute for containers.

nvidia-operator (Deployment)
  A component that provides NVIDIA GPU node management capabilities for clusters.

GPU Metrics

For more details, see GPU Metrics.

Change History

Table 4 Release history (add-on version, supported cluster versions, new features)

2.7.42 (v1.28, v1.29, v1.30, v1.31): The NVIDIA 535.216.03 driver is added to support xGPUs.

2.7.41 (v1.28, v1.29, v1.30, v1.31): The NVIDIA 535.216.03 driver is added to support xGPUs.

2.7.40 (v1.28, v1.29, v1.30, v1.31): Integrated with DCGM-Exporter to observe the DCGM metrics of NVIDIA GPU nodes in clusters.

2.7.19 (v1.28, v1.29, v1.30): Fixed the nvidia-container-toolkit CVE-2024-0132 container escape vulnerability.

2.7.13 (v1.28, v1.29, v1.30): Supported xGPU configuration by node pool. Supported GPU rendering. Clusters 1.30 are supported.

2.6.4 (v1.28, v1.29): Updated the isolation logic of GPU cards.

2.6.1 (v1.28, v1.29): Upgraded the base images of the add-on.

2.5.6 (v1.28): Fixed an issue that occurred during driver installation.

2.5.4 (v1.28): Clusters 1.28 are supported.

2.1.24 (v1.21, v1.23, v1.25, v1.27): Added xGPU data to GPU basic metrics.

2.1.14 (v1.21, v1.23, v1.25, v1.27): Fixed the nvidia-container-toolkit CVE-2024-0132 container escape vulnerability.

2.1.8 (v1.21, v1.23, v1.25, v1.27): Fixed some issues.

2.0.69 (v1.21, v1.23, v1.25, v1.27): Upgraded the base images of the add-on.

2.0.46 (v1.21, v1.23, v1.25, v1.27): Supported NVIDIA driver 535. Non-root users can use xGPUs. Optimized startup logic.

2.0.18 (v1.21, v1.23, v1.25, v1.27): Supported Huawei Cloud EulerOS 2.0.

1.2.28 (v1.19, v1.21, v1.23, v1.25): Adapted to Ubuntu 22.04. Optimized the automatic mounting of the GPU driver directory.

1.2.24 (v1.19, v1.21, v1.23, v1.25): Enabled a node pool to configure GPU driver versions. Supported GPU metric collection.

1.2.20 (v1.19, v1.21, v1.23, v1.25): Set the add-on alias to gpu.

1.2.17 (v1.15, v1.17, v1.19, v1.21, v1.23): Added the nvidia-driver-install pod limits configuration.

1.2.15 (v1.15, v1.17, v1.19, v1.21, v1.23): CCE clusters 1.23 are supported.

1.2.11 (v1.15, v1.17, v1.19, v1.21): Supported EulerOS 2.10.

1.2.10 (v1.15, v1.17, v1.19, v1.21): CentOS supports the GPU driver of the new version.

1.2.9 (v1.15, v1.17, v1.19, v1.21): CCE clusters 1.21 are supported.

1.2.2 (v1.15, v1.17, v1.19): Supported the new EulerOS kernel.

1.2.1 (v1.15, v1.17, v1.19): CCE clusters 1.19 are supported. Added taint tolerance configuration.

1.1.13 (v1.13, v1.15, v1.17): Supported kernel-3.10.0-1127.19.1.el7.x86_64 for CentOS 7.6.

1.1.11 (v1.15, v1.17): Allowed users to customize driver addresses to download drivers. Clusters 1.15 and 1.17 are supported.
