CCE AI Suite (NVIDIA GPU)
Introduction
The CCE AI Suite (NVIDIA GPU) add-on helps you use and manage GPUs in your clusters. It supports access to GPUs in containers and helps you efficiently run and maintain GPU-based compute-intensive workloads in cloud native environments. With this add-on, both CCE standard and Turbo clusters can handle GPU scheduling, install drivers automatically, manage runtimes, and monitor performance. This means you get full support for GPU workloads throughout their entire lifecycle. To run GPU nodes in a cluster, you must install this add-on.
How nvidia-gpu-device-plugin Works
nvidia-gpu-device-plugin is one of the core components of CCE AI Suite (NVIDIA GPU). As a bridge between the container platform and GPU hardware, nvidia-gpu-device-plugin abstracts physical GPUs into resources that can be identified and scheduled by the container platform. This addresses the GPU allocation and usage problems in containerized environments.

- Sending a registration request: nvidia-gpu-device-plugin sends a registration request to kubelet as a client along with:
  - Device name (nvidia.com/gpu): identifies the type of hardware resource managed by the add-on so that kubelet can identify and schedule it.
  - Unix socket: enables local gRPC communication between the component and kubelet to ensure that kubelet can call the correct services.
  - API version: specifies the version of the Device Plugin API protocol, ensuring that the communication protocols of both parties are compatible.
- Starting a service: After registration, nvidia-gpu-device-plugin starts a gRPC server to provide services for external systems. The gRPC server handles kubelet requests, including device list queries, health status reporting, and resource allocation. The listening address (Unix socket path) of the gRPC server and supported Device Plugin API version have been reported to kubelet during registration. This ensures that kubelet can properly establish connections and call the correct APIs based on the registration information.
- Health monitoring: After the gRPC server is started, kubelet establishes a persistent connection with nvidia-gpu-device-plugin through the ListAndWatch API to continuously listen to the device IDs and their health. If a device becomes unhealthy, nvidia-gpu-device-plugin reports the error to kubelet through the connection.
- Information reporting: kubelet integrates the device information into the node statuses and reports resource details such as the number of devices to Kubernetes API server. The scheduler (kube-scheduler or Volcano) uses these details to make scheduling decisions.
- Persistent storage: CCE stores the GPU device information (such as quantity and status) reported by nodes in etcd for cluster-level resource persistence. This ensures that GPU data can be kept after a cluster component is faulty or restarted, and provides consistent data sources for components such as the scheduler and controller, ensuring reliability of resource scheduling and management.
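The registration and health-monitoring flow above can be sketched in a simplified model. This is an illustrative Python sketch only: the real nvidia-gpu-device-plugin talks to kubelet over gRPC on a Unix socket, and the class and method names below (`DevicePlugin`, `register`, `list_and_watch`) are stand-ins for the actual Device Plugin API, not its real signatures. The socket path and API version shown are typical values, not guaranteed ones.

```python
# Illustrative model of the device plugin workflow; not the real gRPC implementation.
from dataclasses import dataclass, field

DEVICE_NAME = "nvidia.com/gpu"   # resource name registered with kubelet
SOCKET_PATH = "/var/lib/kubelet/device-plugins/nvidia.sock"  # typical socket location
API_VERSION = "v1beta1"          # Device Plugin API protocol version

@dataclass
class GPUDevice:
    device_id: str
    healthy: bool = True

@dataclass
class DevicePlugin:
    devices: list = field(default_factory=list)

    def register(self):
        # Step 1: the registration request carries the resource name,
        # the Unix socket path, and the API version.
        return {"resource_name": DEVICE_NAME,
                "endpoint": SOCKET_PATH,
                "version": API_VERSION}

    def list_and_watch(self):
        # Steps 2-3: kubelet keeps a persistent connection open and
        # receives the device IDs and their health on every change.
        return [{"id": d.device_id,
                 "health": "Healthy" if d.healthy else "Unhealthy"}
                for d in self.devices]

plugin = DevicePlugin(devices=[GPUDevice("GPU-0"), GPUDevice("GPU-1", healthy=False)])
print(plugin.register()["resource_name"])   # nvidia.com/gpu
print(plugin.list_and_watch())
```

kubelet then folds the reported device list into the node status, which is how the scheduler learns how many `nvidia.com/gpu` resources each node offers.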
Notes and Constraints
- The driver to be downloaded must be a .run file.
- Only NVIDIA Tesla drivers are supported, not GRID drivers.
- When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the address validity.
- This add-on only downloads the driver and executes the installation script. The add-on status only indicates how the add-on itself is running, not whether the driver has been installed successfully.
- CCE does not guarantee the compatibility between the GPU driver version and the CUDA library version of your application. You need to check the compatibility by yourself.
- If a custom OS image has had a GPU driver installed, CCE cannot ensure that the GPU driver is compatible with other GPU components such as the monitoring components used in CCE.
- If the version of the GPU driver you used is not included in the Supported GPU Drivers, the GPU driver may be incompatible with the OS, ECS type, or container runtime. As a result, the driver installation may fail or the CCE AI Suite (NVIDIA GPU) add-on may be abnormal. If you use a customized GPU driver, verify its availability.
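Because CCE does not verify a custom driver download address, it can help to check the link yourself before installing the add-on. The helper below is an illustrative sketch, not part of CCE: it checks that a link points to a `.run` file over HTTP(S) and that the address answers a HEAD request (the reachability check needs network access from wherever you run it). The example URL is the one quoted in this document.

```python
# Illustrative pre-checks for a custom driver link; not a CCE tool.
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def looks_like_driver_link(url: str) -> bool:
    """The driver to be downloaded must be a .run file served over HTTP(S)."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.path.endswith(".run")

def is_reachable(url: str, timeout: int = 10) -> bool:
    """HEAD-check the address; CCE itself does not validate it for you."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

url = "https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
print(looks_like_driver_link(url))   # True
```

If the link is an OBS URL instead of a public address, the same format check applies, but reachability must be tested from inside the VPC since no EIP is required.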
Installing the Add-on
- Log in to the CCE console and click the cluster name to access the cluster console.
- In the navigation pane, choose Add-ons. In the right pane, find the CCE AI Suite (NVIDIA GPU) add-on and click Install.
- Determine whether to enable Use DCGM-Exporter to Observe DCGM Metrics. After this function is enabled, DCGM-Exporter is deployed on the GPU node.
DCGM-Exporter can be deployed only if the add-on version is 2.7.40 or later. DCGM-Exporter retains the capabilities of the community version and does not support GPU sharing mode or GPU virtualization.
After DCGM-Exporter is enabled, if you need to report the collected GPU monitoring data to AOM, see Comprehensive Monitoring of DCGM Metrics.
- Configure the add-on parameters.
Table 1 Add-on parameters

Parameter | Description
---|---
Default Cluster Driver | All GPU nodes in a cluster use the same driver. You can select a proper GPU driver version, or customize the driver link and enter the download link of the NVIDIA driver.

NOTICE:
- If the download link is a public network address, for example, https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run, bind an EIP to each GPU node. For details about how to obtain the driver link, see Internet.
- If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. For details about how to obtain the driver link, see OBS Link.
- Ensure that the NVIDIA driver version matches the GPU node. For details about the version mapping, see Supported GPU Drivers.
- If the driver version is changed, restart the node to apply the change.
- Use driver version 470 or later for Huawei Cloud EulerOS 2.0, which is built on Linux kernel 5.x, and driver version 515 or later for Ubuntu 22.04.
After the add-on is installed, you can configure GPU virtualization and node pool drivers on the Heterogeneous Resources tab in Settings.
- Click Install.
If the add-on is uninstalled, GPU pods newly scheduled to the nodes cannot run properly, but GPU pods already running on the nodes will not be affected.
Verifying the Add-on
After the add-on is installed, run the nvidia-smi command on the GPU node and the container that schedules GPU resources to verify the availability of the GPU device and driver.
- GPU node:
  - If the add-on version is earlier than 2.0.0, run the following command:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
  - If the add-on version is 2.0.0 or later, run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi
- Container:
  - If the cluster version is v1.27 or earlier, run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi
  - If the cluster version is v1.28 or later, run the following command:
    cd /usr/bin && ./nvidia-smi

If GPU information is returned, the device is available and the add-on has been installed correctly.
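To run `nvidia-smi` inside a container in the first place, the pod must request the `nvidia.com/gpu` resource that the device plugin registers. A minimal illustrative pod spec follows; the pod name and image are placeholders, so substitute an image that matches the CUDA version your workload needs.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # hypothetical name for this example
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example image only
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # request one whole GPU from the device plugin
  restartPolicy: Never
```

If the pod completes and its logs show the GPU table, both the driver on the node and the container runtime integration are working.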
Managing the Add-on
Once the add-on is installed, you can upgrade or roll it back as needed. Before upgrading or rolling back the CCE AI Suite (NVIDIA GPU) add-on, make sure there are no GPU virtualization workloads running on the GPU nodes. If a GPU node has GPU virtualization workloads, you need to drain that node when upgrading or rolling back the add-on. For details, see How Can I Drain a GPU Node After Upgrading or Rolling Back the CCE AI Suite (NVIDIA GPU) Add-on?
Supported GPU Drivers

- The list of supported GPU drivers applies only to CCE AI Suite (NVIDIA GPU) of v1.2.28 or later.
- To use the latest GPU driver, upgrade your CCE AI Suite (NVIDIA GPU) to the latest version.
- NVIDIA no longer provides updates or security patches for GPU drivers that have reached their end of life (EOL). For details, see Driver Lifecycle. For example, a Production Branch (PB) provides one-year support from the date of release, and a Long-Term Support Branch (LTSB) provides three-year support.
According to this policy, CCE does not provide technical support for GPU drivers that have reached EOL, including driver installation and updates. The following drivers have reached EOL: 510.47.03, 470.141.03.
The OS columns below list the supported driver versions for each OS.

GPU Model | Supported Cluster Type | Specification | Huawei Cloud EulerOS 2.0 (GPU Virtualization Supported) | Ubuntu 22.04 | CentOS Linux release 7.6 | EulerOS release 2.9 | EulerOS release 2.5 | Ubuntu 18.04 (EOM) | EulerOS release 2.3 (EOM)
---|---|---|---|---|---|---|---|---|---
Tesla T4 | CCE standard cluster | g6, pi2 | 535.216.03, 535.54.03, 510.47.03, 470.57.02 | 535.216.03, 535.161.08, 535.54.03, 470.141.03 | 535.54.03, 470.141.03 | 535.54.03, 470.141.03 | 535.54.03, 470.141.03 | 470.141.03 | 470.141.03
Tesla V100 | CCE standard cluster | p2s, p2vs, p2v | 535.216.03, 535.54.03, 510.47.03, 470.57.02 | 535.216.03, 535.161.08, 535.54.03, 470.141.03 | 535.54.03, 470.141.03 | 535.54.03, 470.141.03 | 535.54.03, 470.141.03 | 470.141.03 | 470.141.03
Obtaining the Driver Link
If you install a driver using a custom driver link, the CCE AI Suite (NVIDIA GPU) add-on lets you obtain the driver link either from the Internet or from an OBS link. To obtain a driver link, take the following steps:
- Log in to the CCE console and click the cluster name to access the cluster console.
- Create a node. In the Specifications area, select the GPU node flavor. The GPU card models are displayed in the lower part of the area.
Figure 4 Viewing the GPU card model
- Log in to the NVIDIA driver download page and search for the driver information. The OS must be Linux 64-bit.
Figure 5 Selecting parameters
- After confirming the driver information, click Find. On the displayed page, find the driver to be downloaded and click View.
Figure 6 Viewing the driver information
- Click Download and copy the download link.
Figure 7 Obtaining the link
- Upload the driver to OBS and set the driver file to public read. For details, see Uploading an Object.
When the node is restarted, the driver will be downloaded and installed again. Ensure that the OBS bucket link of the driver is valid.
- In the bucket list, click a bucket name, and then the Overview page of the bucket is displayed.
- In the navigation pane, choose Objects.
- Select the name of the target object and copy the driver link on the object details page.
Figure 8 Copying an OBS link
Components
Component | Description | Resource Type
---|---|---
nvidia-driver-installer | A workload for installing the NVIDIA GPU driver on a node. It only uses resources during the installation process; once the installation is finished, no resources are used. | DaemonSet
nvidia-gpu-device-plugin | A Kubernetes device plugin that provides NVIDIA GPU heterogeneous compute for containers | DaemonSet
nvidia-operator | A component that provides NVIDIA GPU node management capabilities for clusters | Deployment
dcgm-exporter | A component installed when DCGM-Exporter is enabled to observe DCGM metrics. It collects GPU metrics. | DaemonSet
Helpful Links
- CCE AI Suite (NVIDIA GPU) provides GPU monitoring metrics. For details about GPU metrics, see GPU Metrics.
- After DCGM-Exporter is enabled, if you need to report the collected GPU monitoring data to AOM, see Comprehensive Monitoring of DCGM Metrics.
- To further use GPU virtualization, see GPU Virtualization.
Release History
Add-on Version | Supported Cluster Version | New Feature
---|---|---
2.8.4 | v1.28, v1.29, v1.30, v1.31, v1.32 | Fixed CVE-2025-23266 and CVE-2025-23267.
2.8.1 | v1.28, v1.29, v1.30, v1.31, v1.32 | Fixed some issues.
2.7.84 | v1.28, v1.29, v1.30, v1.31, v1.32 | CCE clusters v1.32 are supported.
2.7.66 | v1.28, v1.29, v1.30, v1.31 | Fixed some issues.
2.7.63 | v1.28, v1.29, v1.30, v1.31 | Fixed the security vulnerabilities.
2.7.47 | v1.28, v1.29, v1.30, v1.31 | Added the NVIDIA 535.216.03 drivers that support xGPUs.
2.7.42 | v1.28, v1.29, v1.30, v1.31 | Added the NVIDIA 535.216.03 drivers that support xGPUs.
2.7.41 | v1.28, v1.29, v1.30, v1.31 | Added the NVIDIA 535.216.03 drivers that support xGPUs.
2.7.40 | v1.28, v1.29, v1.30, v1.31 | Integrated with DCGM-Exporter to observe the DCGM metrics of NVIDIA GPU nodes in clusters.
2.7.19 | v1.28, v1.29, v1.30 | Fixed the nvidia-container-toolkit CVE-2024-0132 container escape vulnerability.
2.7.13 | v1.28, v1.29, v1.30 |
2.6.4 | v1.28, v1.29 | Updated the isolation logic of GPU cards.
2.6.1 | v1.28, v1.29 | Upgraded the base images of the add-on.
2.5.6 | v1.28 | Fixed an issue that occurred during the installation of the driver.
2.5.4 | v1.28 | Clusters v1.28 are supported.
2.2.4 | v1.25, v1.27 | Fixed CVE-2025-23266 and CVE-2025-23267.
2.2.1 | v1.25, v1.27 | Fixed some issues.
2.1.67 | v1.25, v1.27 | Supported the nvidia-peermem module.
2.1.49 | v1.25, v1.27 | Added the NVIDIA 535.216.03 drivers that support xGPUs.
2.1.47 | v1.25, v1.27 | Added the NVIDIA 535.216.03 drivers that support xGPUs.
2.1.26 | v1.21, v1.23, v1.25, v1.27 | Added the NVIDIA 535.216.03 drivers that support xGPUs.
2.1.14 | v1.21, v1.23, v1.25, v1.27 | Fixed the nvidia-container-toolkit CVE-2024-0132 container escape vulnerability.
2.1.8 | v1.21, v1.23, v1.25, v1.27 | Fixed some issues.
2.0.69 | v1.21, v1.23, v1.25, v1.27 | Upgraded the base images of the add-on.
2.0.46 | v1.21, v1.23, v1.25, v1.27 |
2.0.18 | v1.21, v1.23, v1.25, v1.27 | Supported Huawei Cloud EulerOS 2.0.
1.2.28 | v1.19, v1.21, v1.23, v1.25 |
1.2.24 | v1.19, v1.21, v1.23, v1.25 |
1.2.20 | v1.19, v1.21, v1.23, v1.25 | Set the add-on alias to gpu.
1.2.17 | v1.15, v1.17, v1.19, v1.21, v1.23 | Added the nvidia-driver-install pod limit configuration.
1.2.15 | v1.15, v1.17, v1.19, v1.21, v1.23 | CCE clusters v1.23 are supported.
1.2.11 | v1.15, v1.17, v1.19, v1.21 | Supported EulerOS 2.10.
1.2.10 | v1.15, v1.17, v1.19, v1.21 | CentOS supports the GPU driver of the new version.
1.2.9 | v1.15, v1.17, v1.19, v1.21 | CCE clusters v1.21 are supported.
1.2.2 | v1.15, v1.17, v1.19 | Supported the new EulerOS kernel.
1.2.1 | v1.15, v1.17, v1.19 |
1.1.13 | v1.13, v1.15, v1.17 | Supported kernel-3.10.0-1127.19.1.el7.x86_64 for CentOS 7.6.
1.1.11 | v1.15, v1.17 |