Updated on 2025-01-07 GMT+08:00

CCE AI Suite (NVIDIA GPU)

Introduction

CCE AI Suite (NVIDIA GPU) is a device management add-on that enables containers to use NVIDIA GPUs. This add-on must be installed before GPU nodes in a cluster can be used.

Notes and Constraints

  • The driver to be downloaded must be a .run file.
  • Only NVIDIA Tesla drivers are supported, not GRID drivers.
  • When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the address validity.
  • The gpu-beta add-on only downloads the driver and executes the installation script. The add-on status indicates only how the add-on itself is running, not whether the driver was installed successfully.
  • CCE does not guarantee the compatibility between the GPU driver version and the CUDA library version of your application. You need to check the compatibility by yourself.
  • If a custom OS image already has a GPU driver installed, CCE cannot ensure that the driver is compatible with other GPU components, such as the monitoring components used in CCE.
  • If the GPU driver version you use is not listed in Supported GPU Drivers, the driver may be incompatible with the OS, instance type, or container runtime. As a result, the driver installation may fail or the GPU add-on may be abnormal. If you use a customized GPU driver, verify its availability.
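Because CCE does not validate the driver download address, a quick pre-check can prevent a failed installation. A minimal sketch, using this document's example NVIDIA download link; substitute your own URL:

```shell
# Pre-check the driver download address before installing the add-on.
# The URL below is the example from this document; substitute your own link.
DRIVER_URL="https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"

# The add-on only accepts .run installers; fail early on anything else.
case "$DRIVER_URL" in
  *.run) echo "OK: URL points to a .run file" ;;
  *)     echo "ERROR: the driver must be a .run file" >&2; exit 1 ;;
esac

# HEAD request to confirm the address is reachable. A public address is only
# reachable from GPU nodes that have an EIP bound.
if curl -fsIL "$DRIVER_URL" >/dev/null 2>&1; then
  echo "OK: driver URL reachable"
else
  echo "WARNING: driver URL not reachable from this host" >&2
fi
```

Run the same check from a GPU node to confirm that the node network (EIP or OBS endpoint) can actually reach the link.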

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Add-ons, locate CCE AI Suite (NVIDIA GPU) on the right, and click Install.
  2. Configure the add-on parameters.

    Table 1 Add-on parameters

    Default Cluster Driver: All GPU nodes in a cluster use the same driver. Select an appropriate GPU driver version, or choose a custom driver link and enter the download link of an NVIDIA driver.
    NOTICE:
    • If the download link is a public network address, for example, https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run, bind an EIP to each GPU node. For details about how to obtain the driver link, see Obtaining the Driver Link from Public Network.
    • If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. For details about how to obtain the driver link, see Obtaining the Driver Link from OBS.
    • Ensure that the NVIDIA driver version matches the GPU node. For details about the version mapping, see Supported GPU Drivers.
    • If the driver version is changed, restart the node for the change to take effect.
    • Use driver 470 or later for Huawei Cloud EulerOS 2.0, which is built on Linux kernel 5.x, and driver 515 or later for Ubuntu 22.04.

    After the add-on is installed, you can configure GPU virtualization and node pool drivers on the Heterogeneous Resources tab in Settings.

  3. Click Install.

    If the add-on is uninstalled, GPU pods newly scheduled to the nodes cannot run properly, but GPU pods already running on the nodes will not be affected.

Verifying the Add-on

After the add-on is installed, run the nvidia-smi command on the GPU node and in a container that requests GPU resources to verify that the GPU device and driver are available.

  • GPU node:
    # For add-on versions earlier than 2.0.0, run the following commands:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
    
    # For add-on versions 2.0.0 and later (the driver installation path changed), run the following commands:
    cd /usr/local/nvidia/bin && ./nvidia-smi
  • Container:
    cd /usr/local/nvidia/bin && ./nvidia-smi

If GPU information is returned, the device is available and the add-on has been installed.
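The in-container check can also be driven end to end from outside the cluster. A minimal sketch, assuming kubectl access and a schedulable GPU node; the pod name and CUDA base image are illustrative:

```shell
# Requires kubectl; skip gracefully when it is not installed.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available; skipping"; exit 0; }

# Create a short-lived pod that requests one GPU through the device plugin's
# nvidia.com/gpu resource and runs nvidia-smi (pod name and image are examples).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.4.3-base-ubuntu20.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

# Wait for the pod to finish, then check its logs for the nvidia-smi table.
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/gpu-smoke-test --timeout=120s
kubectl logs gpu-smoke-test
kubectl delete pod gpu-smoke-test
```

If the pod stays Pending, the node has no allocatable nvidia.com/gpu resource, which usually means the driver or device plugin is not running.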

Supported GPU Drivers

  • The list of supported GPU drivers applies only to GPU add-ons of 1.2.28 and later versions.
  • If you want to use the latest GPU driver, upgrade your GPU add-on to the latest version.
Table 2 Supported GPU drivers

Tesla T4 (CCE standard cluster; specifications: g6, pi2)

  • Huawei Cloud EulerOS 2.0 (GPU virtualization supported): 535.54.03, 510.47.03, 470.57.02
  • Ubuntu 22.04.4: 535.161.08
  • Ubuntu 22.04.3: 535.54.03, 470.141.03
  • CentOS Linux release 7.6: 535.54.03, 470.141.03
  • EulerOS release 2.9: 535.54.03, 470.141.03
  • EulerOS release 2.5: 535.54.03, 470.141.03
  • Ubuntu 18.04 (end of maintenance): 470.141.03
  • EulerOS release 2.3 (end of maintenance): 470.141.03

Volta V100 (CCE standard cluster; specifications: p2s, p2vs, p2v)

  • Huawei Cloud EulerOS 2.0 (GPU virtualization supported): 535.54.03, 510.47.03, 470.57.02
  • Ubuntu 22.04.4: 535.161.08
  • Ubuntu 22.04.3: 535.54.03, 470.141.03
  • CentOS Linux release 7.6: 535.54.03, 470.141.03
  • EulerOS release 2.9: 535.54.03, 470.141.03
  • EulerOS release 2.5: 535.54.03, 470.141.03
  • Ubuntu 18.04 (end of maintenance): 470.141.03
  • EulerOS release 2.3 (end of maintenance): 470.141.03
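To check a node against the supported driver list above, print its GPU model and installed driver version with nvidia-smi. A minimal sketch; the path assumes add-on 2.0.0 or later:

```shell
# Print the GPU model and installed driver version so they can be compared
# with Table 2. For add-on versions earlier than 2.0.0, use
# /opt/cloud/cce/nvidia/bin instead of /usr/local/nvidia/bin.
export PATH=/usr/local/nvidia/bin:$PATH
command -v nvidia-smi >/dev/null 2>&1 || { echo "nvidia-smi not found; run this on a GPU node" >&2; exit 0; }
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
```

Output such as "Tesla T4, 535.54.03" can be matched directly against the GPU model and OS rows of the table.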

Obtaining the Driver Link from Public Network

  1. Log in to the CCE console.
  2. Click Create Node and select the GPU node to be created in the Specifications area. The GPU card model of the node is displayed in the lower part of the page.

    Figure 1 Viewing the GPU card model

  3. Visit https://www.nvidia.com/Download/Find.aspx?lang=en.
  4. Select the driver information on the NVIDIA Driver Downloads page, as shown in Figure 2. Operating System must be Linux 64-bit.

    Figure 2 Setting parameters

  5. After confirming the driver information, click SEARCH. A page showing the driver information is displayed, as shown in Figure 3. Click DOWNLOAD.

    Figure 3 Driver information

  6. Obtain the driver link in either of the following ways:

    • Method 1: As shown in Figure 4, find url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run in the browser address box. Then, prepend https://us.download.nvidia.com to obtain the driver link https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run. With this method, you must bind an EIP to each GPU node.
    • Method 2: As shown in Figure 4, click AGREE & DOWNLOAD to download the driver. Then, upload the driver to OBS and record the OBS URL. With this method, you do not need to bind an EIP to GPU nodes.
      Figure 4 Obtaining the link
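Method 1 amounts to simple string concatenation. A minimal sketch using the fragment from this document's example:

```shell
# Build the full driver link from the url= fragment found in the browser
# address box (Method 1). The fragment below is this document's example.
NVIDIA_HOST="https://us.download.nvidia.com"
URL_FRAGMENT="/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
DRIVER_URL="${NVIDIA_HOST}${URL_FRAGMENT}"
echo "$DRIVER_URL"
# https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run
```

The resulting link is what you enter as the custom driver link when installing the add-on.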

Obtaining the Driver Link from OBS

  1. Upload the driver to OBS and set the driver file to public read. For details, see Uploading an Object.

    When the node is restarted, the driver will be downloaded and installed again. Ensure that the OBS bucket link of the driver is valid.

  2. In the bucket list, click the bucket name to go to its Overview page.
  3. In the navigation pane, choose Objects.
  4. Select the name of the target object and copy the driver link on the object details page.

    Figure 5 Copying an OBS link
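Because nodes re-download the driver from OBS on every restart, it is worth confirming that the object really is publicly readable. A minimal sketch; the bucket, region, and object names are hypothetical:

```shell
# Unauthenticated HEAD request against the OBS object URL. If the public-read
# setting is missing, OBS returns an error status and curl -f fails.
OBS_URL="https://your-bucket.obs.your-region.myhuaweicloud.com/NVIDIA-Linux-x86_64-470.103.01.run"
if curl -fsI "$OBS_URL" >/dev/null 2>&1; then
  echo "OK: OBS object is publicly readable"
else
  echo "WARNING: object not readable anonymously; check the public-read setting" >&2
fi
```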

Components

Table 3 Add-on components

  • nvidia-driver-installer (DaemonSet): A workload for installing the NVIDIA GPU driver on nodes. It uses resources only during the installation; once the installation is finished, no resources are occupied.
  • nvidia-gpu-device-plugin (DaemonSet): A Kubernetes device plugin that provides NVIDIA GPU heterogeneous compute for containers.
  • nvidia-operator (Deployment): A component that provides NVIDIA GPU node management capabilities for clusters.

GPU Metrics

Table 4 Basic GPU monitoring metrics (metric, monitoring level, and description)

Utilization

  • cce_gpu_utilization (GPU cards): GPU compute usage
  • cce_gpu_memory_utilization (GPU cards): GPU memory usage
  • cce_gpu_encoder_utilization (GPU cards): GPU encoding usage
  • cce_gpu_decoder_utilization (GPU cards): GPU decoding usage
  • cce_gpu_utilization_process (GPU processes): GPU compute usage of each process
  • cce_gpu_memory_utilization_process (GPU processes): GPU memory usage of each process
  • cce_gpu_encoder_utilization_process (GPU processes): GPU encoding usage of each process
  • cce_gpu_decoder_utilization_process (GPU processes): GPU decoding usage of each process

Memory

  • cce_gpu_memory_used (GPU cards): Used GPU memory
  • cce_gpu_memory_total (GPU cards): Total GPU memory
  • cce_gpu_memory_free (GPU cards): Free GPU memory
  • cce_gpu_bar1_memory_used (GPU cards): Used GPU BAR1 memory
  • cce_gpu_bar1_memory_total (GPU cards): Total GPU BAR1 memory

Frequency

  • cce_gpu_clock (GPU cards): GPU clock frequency
  • cce_gpu_memory_clock (GPU cards): GPU memory frequency
  • cce_gpu_graphics_clock (GPU cards): GPU graphics frequency
  • cce_gpu_video_clock (GPU cards): GPU video processor frequency

Physical status

  • cce_gpu_temperature (GPU cards): GPU temperature
  • cce_gpu_power_usage (GPU cards): GPU power usage
  • cce_gpu_total_energy_consumption (GPU cards): Total GPU energy consumption

Bandwidth

  • cce_gpu_pcie_link_bandwidth (GPU cards): GPU PCIe bandwidth
  • cce_gpu_nvlink_bandwidth (GPU cards): GPU NVLink bandwidth
  • cce_gpu_pcie_throughput_rx (GPU cards): GPU PCIe RX bandwidth
  • cce_gpu_pcie_throughput_tx (GPU cards): GPU PCIe TX bandwidth
  • cce_gpu_nvlink_utilization_counter_rx (GPU cards): GPU NVLink RX bandwidth
  • cce_gpu_nvlink_utilization_counter_tx (GPU cards): GPU NVLink TX bandwidth

Memory isolation pages

  • cce_gpu_retired_pages_sbe (GPU cards): Number of isolated GPU memory pages with single-bit errors
  • cce_gpu_retired_pages_dbe (GPU cards): Number of isolated GPU memory pages with double-bit errors

Table 5 xGPU metrics (metric, monitoring level, and description)

  • xgpu_memory_total (GPU processes): Total xGPU memory
  • xgpu_memory_used (GPU processes): Used xGPU memory
  • xgpu_core_percentage_total (GPU processes): Total xGPU cores
  • xgpu_core_percentage_used (GPU processes): Used xGPU cores
  • gpu_schedule_policy (GPU cards): xGPU scheduling policy. Options:
    • 0: xGPU memory is isolated and cores are shared.
    • 1: Both xGPU memory and cores are isolated.
    • 2: Default mode, indicating that the current card is not used by any xGPU device for allocation.
  • xgpu_device_health (GPU cards): Health status of an xGPU device. Options:
    • 0: The xGPU device is healthy.
    • 1: The xGPU device is not healthy.
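If these metrics are scraped into a Prometheus-compatible backend, they can be queried over its standard HTTP API. A minimal sketch; the Prometheus address is hypothetical:

```shell
# Query per-card GPU memory usage through the standard Prometheus HTTP API.
# PROM_URL is a placeholder; point it at your monitoring backend.
PROM_URL="http://prometheus.example.internal:9090"
curl -fsG "${PROM_URL}/api/v1/query" \
  --data-urlencode 'query=cce_gpu_memory_used' \
  || echo "WARNING: Prometheus endpoint not reachable" >&2
```

The same pattern works for any metric in Tables 4 and 5, for example `query=xgpu_device_health` to watch xGPU health.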

Change History

Table 6 Release history (add-on version, supported cluster versions, and new features)

  • 2.7.19 (clusters v1.28, v1.29, v1.30): Fixed the nvidia-container-toolkit CVE-2024-0132 container escape vulnerability.
  • 2.7.13 (clusters v1.28, v1.29, v1.30): Supported xGPU configuration by node pool; supported GPU rendering; supported clusters 1.30.
  • 2.6.4 (clusters v1.28, v1.29): Updated the isolation logic of GPU cards.
  • 2.6.1 (clusters v1.28, v1.29): Upgraded the base images of the add-on.
  • 2.5.6 (cluster v1.28): Fixed an issue that occurred during driver installation.
  • 2.5.4 (cluster v1.28): Supported clusters 1.28.
  • 2.1.14 (clusters v1.21, v1.23, v1.25, v1.27): Fixed the nvidia-container-toolkit CVE-2024-0132 container escape vulnerability.
  • 2.1.8 (clusters v1.21, v1.23, v1.25, v1.27): Fixed some issues.
  • 2.0.69 (clusters v1.21, v1.23, v1.25, v1.27): Upgraded the base images of the add-on.
  • 2.0.46 (clusters v1.21, v1.23, v1.25, v1.27): Supported NVIDIA driver 535; allowed non-root users to use xGPUs; optimized startup logic.
  • 2.0.18 (clusters v1.21, v1.23, v1.25, v1.27): Supported Huawei Cloud EulerOS 2.0.
  • 1.2.28 (clusters v1.19, v1.21, v1.23, v1.25): Adapted to Ubuntu 22.04; optimized the automatic mounting of the GPU driver directory.
  • 1.2.24 (clusters v1.19, v1.21, v1.23, v1.25): Enabled node pools to configure GPU driver versions; supported GPU metric collection.
  • 1.2.20 (clusters v1.19, v1.21, v1.23, v1.25): Set the add-on alias to gpu.
  • 1.2.17 (clusters v1.15, v1.17, v1.19, v1.21, v1.23): Added the nvidia-driver-install pod limits configuration.
  • 1.2.15 (clusters v1.15, v1.17, v1.19, v1.21, v1.23): Supported CCE clusters 1.23.
  • 1.2.11 (clusters v1.15, v1.17, v1.19, v1.21): Supported EulerOS 2.10.
  • 1.2.10 (clusters v1.15, v1.17, v1.19, v1.21): Supported the new GPU driver version on CentOS.
  • 1.2.9 (clusters v1.15, v1.17, v1.19, v1.21): Supported CCE clusters 1.21.
  • 1.2.2 (clusters v1.15, v1.17, v1.19): Supported the new EulerOS kernel.
  • 1.2.1 (clusters v1.15, v1.17, v1.19): Supported CCE clusters 1.19; added taint toleration configuration.
  • 1.1.13 (clusters v1.13, v1.15, v1.17): Supported kernel-3.10.0-1127.19.1.el7.x86_64 for CentOS 7.6.
  • 1.1.11 (clusters v1.15, v1.17): Allowed users to customize driver addresses to download drivers; supported clusters 1.15 and 1.17.