Updated on 2024-03-11 GMT+08:00

CCE AI Suite (NVIDIA GPU)

Introduction

NVIDIA GPU is a device management add-on that supports GPUs in containers. To use GPU nodes in a cluster, this add-on must be installed.

Constraints

  • The driver to be downloaded must be a .run file.
  • Only NVIDIA Tesla drivers are supported, not GRID drivers.
  • When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the address validity.
  • The gpu-beta add-on only enables you to download the driver and execute the installation script. The add-on status indicates only how the add-on itself is running, not whether the driver has been installed successfully.
  • CCE does not guarantee compatibility between the GPU driver version and the CUDA library version of your application. Check the compatibility yourself.
  • If a custom OS image has a GPU driver preinstalled, CCE cannot ensure that the driver is compatible with other GPU components, such as the monitoring components used in CCE.
  • If the GPU driver version you use is not included in Supported GPU Drivers, the driver may be incompatible with the OS, instance type, or container runtime. As a result, driver installation may fail or the GPU add-on may be abnormal. If you use a custom GPU driver, verify its availability yourself.
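Because CCE does not validate the driver address, a quick pre-check before installing the add-on can save a failed rollout. A minimal sketch of such a check, assuming only the two constraints above (the helper name check_driver_url is ours; a complete check would also fetch the link, for example with curl -sIf, from a node with network access):

```shell
# Sketch: pre-check a custom GPU driver download link.
# check_driver_url is a hypothetical helper, not part of CCE.
check_driver_url() {
  url="$1"
  # The driver to be downloaded must be a .run file.
  case "$url" in
    *.run) ;;
    *) echo "not a .run file: $url"; return 1 ;;
  esac
  # The link must be an HTTP(S) address (public network or OBS URL).
  case "$url" in
    http://*|https://*) ;;
    *) echo "not an HTTP(S) URL: $url"; return 1 ;;
  esac
  echo "URL format looks valid: $url"
}

# Example link from this guide:
check_driver_url "https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
```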

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console. Choose Add-ons in the navigation pane, locate CCE AI Suite (NVIDIA GPU) on the right, and click Install.
  2. Configure the add-on parameters.

    • NVIDIA Driver: Enter the link for downloading the NVIDIA driver. All GPU nodes in the cluster will use this driver.
      • If the download link is a public network address, for example, https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run, bind an EIP to each GPU node. For details about how to obtain the driver link, see Obtaining the Driver Link from Public Network.
      • If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. For details about how to obtain the driver link, see Obtaining the Driver Link from OBS.
      • Ensure that the NVIDIA driver version matches the GPU node.
      • After the driver version is changed, restart the node for the change to take effect.
      • Use driver 470 or later for Huawei Cloud EulerOS 2.0, which is built on Linux kernel 5.x, and driver 515 or later for Ubuntu 22.04.
    • Driver Selection: If you do not want all GPU nodes in a cluster to use the same driver, CCE allows you to install a different GPU driver for each node pool.
      • The add-on installs the driver of the version specified for each node pool. The driver takes effect only on nodes newly added to the node pool.
      • After the driver version is updated, it takes effect on nodes newly added to the node pool. Existing nodes must be restarted for the change to take effect.
    • GPU virtualization (supported in 2.0.5 and later versions): Enable GPU virtualization to support partitioning and isolation of the compute power and GPU memory of a single GPU.
      Figure 1 Enabling GPU Virtualization

      If the Volcano add-on has not been installed in the cluster, GPU virtualization cannot be enabled. Click One-click installation to install it. To configure the Volcano add-on parameters during installation, click Custom Installation. For details, see Volcano Scheduler.

      If the Volcano add-on has been installed in the cluster but its version does not support GPU virtualization, click Upgrade to upgrade it. To configure the Volcano add-on parameters during the upgrade, click Custom Upgrade. For details, see Volcano Scheduler.

      After GPU virtualization is enabled, you can select Virtualization nodes are compatible with GPU sharing mode, that is, default GPU scheduling in Kubernetes is supported. This capability requires gpu-device-plugin 2.0.10 or later and Volcano 1.10.5 or later.

      • If you enable compatibility, an nvidia.com/gpu quota specified as a decimal fraction in a workload (for example, nvidia.com/gpu: 0.5) is served by GPU virtualization to implement GPU memory isolation. GPU memory is allocated to the container based on the quota, for example, 8 GiB (0.5 x 16 GiB). The allocated GPU memory must be an integer multiple of 128 MiB; otherwise, it is automatically rounded down to the nearest multiple. If nvidia.com/gpu resources were already in use in a workload before compatibility was enabled, those resources are provided by the entire GPU, not by GPU virtualization.
      • After compatibility is enabled, using the nvidia.com/gpu quota is equivalent to enabling GPU memory isolation. Workloads using the nvidia.com/gpu quota can share a GPU with workloads in GPU memory isolation mode, but not with workloads in compute and GPU memory isolation mode. In addition, the Constraints on GPU virtualization must be followed.
      • If compatibility is disabled, the nvidia.com/gpu quota specified in a workload affects only the scheduling result and does not enforce GPU memory isolation. That is, even if the nvidia.com/gpu quota is set to 0.5, the container can still see the complete GPU memory. In addition, workloads using nvidia.com/gpu resources and workloads using virtualized GPU memory cannot be scheduled to the same node.
      • If you deselect Virtualization nodes are compatible with GPU sharing mode, running workloads are not affected, but new workloads may fail to be scheduled. For example, if compatibility is disabled, workloads using nvidia.com/gpu resources remain in GPU memory isolation mode, so workloads in compute and GPU memory isolation mode cannot be scheduled to the same GPU. Delete the workloads using nvidia.com/gpu resources before rescheduling.

  3. Click Install.

    If the add-on is uninstalled, GPU pods newly scheduled to the nodes cannot run properly, but GPU pods already running on the nodes will not be affected.
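To illustrate the GPU sharing compatibility mode described in step 2, the sketch below writes a hypothetical Deployment that requests half a GPU through the default Kubernetes resource name. The workload name and image are placeholders, and a fractional nvidia.com/gpu quota is honored only when GPU virtualization and the compatibility option are enabled:

```shell
# Hypothetical example: a Deployment requesting half a GPU via the default
# Kubernetes resource name. On a 16 GiB card, 0.5 maps to 0.5 x 16 GiB =
# 8 GiB of GPU memory; the allocation must be an integer multiple of
# 128 MiB, otherwise it is rounded down.
cat > gpu-share-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-share-demo                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-share-demo
  template:
    metadata:
      labels:
        app: gpu-share-demo
    spec:
      containers:
      - name: cuda-app                  # placeholder container
        image: nvidia/cuda:11.4.3-base-ubuntu20.04   # placeholder image
        resources:
          limits:
            nvidia.com/gpu: "0.5"       # decimal quota -> GPU memory isolation
EOF
# Apply from a host with access to the cluster:
# kubectl apply -f gpu-share-demo.yaml
```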

Verifying the Add-on

After the add-on is installed, run the nvidia-smi command on the GPU node and in a container that uses GPU resources to verify the availability of the GPU device and driver.

  • GPU node:
    # If the add-on version is earlier than 2.0.0, run the following command:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
    
    # If the add-on version is 2.0.0 or later, the driver installation path has changed. Run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi
  • Container:
    cd /usr/local/nvidia/bin && ./nvidia-smi

If GPU information is returned, the device is available and the add-on has been installed.
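The two driver paths above can be probed in one step. A minimal sketch (find_nvidia_smi is a helper name used only here; extra directories to search can be passed as arguments):

```shell
# Locate nvidia-smi across the driver install paths used by different
# add-on versions, then print the first match. find_nvidia_smi is a
# hypothetical helper, not part of the add-on.
find_nvidia_smi() {
  for dir in /usr/local/nvidia/bin /opt/cloud/cce/nvidia/bin "$@"; do
    if [ -x "$dir/nvidia-smi" ]; then
      echo "$dir/nvidia-smi"
      return 0
    fi
  done
  echo "nvidia-smi not found" >&2
  return 1
}

# On a GPU node, run whichever copy exists:
# "$(find_nvidia_smi)"
```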

Supported GPU Drivers

  • The list of supported GPU drivers applies only to GPU add-ons of 1.2.28 and later versions.
  • If you want to use the latest GPU driver, upgrade your GPU add-on to the latest version.
Table 1 Supported GPU Drivers

  • Tesla T4 (CCE standard cluster; specifications: g6, pi2)
    • Huawei Cloud EulerOS 2.0 (GPU virtualization supported): 535.54.03, 510.47.03, 470.57.02
    • Ubuntu 22.04: 535.54.03, 470.141.03
    • CentOS Linux release 7.6: 535.54.03, 470.141.03
    • EulerOS release 2.9: 535.54.03, 470.141.03
    • EulerOS release 2.5: 535.54.03, 470.141.03
    • Ubuntu 18.04 (end of maintenance): 470.141.03
    • EulerOS release 2.3 (end of maintenance): 470.141.03
  • Volta V100 (CCE standard cluster; specifications: p2s, p2vs, p2v)
    • Huawei Cloud EulerOS 2.0 (GPU virtualization supported): 535.54.03, 510.47.03, 470.57.02
    • Ubuntu 22.04: 535.54.03, 470.141.03
    • CentOS Linux release 7.6: 535.54.03, 470.141.03
    • EulerOS release 2.9: 535.54.03, 470.141.03
    • EulerOS release 2.5: 535.54.03, 470.141.03
    • Ubuntu 18.04 (end of maintenance): 470.141.03
    • EulerOS release 2.3 (end of maintenance): 470.141.03

Obtaining the Driver Link from Public Network

  1. Log in to the CCE console.
  2. Click Create Node and select the GPU node to be created in the Specifications area. The GPU card model of the node is displayed in the lower part of the page.

    Figure 2 Viewing the GPU card model

  3. Visit https://www.nvidia.com/Download/Find.aspx?lang=en.
  4. Select the driver information on the NVIDIA Driver Downloads page, as shown in Figure 3. Operating System must be Linux 64-bit.

    Figure 3 Setting parameters

  5. After confirming the driver information, click SEARCH. A page showing the driver information is displayed, as shown in Figure 4. Click DOWNLOAD.

    Figure 4 Driver information

  6. Obtain the driver link in either of the following ways:

    • Method 1: As shown in Figure 5, find url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run in the browser address bar. Then, prepend https://us.download.nvidia.com to obtain the driver link https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run. With this method, you must bind an EIP to each GPU node.
    • Method 2: As shown in Figure 5, click AGREE & DOWNLOAD to download the driver. Then, upload the driver to OBS and record the OBS URL. With this method, you do not need to bind an EIP to GPU nodes.
      Figure 5 Obtaining the link
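Method 1 above can be scripted: given the url= path fragment from the address bar, prepend the download host. A small sketch (build_driver_link is a name used only for this example):

```shell
# Sketch: turn the "url=/tesla/.../xxx.run" fragment from the browser
# address bar into a full download link. build_driver_link is a
# hypothetical helper name.
build_driver_link() {
  path="$1"    # e.g. /tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run
  echo "https://us.download.nvidia.com${path}"
}

build_driver_link "/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
# -> https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run
```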

Obtaining the Driver Link from OBS

  1. Upload the driver to OBS and set the driver file's access permission to public read. For details, see Uploading an Object.

    When the node is restarted, the driver will be downloaded and installed again. Ensure that the OBS bucket link of the driver is valid.

  2. In the bucket list, click a bucket name, and then the Overview page of the bucket is displayed.
  3. In the navigation pane, choose Objects.
  4. Select the name of the target object and copy the driver link on the object details page.

    Figure 6 Copying an OBS link

Components

Table 2 GPU component

  • nvidia-driver-installer (resource type: DaemonSet): used for installing an NVIDIA driver on GPU nodes.

Change History

Table 3 Release history

  • 2.5.4 (cluster versions: v1.28)
    Clusters 1.28 are supported.
  • 2.0.46 (cluster versions: v1.21, v1.23, v1.25, v1.27)
    • Supported NVIDIA driver 535.
    • Non-root users can use xGPUs.
    • Optimized startup logic.
  • 2.0.18 (cluster versions: v1.21, v1.23, v1.25, v1.27)
    Supported Huawei Cloud EulerOS 2.0.
  • 1.2.28 (cluster versions: v1.19, v1.21, v1.23, v1.25)
    • Adapts to Ubuntu 22.04.
    • Optimizes the automatic mounting of the GPU driver directory.
  • 1.2.24 (cluster versions: v1.19, v1.21, v1.23, v1.25)
    • Enables the node pool to configure GPU driver versions.
    • Supports GPU metric collection.
  • 1.2.20 (cluster versions: v1.19, v1.21, v1.23, v1.25)
    Sets the add-on alias to gpu.
  • 1.2.17 (cluster versions: v1.15, v1.17, v1.19, v1.21, v1.23)
    Adds the nvidia-driver-install pod limits configuration.
  • 1.2.15 (cluster versions: v1.15, v1.17, v1.19, v1.21, v1.23)
    CCE clusters 1.23 are supported.
  • 1.2.11 (cluster versions: v1.15, v1.17, v1.19, v1.21)
    Supports EulerOS 2.10.
  • 1.2.10 (cluster versions: v1.15, v1.17, v1.19, v1.21)
    CentOS supports the GPU driver of the new version.
  • 1.2.9 (cluster versions: v1.15, v1.17, v1.19, v1.21)
    CCE clusters 1.21 are supported.
  • 1.2.2 (cluster versions: v1.15, v1.17, v1.19)
    Supports the new EulerOS kernel.
  • 1.2.1 (cluster versions: v1.15, v1.17, v1.19)
    • CCE clusters 1.19 are supported.
    • Adds taint toleration configuration.
  • 1.1.13 (cluster versions: v1.13, v1.15, v1.17)
    Supports kernel-3.10.0-1127.19.1.el7.x86_64 for CentOS 7.6.
  • 1.1.11 (cluster versions: v1.15, v1.17)
    • Allows users to customize driver addresses to download drivers.
    • Supports clusters v1.15 and v1.17.