Updated on 2024-09-19 GMT+08:00

gpu-device-plugin

Introduction

gpu-device-plugin is an add-on that supports GPUs in containers. If GPU nodes are used in the cluster, this add-on must be installed.

Constraints

  • The driver to be downloaded must be a .run file.
  • Only NVIDIA Tesla drivers are supported.
  • When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the address validity.
  • gpu-device-plugin only downloads the driver and runs its installation script. The add-on status does not indicate whether the driver itself was installed successfully.
  • If a node has multiple A100 or A800 GPUs, manually install the nvidia-fabricmanager service that matches your driver version. For details, see Installing the nvidia-fabricmanager Service.
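Because CCE does not validate the driver address, a quick pre-flight check from any machine with network access can catch a bad link before installation. The following is a minimal sketch; the URL is the example used later in this document, and curl is assumed to be available:

```shell
# Helper: the driver must be a .run file (a constraint listed above).
is_run_file() {
  case "$1" in
    *.run) return 0 ;;
    *)     return 1 ;;
  esac
}

# Example driver address; substitute your own.
DRIVER_URL="https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"

is_run_file "$DRIVER_URL" || echo "Error: the driver address must point to a .run file" >&2

# An unauthenticated HEAD request confirms the address is reachable from this network.
curl -sfI --max-time 10 "$DRIVER_URL" >/dev/null \
  && echo "Driver address is reachable" \
  || echo "Warning: could not reach the driver address" >&2
```

Run the same check from a GPU node (or a host in the same VPC) to confirm that the nodes themselves can reach the address.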

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Add-ons.
  2. Locate gpu-device-plugin in Add-ons Available and click Install.
  3. In the window that slides out from the right, configure the parameters as follows:

    • Add-on Specifications: Select Default or Custom as required.
    • Containers: This parameter can be configured only when Add-on Specifications is set to Custom.
    • NVIDIA Driver: Use a driver address provided by CCE or enter the address of your custom NVIDIA driver. All GPU nodes in the cluster use the same driver.

      GPU virtualization is supported only by driver versions 470.57.02, 470.103.01, 470.141.03, 510.39.01, and 510.47.03.

      You are advised to use a driver address provided by CCE to match the driver version.
      • If the download link is a public network address, for example, the NVIDIA official download address (https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run), associate EIPs with GPU nodes. For details about how to obtain the driver link, see Obtaining the Driver Link from Public Network.
      • If the download link is an OBS URL, there is no need to bind an EIP to each GPU node. For details about how to obtain the driver link, see Obtaining the Driver Link from OBS.
      • Ensure that the NVIDIA driver version matches the GPU node.
      • If the driver version is changed, restart the node to apply the change.
      • Use driver version 470 or later for Huawei Cloud EulerOS 2.0 or Ubuntu 22.04, which are built on Linux kernel 5.x.
      Figure 1 Installing gpu-device-plugin

  4. Click Install.

Verifying the Add-on

After the add-on is installed, run the nvidia-smi command on the GPU node and in a container that uses GPU resources to verify that the GPU and driver are available.

GPU node:
cd /usr/local/nvidia/bin && ./nvidia-smi

Container:

nvidia-smi

If GPU information is returned, the GPU is available and the add-on is successfully installed.
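To verify scheduling end to end rather than only the node-local driver, you can launch a throwaway pod that requests one GPU and runs nvidia-smi. The following is a sketch; the pod name and CUDA base image are examples, not values mandated by CCE:

```shell
# Create a one-off pod that requests one GPU from gpu-device-plugin.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check                                # example name
spec:
  restartPolicy: Never
  containers:
  - name: gpu-check
    image: nvidia/cuda:11.4.3-base-ubuntu20.04   # example CUDA base image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1                        # GPU resource advertised by the add-on
EOF

# After the pod completes, its log should contain the nvidia-smi GPU table.
kubectl logs gpu-check
kubectl delete pod gpu-check
```

If the pod stays Pending, check that the GPU node advertises the nvidia.com/gpu resource and that the add-on pod on that node is running.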

Obtaining the Driver Link from Public Network

  1. Log in to the CCE console.
  2. Click Create Node and select the GPU node to be created in the Specifications area. The GPU card model of the node is displayed in the lower part of the page.
  3. Log in to the NVIDIA website.
  4. On the NVIDIA Driver Downloads page, select the driver information, as shown in Figure 2. Operating System must be Linux 64-bit.

    Figure 2 Setting parameters

  5. After confirming the driver information, click SEARCH. A page showing the driver information is displayed, as shown in Figure 3. Click DOWNLOAD.

    Figure 3 Driver information

  6. Obtain the driver link in either of the following ways:

    • Method 1: As shown in Figure 4, find url=/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run in the browser address box. Then, prepend the download host to obtain the full driver link (https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run). With this method, you must associate EIPs with GPU nodes.
    • Method 2: As shown in Figure 4, click Agree & Download to download the driver. Then, upload the driver to OBS and record the OBS URL. With this method, you do not need to associate EIPs with GPU nodes.
      Figure 4 Obtaining the link
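Method 1 above amounts to prepending the NVIDIA download host to the url= path from the browser address box. A tiny helper illustrates the concatenation, using the host and path from this section as examples:

```shell
# Build the full download link from the relative url= path
# shown in the browser address box.
build_driver_link() {
  # $1 is the relative path, e.g. /tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run
  echo "https://us.download.nvidia.com${1}"
}

build_driver_link "/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run"
# → https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run
```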

Obtaining the Driver Link from OBS

  1. Upload the driver to OBS and set the driver file to public read. For details, see Uploading a File.

    When the node is restarted, the driver will be downloaded and installed again. Ensure that the OBS bucket link of the driver is valid.

  2. Log in to the OBS console. In the navigation pane, select Object Storage.
  3. In the bucket list, click the bucket name you want. The Overview page of the bucket is displayed.
  4. In the navigation pane, choose Objects.
  5. Locate the target object and choose More > Copy Object URL to copy the driver link.

    Figure 5 Obtaining the driver link

Installing the nvidia-fabricmanager Service

A100 and A800 GPUs support NVLink and NVSwitch. If you use a node with multiple such GPUs, install the nvidia-fabricmanager service that matches your driver version to enable interconnection between the GPUs. Otherwise, GPU pods may fail to run.

This section uses driver 470.103.01 as an example. Perform the following steps to install the matching nvidia-fabricmanager service, replacing the driver version as required.

  1. Log in to the target GPU node. An EIP must be bound to the node to download the nvidia-fabricmanager service.
  2. Install the nvidia-fabricmanager service corresponding to your driver version. You can download the installation package corresponding to your OS and driver version from the official website.

    • CentOS
      Take CentOS 7 as an example:
      driver_version=470.103.01
      wget https://developer.download.nvidia.cn/compute/cuda/repos/rhel7/x86_64/nvidia-fabric-manager-${driver_version}-1.x86_64.rpm
      rpm -ivh nvidia-fabric-manager-${driver_version}-1.x86_64.rpm
    • Other OSs such as Ubuntu
      Take Ubuntu 18.04 as an example:
      driver_version=470.103.01
      driver_version_main=$(echo $driver_version | awk -F '.' '{print $1}')
      wget https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu1804/x86_64/nvidia-fabricmanager-${driver_version_main}_${driver_version}-1_amd64.deb
      dpkg -i nvidia-fabricmanager-${driver_version_main}_${driver_version}-1_amd64.deb

  3. Start the nvidia-fabricmanager service.

    systemctl enable nvidia-fabricmanager
    systemctl start nvidia-fabricmanager

  4. Run the following command to check the nvidia-fabricmanager service status:

    systemctl status nvidia-fabricmanager
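As a final check, you can compare the fabric manager version against the driver version reported by nvidia-smi, since a version mismatch is a common cause of failures. This is a sketch that assumes nvidia-smi and the nv-fabricmanager binary are on the PATH and that nv-fabricmanager --version prints the version number at the end of its output line:

```shell
# Two version strings match only if they are identical.
versions_match() { [ "$1" = "$2" ]; }

# Driver version as reported by the GPU driver.
driver_ver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null | head -n 1)

# Fabric manager version, extracted from the last number in its version output
# (output format is an assumption; adjust the extraction if it differs).
fm_ver=$(nv-fabricmanager --version 2>/dev/null | grep -o '[0-9][0-9.]*' | tail -n 1)

if versions_match "$driver_ver" "$fm_ver"; then
  echo "nvidia-fabricmanager matches driver version $driver_ver"
else
  echo "Version mismatch: driver=$driver_ver fabricmanager=$fm_ver" >&2
fi
```

If the versions differ, reinstall the nvidia-fabricmanager package for your exact driver version and restart the service.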