
Upgrading the Driver Version of a GPU Node Using a Node Pool

To keep GPU nodes working properly, upgrade the NVIDIA driver when its version does not match the CUDA library your applications use. Using node pools to manage node NVIDIA driver versions is recommended: you can schedule applications to a node pool with a designated driver version, and you can upgrade drivers in batches, one node pool at a time.
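
For example, to keep a workload on the nodes of a specific node pool, you can add the node pool label on those nodes as a node selector. The following is a minimal sketch using kubectl; the label key, node pool name, and Deployment name are placeholders, so check the actual labels on your nodes before using them.

    # Inspect a GPU node to find the label that identifies its node pool
    # (the exact label key depends on your cluster; confirm it in the output).
    kubectl get node <node-name> --show-labels
    # Pin a Deployment to that node pool by adding the label as a nodeSelector.
    # <nodepool-label-key>, <nodepool-name>, and <deployment-name> are placeholders.
    kubectl patch deployment <deployment-name> -p \
      '{"spec":{"template":{"spec":{"nodeSelector":{"<nodepool-label-key>":"<nodepool-name>"}}}}}'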

When you upgrade the NVIDIA driver by node pool, the driver is reinstalled when each node restarts. To avoid interrupting services, make sure there are no running workloads on a node before you upgrade its driver.
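
One way to check for remaining workloads is to list the pods scheduled on the node, for example (a kubectl sketch; replace <node-name> with the actual node name):

    # List all pods currently scheduled on the node. Anything other than
    # DaemonSet-managed system pods should be evicted or migrated first.
    kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>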

Step 1: Specify the Driver Version of a Node Pool

  1. Log in to the target node and check its driver version. In this example, the driver version is 510.47.03.

    # If the add-on version is earlier than 2.0.0, run the following command:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
    # If the add-on version is 2.0.0 or later (the driver installation path has changed), run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi

  2. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Settings.
  3. Click the Heterogeneous Resources tab. In the Node Pool Configurations pane, select the target node pool and a driver version, or enter the link to a custom driver.

    In this example, the driver is upgraded to 535.54.03.

  4. Click Confirm Configuration.

Step 2: Restart the Nodes in the Node Pool

Before restarting a node, evict the pods on it. For details, see Draining a Node. Reserve enough GPU resources on other nodes so that evicted pods can be rescheduled during node drainage; otherwise, pod scheduling failures may affect services.
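
As a sketch of the drainage step from the command line (the console procedure is described in Draining a Node; <node-name> is a placeholder):

    # Cordon the node so no new pods are scheduled onto it, then evict its pods.
    # DaemonSet pods, such as the GPU device plugin, are skipped; emptyDir data is discarded.
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data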

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Nodes. Locate the target node pool and click View Node.

  3. Click the node name and navigate to the ECS page.

  4. In the upper right corner, click Restart.
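
    After clicking Restart, you can optionally watch the node return to the Ready state from the command line (a kubectl sketch; <node-name> is a placeholder):

    # Watch the node status; it should change back to Ready once the restart completes.
    # Press Ctrl+C to stop watching.
    kubectl get node <node-name> -w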

Step 3: Verify the Driver Upgrade

  1. After the node restarts, wait a few minutes for the driver to be installed.
  2. Log in to the node and check whether the driver on the node has been updated.

    # If the add-on version is earlier than 2.0.0, run the following command:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
    # If the add-on version is 2.0.0 or later (the driver installation path has changed), run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi

    Check the driver version in the command output. In this example, it has been updated to 535.54.03.

  3. Confirm that the node and its services are running correctly. Then, perform the same operations on the remaining nodes in the node pool individually.
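
    If you drained the node with kubectl in Step 2, note that draining also cordons the node. The following sketch uncordons it and checks that GPU resources are reported again; <node-name> is a placeholder, and nvidia.com/gpu is the resource name exposed by the NVIDIA device plugin.

    # Allow pods to be scheduled onto the node again after a drain.
    kubectl uncordon <node-name>
    # Confirm that the node reports allocatable GPU resources.
    kubectl describe node <node-name> | grep -i nvidia.com/gpu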