
Error "No CUDA runtime is found" Occurred When a Real-Time Service Is Deployed

Symptom

When a real-time service is deployed, the following error occurs: No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'.

Possible Causes

The log message "No CUDA runtime is found" indicates that the CUDA runtime cannot be detected in the service container. This typically happens when the service is not deployed on a GPU flavor, or when the CUDA version in the image does not match the installed dependencies (for example, MMCV).

Solution

Perform the following operations:

  1. Check whether a GPU flavor is selected for deploying the real-time service.
  2. Add os.system('nvcc -V') to customize_service.py to print the CUDA version of the image in the service logs (see the sketch after this list). For details about how to write customize_service.py, see Specifications for Writing Model Inference Code.
  3. Check whether the CUDA version matches the installed MMCV version.

    Select a GPU flavor if the model and inference script require GPUs.
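
The following is a minimal sketch of how the checks in steps 2 and 3 might be added to customize_service.py. It assumes the PyTorch inference template, where the service class inherits from PTServingBaseService; the class name, logging placement, and the try/except around the imports are illustrative assumptions, not part of the original procedure.

    # Sketch of diagnostic logging added to customize_service.py (assumption:
    # PyTorch-based inference template). Adjust the import if your image differs.
    import os
    import logging

    from model_service.pytorch_model_service import PTServingBaseService

    logger = logging.getLogger(__name__)


    class CustomizeService(PTServingBaseService):
        def __init__(self, model_name, model_path):
            super().__init__(model_name, model_path)

            # Step 2: print the CUDA toolkit version bundled in the image.
            # The output appears in the real-time service logs.
            os.system('nvcc -V')

            # Step 3: compare the CUDA runtime seen by PyTorch with the
            # installed MMCV version. The imports may fail if the flavor has
            # no GPU or MMCV is not installed, hence the try/except.
            try:
                import torch
                import mmcv
                logger.info('torch CUDA available: %s', torch.cuda.is_available())
                logger.info('torch built with CUDA: %s', torch.version.cuda)
                logger.info('mmcv version: %s', mmcv.__version__)
            except ImportError as err:
                logger.warning('Dependency check failed: %s', err)

After redeploying the service with this logging in place, compare the reported CUDA version against the MMCV compatibility matrix and reinstall MMCV (or switch to a matching base image) if the versions do not match.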