Updated on 2022-02-21 GMT+08:00

How Do Applications Schedule GPU Resources?

  • IEF allows multiple applications to share the same GPUs.
  • IEF allows a single application to use multiple GPUs.
  • GPU resources are scheduled based on GPU memory. Memory is pre-allocated when an application is scheduled, not allocated in real time.
  • If the GPU memory required by an application is smaller than the memory of a single GPU, GPU resources are scheduled in shared mode: the GPUs are sorted by remaining memory in ascending order, and the first GPU that meets the requirement is allocated to the application (see the sketch after this list). For example, GPUs A, B, and C each provide 8 GB of memory, and their remaining memory is 2 GB, 4 GB, and 6 GB, respectively. If application A requires 3 GB of GPU memory, GPU B is allocated to it.
  • If the GPU memory required by an application is greater than the memory of a single GPU, multiple GPUs are allocated to the application. The entire memory of the selected GPUs is allocated to the application, even if it exceeds the amount the application requires. For example, GPUs A, B, and C each provide 8 GB of memory, and their remaining memory is 8 GB, 8 GB, and 6 GB, respectively. If application B requires 14 GB of GPU memory, GPUs A and B are both allocated to it, and no other application can schedule the memory of GPU A or B.

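The following Go snippet is a minimal sketch of the allocation rules described above, replaying the two examples from the list. The gpu type and pickGPUs function are illustrative names only, not part of IEF, and the order in which whole GPUs are selected in multi-GPU mode (largest remaining memory first, which matches the example) is an assumption.

```go
// Minimal sketch of the GPU memory scheduling rules described above.
// The type and function names are illustrative, not part of IEF.
package main

import (
	"fmt"
	"sort"
)

type gpu struct {
	id        string
	remaining int // remaining memory in GB
}

// pickGPUs returns the GPUs allocated to an application requesting `request` GB:
//   - shared mode: the GPU with the smallest remaining memory that still
//     satisfies the request is chosen;
//   - multi-GPU mode: whole GPUs are taken until their combined memory covers
//     the request (selection order here is an assumption: largest remaining first).
func pickGPUs(gpus []gpu, request int) []gpu {
	// Shared mode: sort by remaining memory in ascending order and take the
	// first GPU that can hold the request.
	sorted := append([]gpu(nil), gpus...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].remaining < sorted[j].remaining })
	for _, g := range sorted {
		if g.remaining >= request {
			return []gpu{g}
		}
	}

	// Multi-GPU mode: no single GPU is large enough, so allocate whole GPUs
	// until the request is covered. All memory of the selected GPUs becomes
	// unavailable to other applications.
	var picked []gpu
	total := 0
	for i := len(sorted) - 1; i >= 0 && total < request; i-- {
		picked = append(picked, sorted[i])
		total += sorted[i].remaining
	}
	if total < request {
		return nil // not enough GPU memory on this node
	}
	return picked
}

func main() {
	// Shared-mode example: remaining 2 GB, 4 GB, 6 GB; request 3 GB -> GPU B.
	fmt.Println(pickGPUs([]gpu{{"A", 2}, {"B", 4}, {"C", 6}}, 3))

	// Multi-GPU example: remaining 8 GB, 8 GB, 6 GB; request 14 GB -> GPUs A and B.
	fmt.Println(pickGPUs([]gpu{{"A", 8}, {"B", 8}, {"C", 6}}, 14))
}
```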