Updated on 2024-12-04 GMT+08:00

Overview

CCE supports different types of resource scheduling and task scheduling, improving application performance and overall cluster resource utilization. This section describes the main functions of CPU resource scheduling, GPU/NPU heterogeneous resource scheduling, and Volcano scheduling.

CPU Scheduling

CCE provides CPU policies to allocate complete physical CPU cores to applications, improving application performance and reducing application scheduling latency.


CPU policy

When many CPU-intensive pods are running on a node, workloads may be migrated to different CPU cores. Many workloads are not sensitive to this migration and thus work fine without any intervention. For CPU-sensitive applications, you can use the CPU policy provided by Kubernetes to allocate dedicated cores to applications, improving application performance and reducing application scheduling latency.

For details, see CPU Policy.
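As a sketch, a pod that benefits from the static CPU policy must belong to the Guaranteed QoS class, with equal integer CPU requests and limits (all names and values below are illustrative):

```yaml
# Illustrative pod spec: with the kubelet static CPU policy enabled,
# a Guaranteed pod whose CPU request and limit are the same integer
# is granted exclusive CPU cores.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-bound-app        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:alpine      # placeholder image
    resources:
      requests:
        cpu: "2"             # integer CPU, equal to the limit
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
```

Pods with fractional CPU values, or with requests that differ from limits, keep sharing the pooled CPUs as usual.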

Enhanced CPU policy

Based on the Kubernetes static core binding policy, the enhanced CPU policy (enhanced-static) supports burstable pods (whose CPU requests and limits must be positive integers) and allows them to preferentially use certain CPUs to ensure application stability.

For details, see Enhanced CPU Policy.
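A hedged sketch of a burstable pod eligible for the enhanced-static policy, following the constraint above: the CPU request and limit are both positive integers but differ, so the pod is Burstable rather than Guaranteed (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burstable-app        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:alpine      # placeholder image
    resources:
      requests:
        cpu: "1"             # positive integer, lower than the limit -> Burstable QoS
        memory: 512Mi
      limits:
        cpu: "2"             # positive integer
        memory: 1Gi
```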

GPU Scheduling

CCE schedules heterogeneous GPU resources in clusters and allows GPUs to be used in containers.


Default GPU scheduling in Kubernetes

This function allows you to specify the number of GPUs that a pod requests. The value can be less than 1 so that multiple pods can share a GPU.

For details, see Default GPU Scheduling in Kubernetes.
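For illustration, a pod requests GPUs through the extended resource name, and the value may be fractional so that several pods share one card (the 0.25 quota and all names below are assumed examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference        # hypothetical name
spec:
  containers:
  - name: inference
    image: tensorflow/tensorflow:latest-gpu   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: "0.25"   # fractional value: this pod uses part of one GPU
```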

NPU Scheduling

CCE schedules heterogeneous NPU resources in a cluster to quickly and efficiently perform inference and image recognition.


NPU scheduling

NPU scheduling allows you to specify the number of NPUs that a pod requests to provide NPU resources for workloads.

For details, see NPU Scheduling.
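As a sketch, an NPU is requested like any other extended resource. The exact resource name depends on the NPU model and device plugin in use, so `huawei.com/ascend-310` below is an assumption, as are the other names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: npu-inference        # hypothetical name
spec:
  containers:
  - name: inference
    image: ascend-infer:latest       # placeholder image
    resources:
      limits:
        huawei.com/ascend-310: "1"   # assumed resource name; varies by NPU model
```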

Volcano Scheduling

Volcano is a Kubernetes-based batch processing platform that supports machine learning, deep learning, bioinformatics, genomics, and other big data applications. It provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management.
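To make this concrete, a workload opts in to Volcano by setting the scheduler name. The minimal Volcano Job below is a sketch; the job name, queue, and images are illustrative:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: batch-train          # hypothetical name
spec:
  schedulerName: volcano     # hand the job to the Volcano scheduler
  minAvailable: 2            # gang scheduling: start only when 2 pods can run together
  queue: default
  tasks:
  - replicas: 2
    name: worker
    template:
      spec:
        containers:
        - name: worker
          image: busybox:latest     # placeholder image
          command: ["sh", "-c", "echo training && sleep 60"]
        restartPolicy: Never
```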


Resource utilization-based scheduling

Scheduling policies are optimized for computing resources to effectively reduce resource fragments on each node and maximize computing resource utilization.

For details, see Resource Usage-based Scheduling.
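For example, Volcano's scheduler configuration can enable the binpack plugin, which packs pods onto already-used nodes to reduce fragmentation. This snippet is a sketch of the scheduler configuration contents; the plugin weight is illustrative:

```yaml
actions: "enqueue, allocate, backfill"
tiers:
- plugins:
  - name: priority
  - name: gang
- plugins:
  - name: binpack          # favor nodes that are already partially used
    arguments:
      binpack.weight: 10   # illustrative weight: how strongly binpack influences node scoring
```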

Priority-based scheduling

Scheduling policies are customized based on service importance and priorities to guarantee the resources of key services.

For details, see Priority-based Scheduling.
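As an illustration, priorities come from standard Kubernetes PriorityClass objects that pods reference; the class name and value below are assumptions:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: key-service          # hypothetical name
value: 1000000               # higher value = higher scheduling priority
globalDefault: false
description: "Priority class for key services (illustrative)."
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app         # hypothetical name
spec:
  priorityClassName: key-service
  containers:
  - name: app
    image: nginx:alpine      # placeholder image
```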

AI performance-based scheduling

Scheduling policies are configured based on the nature and resource usage of AI tasks to increase the throughput of cluster services and improve service performance.

For details, see AI Performance-based Scheduling.

NUMA affinity scheduling

The default scheduler is not aware of NUMA topology. Volcano lifts this limitation by making the scheduler NUMA topology-aware so that:

  • Pods are not scheduled to nodes whose NUMA topology does not match their requirements.
  • Pods are scheduled to the node that best fits their NUMA topology.

For details, see NUMA Affinity Scheduling.
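As a sketch, the desired NUMA topology policy is conveyed to Volcano through a pod annotation; the annotation key and values follow the Volcano numa-aware plugin, while the pod itself is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: numa-sensitive-app   # hypothetical name
  annotations:
    volcano.sh/numa-topology-policy: single-numa-node  # or best-effort / restricted
spec:
  schedulerName: volcano     # NUMA affinity scheduling requires the Volcano scheduler
  containers:
  - name: app
    image: nginx:alpine      # placeholder image
    resources:
      requests:
        cpu: "4"
      limits:
        cpu: "4"
```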