Updated on 2024-08-16 GMT+08:00

Overview

By default, the Kubernetes scheduler automatically allocates workloads in an optimal manner, including distributing pods to nodes with sufficient resources. If you need to specify the node where a pod should be scheduled, you can set up a workload scheduling policy to define the scheduling requirements. For example, you can deploy frontend pods and backend pods together, schedule certain types of applications to specific nodes, and deploy different applications on different nodes.

Use the methods listed in the following table to select a pod scheduling policy in Kubernetes.

Table 1 Workload scheduling policies

• Node selector (YAML field: nodeSelector)
  A basic scheduling mode in which Kubernetes selects target nodes based on node labels: pods are scheduled only to nodes that carry the specified labels.
  Reference: Configuring Specified Node Scheduling (nodeSelector)
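As an illustration, a minimal Deployment using nodeSelector might look as follows. The label gpu: "true" is a hypothetical example; use labels that actually exist on your nodes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        gpu: "true"        # pods are scheduled only to nodes labeled gpu=true
      containers:
      - name: nginx
        image: nginx:latest
```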

• Node affinity (YAML field: nodeAffinity)
  An improved version of nodeSelector that supports both required and preferred rules (see Affinity Types) and filters eligible nodes using label selectors (see Label Selectors).
  NOTE: When both nodeSelector and nodeAffinity are specified, a pod can be scheduled onto a candidate node only if both conditions are met.
  Reference: Configuring Node Affinity Scheduling
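As a sketch, a required node affinity rule restricting a pod to two availability zones could be written as follows. The zone values az1 and az2 are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard constraint: the pod can only run on nodes in az1 or az2.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - az1
            - az2
  containers:
  - name: nginx
    image: nginx:latest
```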

• Workload affinity or anti-affinity (YAML field: podAffinity/podAntiAffinity)
  Uses label selectors (see Label Selectors) to identify the pods to be affine or anti-affine with, and schedules new pods to, or away from, the nodes or node groups running those pods. Both required and preferred rules (see Affinity Types) are supported.
  NOTE: Workload affinity and anti-affinity require a certain amount of computing time, which significantly slows down scheduling in large-scale clusters. Do not enable them in a cluster that contains hundreds of nodes.
  Reference: Configuring Workload Affinity or Anti-affinity Scheduling
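For example, a backend Deployment could be co-located with frontend pods while spreading its own replicas across nodes. The app labels here are hypothetical; adjust them to match your workloads.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      affinity:
        podAffinity:
          # Co-locate with frontend pods in the same zone.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: frontend
            topologyKey: topology.kubernetes.io/zone
        podAntiAffinity:
          # Spread backend replicas across different nodes.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: backend
        image: nginx:latest
```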

Affinity Types

Scheduling policies that use node affinity or workload affinity/anti-affinity can include both hard and soft constraints to meet complex scheduling requirements. Hard constraints must be met, while soft constraints should be met as much as possible.

Table 2 Affinity types

• Required (YAML field: requiredDuringSchedulingIgnoredDuringExecution)
  A hard constraint that must be met. The scheduler can schedule a pod only onto nodes that satisfy the rule.

• Preferred (YAML field: preferredDuringSchedulingIgnoredDuringExecution)
  A soft constraint. The scheduler tries to locate targets that satisfy the rule, but will still schedule the pod even if no matching target is found.
  When using preferred affinity, you can set a weight ranging from 1 to 100 for each rule. Nodes that satisfy higher-weight rules receive higher scores and are preferred during scheduling.

In the YAML fields above, requiredDuringScheduling and preferredDuringScheduling indicate that a label rule must be met, or should be met as far as possible, during scheduling. IgnoredDuringExecution indicates that changes to node labels after Kubernetes has scheduled the pod do not affect the running pod or cause it to be rescheduled. However, if kubelet on the node restarts, it re-evaluates the required node affinity rules when re-admitting the pod, so a pod that no longer matches may be rejected and rescheduled to another node.
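As a sketch, two preferred rules with different weights might look like this. The disktype=ssd label is a hypothetical example; a node matching both rules scores 100, one matching only the zone rule scores 20.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-preferred-affinity
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      # Soft constraints: nodes matching higher-weight rules score higher.
      - weight: 80
        preference:
          matchExpressions:
          - key: disktype            # hypothetical node label
            operator: In
            values:
            - ssd
      - weight: 20
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - az1
  containers:
  - name: nginx
    image: nginx:latest
```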

Label Selectors

When creating a scheduling policy, use the logical operators of a label selector to filter label values and identify the objects that require affinity or anti-affinity.

Table 3 Label selectors

• key
  Label key. Objects that meet the filter criteria must carry a label with this key, and the label value must satisfy the relationship defined by the logical operator (operator field) and the value list (values field).
  In the example below, matching objects must have a label whose key is topology.kubernetes.io/zone and whose value is either az1 or az2.

matchExpressions:
  - key: topology.kubernetes.io/zone
    operator: In
    values:
    - az1
    - az2

• operator
  Logical operator that defines how label values are filtered. Options:
  • In: The label value of the affinity or anti-affinity object is in the label value list (values field).
  • NotIn: The label value of the affinity or anti-affinity object is not in the label value list (values field).
  • Exists: The affinity or anti-affinity object has the specified label key. The label value list (values field) is not required.
  • DoesNotExist: The affinity or anti-affinity object does not have the specified label key. The label value list (values field) is not required.
  • Gt: (available only for node affinity) The label value of the node, interpreted as an integer, is greater than the listed value.
  • Lt: (available only for node affinity) The label value of the node, interpreted as an integer, is less than the listed value.

• values
  List of label values to match. Not required when the operator is Exists or DoesNotExist.
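For example, the Exists and Gt operators (node affinity only) can be combined in a single matchExpressions block as follows. The gpu and cpu-cores keys are hypothetical node labels.

```yaml
matchExpressions:
# Exists: the node only needs to carry the label key; no values list.
- key: gpu
  operator: Exists
# Gt: the node's label value, parsed as an integer, must be greater than 4.
- key: cpu-cores
  operator: Gt
  values:
  - "4"
```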