
What Should I Do If the Evicted Pods Are Scheduled Back to the Original Node Due to Changes in the kubelet Parameters?

Symptom

When a node is under memory, disk, or PID pressure, it is marked with a system taint. If the kubelet configuration parameters of the node pool to which the node belongs are changed, or if the kubelet on the node is restarted, this taint may be temporarily removed. As a result, the node, from which some pods were previously evicted due to resource pressure, becomes schedulable again, and the evicted pods can be rescheduled to it. If the resource pressure on the node persists, the eviction process is triggered once more.
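To check whether the pressure taints are still present on your nodes after changing kubelet parameters or restarting the kubelet, you can query the Kubernetes API. The following is a minimal client-go sketch (not CCE-specific; it assumes a kubeconfig at the default ~/.kube/config path) that lists the pressure-related taints, such as node.kubernetes.io/memory-pressure, on every node:

```go
// Minimal sketch: list pressure-related system taints on all nodes.
// Assumes a kubeconfig at the default path (~/.kube/config).
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the local kubeconfig.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Print only the pressure-related taints (memory-pressure, disk-pressure, pid-pressure).
	for _, node := range nodes.Items {
		for _, taint := range node.Spec.Taints {
			if strings.HasSuffix(taint.Key, "-pressure") {
				fmt.Printf("node %s has taint %s:%s\n", node.Name, taint.Key, taint.Effect)
			}
		}
	}
}
```

If a node is still under pressure but no pressure taint is listed, the taint was removed by the restart and the symptom described above can occur.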

Possible Cause

The kubelet reports memory, disk, and PID pressure in its node status heartbeats based on the checks performed by its eviction manager. The heartbeat reporting and the eviction manager checks run in two separate goroutines. Normally, if an eviction manager check completes before the heartbeat is sent, the kubelet reports the pressure status accurately and keeps the taint in place. In the abnormal case where the heartbeat is sent before the eviction manager check runs (for example, right after the kubelet is restarted), the kubelet reports that there is no pressure and removes the previously applied taint.
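The following Go sketch is a simplified model of this race, not the actual kubelet source: a heartbeat goroutine reports whatever the shared pressure flag currently holds, while a separate "check" goroutine only sets the flag after its first cycle completes. If a heartbeat fires before that first check, it reports no pressure, which corresponds to the taint being removed.

```go
// Illustrative model of the race between heartbeat reporting and
// eviction-manager detection. This is a simplified sketch, not kubelet code.
package main

import (
	"fmt"
	"sync"
	"time"
)

type nodeStatus struct {
	mu             sync.Mutex
	memoryPressure bool
}

func main() {
	status := &nodeStatus{}

	// "Eviction manager" goroutine: the first check only completes after 150 ms.
	go func() {
		time.Sleep(150 * time.Millisecond)
		status.mu.Lock()
		status.memoryPressure = true
		status.mu.Unlock()
	}()

	// "Heartbeat" goroutine (modeled here in main): reports the last observed state.
	for i := 0; i < 3; i++ {
		status.mu.Lock()
		pressure := status.memoryPressure
		status.mu.Unlock()
		if pressure {
			fmt.Println("heartbeat: MemoryPressure=True, taint kept")
		} else {
			// Heartbeat ran before the first check: pressure is under-reported.
			fmt.Println("heartbeat: MemoryPressure=False, taint removed")
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```

In this model, the first one or two heartbeats report no pressure because the check has not yet run, which mirrors the window in which the kubelet removes the taint after a restart.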

Solution

No action is required. Once the eviction manager detects the resource pressure again, the taint is re-applied and the kubelet evicts the pods from the node again after a period of time.
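If you want to confirm that the kubelet has detected the pressure again, you can check the node's pressure conditions. The following minimal sketch (assuming the same client-go setup as above; "node-name" is a hypothetical placeholder for the affected node) prints them:

```go
// Minimal sketch: print the pressure-related conditions of one node.
// Assumes a kubeconfig at the default path (~/.kube/config).
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Replace "node-name" with the name of the affected node.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "node-name", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		switch cond.Type {
		case "MemoryPressure", "DiskPressure", "PIDPressure":
			fmt.Printf("%s: %s\n", cond.Type, cond.Status)
		}
	}
}
```

When one of these conditions reports True, the corresponding taint is re-applied and the eviction process will run again.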