Updated on 2026-02-05 GMT+08:00

CCE Node Problem Detector

Introduction

The CCE Node Problem Detector add-on (formerly NPD) monitors abnormal events on cluster nodes and can connect to a third-party monitoring platform. It runs as a daemon on each node, collects node issues from different daemons, and reports them to the API server. It can run as a DaemonSet or as a standalone daemon.

The CCE Node Problem Detector add-on is developed based on the open-source project node-problem-detector. For details, see node-problem-detector.

Notes and Constraints

  • When using CCE Node Problem Detector, do not format or partition node disks.
  • Each CCE Node Problem Detector process occupies 30m of CPU and 100 MiB of memory.
  • If the CCE Node Problem Detector version is 1.18.45 or later, the EulerOS version of the host machine must be 2.5 or later.

Permissions

To monitor kernel logs, the CCE Node Problem Detector add-on needs to read /dev/kmsg on the host. Therefore, privileged containers must be enabled. For details, see privileged.

In addition, CCE mitigates risks according to the principle of least privilege. The running CCE Node Problem Detector is granted only the following privileges:

  • cap_dac_read_search: permission to access /run/log/journal.
  • cap_sys_admin: permission to access /dev/kmsg.
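
You can check how these permissions are applied by inspecting the security context of the add-on's DaemonSet pods. The following command is a minimal sketch; it assumes the DaemonSet keeps the component name node-problem-detector and runs in the kube-system namespace, which may differ in your cluster.

  # Print the security context (privileged flag and added capabilities) of the
  # node-problem-detector container. The workload name and namespace are assumptions;
  # adjust them to match your cluster.
  kubectl -n kube-system get daemonset node-problem-detector \
    -o jsonpath='{.spec.template.spec.containers[0].securityContext}'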

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Add-ons. In the right pane, find the CCE Node Problem Detector add-on and click Install.
  3. In the Install Add-on sliding window, configure the specifications as needed.

    You can adjust the number of add-on pods and resource quotas as required. A single pod does not provide high availability: if an error occurs on the node where that pod runs, the add-on becomes unavailable.

  4. Configure the add-on parameters.

    Maximum Number of Isolated Nodes in a Fault: specifies the maximum number of nodes that can be isolated, which prevents an avalanche effect when a fault occurs on multiple nodes. You can configure this parameter as either a percentage or a quantity.

  5. Configure deployment policies for the add-on pods.

    • Scheduling policies do not take effect on the DaemonSet pods of the add-on.
    • When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run.
    Table 1 Configurations for add-on scheduling

    Parameter

    Description

    Multi-AZ Deployment

    • Preferred: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to different nodes in that AZ.
    • Equivalent mode: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.
    • Forcible: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. There can be at most one pod in each AZ. If nodes in a cluster are not in different AZs, some add-on pods cannot run properly. If a node is faulty, the add-on pods on it may fail to be migrated.

    Node Affinity

    • Not configured: Node affinity is disabled for the add-on.
    • Specify node: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
    • Specify node pool: Specify the node pool where the add-on pods are deployed. If you do not specify the node pools, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
    • Customize affinity: Enter the labels of the nodes where the add-on pods are to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.

      If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.

    Toleration

    Tolerations allow (but do not force) the add-on's Deployment pods to be scheduled to nodes with matching taints, and they control the eviction policy applied after the node running the Deployment is tainted.

    By default, the add-on tolerates the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, with a toleration time window of 60s.

    For details, see Configuring Tolerance Policies.

  6. Click Install.

Components

Table 2 Add-on components

Component

Description

Resource Type

node-problem-controller

Performs basic fault isolation based on fault detection results.

Deployment

node-problem-detector

Detects node faults.

DaemonSet
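
After the add-on is installed, you can check that both components are running. The following commands are a sketch; they assume the workloads keep the component names listed above and are deployed in the kube-system namespace, which may differ in your cluster.

  # List the fault-isolation Deployment and the fault-detection DaemonSet.
  kubectl -n kube-system get deployment node-problem-controller
  kubectl -n kube-system get daemonset node-problem-detector
  # Confirm that a detector pod is running on every node.
  kubectl -n kube-system get pod -o wide | grep node-problem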

CCE Node Problem Detector Check Items

Check items are supported only in the add-on 1.16.0 and later versions.

Check items cover events and statuses.

  • Event-related

    For event-related check items, when a problem occurs, CCE Node Problem Detector reports an event to the API server. The event type can be Normal (normal event) or Warning (abnormal event). A sample command for viewing these events is provided after Table 3.

    Table 3 Event-related check items

    Check Item

    Function

    Description

    OOMKilling

    Listen to the kernel logs and check whether there are any OOM events. If there is an OOM event, the component will report it.

    Typical scenario: The memory used by the process in the container exceeds the limit, triggering OOM and terminating the process.

    Warning event

    Listening object: /dev/kmsg

    Matching rule: "Killed process \\d+ (.+) total-vm:\\d+kB, anon-rss:\\d+kB, file-rss:\\d+kB.*"

    TaskHung

    Listen to the kernel logs and check whether there are any taskHung events. If there is a taskHung event, the component will report it.

    Typical scenario: Disk I/O suspension causes process suspension.

    Warning event

    Listening object: /dev/kmsg

    Matching rule: "task \\S+:\\w+ blocked for more than \\w+ seconds\\."

    ReadonlyFilesystem

    Listen to the kernel logs and check whether there is a Remount root filesystem read-only error in the system kernel.

    Typical scenario: A user detaches a data disk from a node by mistake on the ECS, and applications continuously write data to the mount point of the data disk. As a result, an I/O error occurs in the kernel and the disk is remounted as a read-only disk.

    NOTE:

    If a node's rootfs uses Device Mapper and the data disk is detached from the node, the thin pool will malfunction. This will affect CCE Node Problem Detector, and the add-on will not be able to detect node faults.

    Warning event

    Listening object: /dev/kmsg

    Matching rule: Remounting filesystem read-only
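
    For example, you can use kubectl to list the Warning events attached to nodes, which include the events reported by the add-on. This is a minimal sketch: the field selectors only narrow the output to node-scoped Warning events and do not filter by the reporting component.

      # List Warning events recorded for Node objects, such as the OOMKilling,
      # TaskHung, and ReadonlyFilesystem events reported by the add-on.
      kubectl get events --all-namespaces --field-selector involvedObject.kind=Node,type=Warning
      # Alternatively, check the Events section of a specific node.
      kubectl describe node <node-name>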

  • Status-related

    For status-related check items, when a problem occurs, CCE Node Problem Detector reports an event to the API server and changes the node status synchronously. This function can be used together with Node-problem-controller Fault Isolation to isolate nodes. A sample command for viewing node conditions is provided after Table 8.

    If the check period is not specified in the following check items, the default period is 30 seconds.

    Table 4 Checking system components

    Check Item

    Function

    Description

    Container network component error

    CNIProblem

    Check the status of the CNI components (container network components).

    None

    Container runtime component error

    CRIProblem

    Check the status of Docker and containerd of the CRI components (container runtime components).

    Check object: Docker or containerd

    Frequent restarts of kubelet

    FrequentKubeletRestart

    Periodically backtrack system logs to check whether kubelet restarts frequently.

    • Default threshold: 10 restarts within 10 minutes

      If kubelet restarts 10 times within 10 minutes, a fault alarm will be generated.

    • Listening object: logs in the /run/log/journal directory
    NOTE:

    Ubuntu and Huawei Cloud EulerOS 2.0 do not support these check items due to incompatible log formats.

    Frequent restarts of Docker

    FrequentDockerRestart

    Periodically backtrack system logs to check whether Docker restarts frequently.

    Frequent restarts of containerd

    FrequentContainerdRestart

    Periodically backtrack system logs to check whether containerd restarts frequently.

    kubelet error

    KubeletProblem

    Check the status of kubelet.

    None

    kube-proxy error

    KubeProxyProblem

    Check the status of kube-proxy.

    None

    Table 5 Checking system metrics

    Check Item

    Function

    Description

    Conntrack table full

    ConntrackFullProblem

    Check whether the conntrack table is full.

    • Default threshold: 90%
    • Usage: nf_conntrack_count
    • Maximum value: nf_conntrack_max

    Insufficient disk resources

    DiskProblem

    Check the usage of the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) on nodes.

    • Default threshold: 90%
    • Source:
      df -h

    Currently, additional data disks are not supported.

    Insufficient file handles

    FDProblem

    Check if the FD file handles are used up.

    • Default threshold: 90%
    • Usage: the first value in /proc/sys/fs/file-nr
    • Maximum value: the third value in /proc/sys/fs/file-nr

    Insufficient node memory

    MemoryProblem

    Check whether memory is used up.

    • Default threshold: 80%
    • Usage: MemTotal-MemAvailable in /proc/meminfo
    • Maximum value: MemTotal in /proc/meminfo

    Insufficient process resources

    PIDProblem

    Check whether PID process resources are exhausted.

    • Default threshold: 90%
    • Usage: the denominator of the fourth field in /proc/loadavg, which is the total number of existing processes and threads
    • Maximum value: smaller value between /proc/sys/kernel/pid_max and /proc/sys/kernel/threads-max.
    Table 6 Checking the storage

    Check Item

    Function

    Description

    Disk read-only

    DiskReadonly

    Periodically perform write tests on the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) on nodes to check the availability of key disks.

    Detection paths:

    • /mnt/paas/kubernetes/kubelet/
    • /var/lib/docker/
    • /var/lib/containerd/
    • /var/paas/sys/log/kubernetes

    The temporary file npd-disk-write-ping is generated in the detection path.

    Data disks other than the system disk and the CCE data disks (including the CRI and kubelet logical disks) cannot be checked currently.

    emptyDir storage pool error

    EmptyDirVolumeGroupStatusError

    Check whether the ephemeral volume groups on nodes are normal.

    Impact: Pods that depend on the storage pool cannot write data to the temporary volume. The temporary volume is remounted as a read-only file system by the kernel due to an I/O error.

    Typical scenario: When creating a node, a user configures two data disks as an ephemeral volume storage pool. Some data disks are deleted by mistake. As a result, the storage pool becomes abnormal.

    • Detection period: 30s
    • Source:
      vgs -o vg_name,vg_attr
    • Principle: Check whether the VG (storage pool) is in the P state. If yes, some PVs (data disks) are lost.
    • Joint scheduling: The scheduler can automatically identify a PV storage pool error and prevent pods that depend on the storage pool from being scheduled to the node.
    • Exceptional scenario: CCE Node Problem Detector cannot detect the loss of a VG (storage pool) caused by the loss of all its PVs (data disks). In this case, kubelet automatically isolates the node: it detects the VG loss and sets the corresponding resources in nodestatus.allocatable to 0, which prevents pods that depend on the storage pool from being scheduled to the node. Damage to a single PV cannot be detected by this check item, but it can be detected by the ReadonlyFilesystem check item.

    PV storage pool error

    LocalPvVolumeGroupStatusError

    Check the PV groups on nodes.

    Impact: Pods that depend on the storage pool cannot write data to the persistent volume. The persistent volume is remounted as a read-only file system by the kernel due to an I/O error.

    Typical scenario: When creating a node, a user configures two data disks as a persistent volume storage pool. Some data disks are deleted by mistake.

    Mount point error

    MountPointProblem

    Check the mount points on nodes.

    Definition: You cannot access a mount point by running the cd command.

    Typical scenario: A network file system (for example, obsfs or s3fs) is mounted to a node. When the connection becomes abnormal due to network exceptions or exceptions on the remote server, all processes that access the mount point are suspended. For example, during a cluster upgrade, kubelet is restarted and scans all mount points. If an abnormal mount point is detected, the upgrade fails.

    Alternatively, you can run the following command:

    for dir in $(df -h | grep -v "Mounted on" | awk '{print $NF}'); do cd "$dir"; done && echo "ok"

    Suspended disk I/O

    DiskHung

    Check whether I/O suspension occurs on the disks of nodes, that is, whether I/O read and write requests receive no response.

    Definition of I/O suspension: The system does not respond to disk I/O requests, and some processes are in the D state.

    Typical scenario: Disks cannot respond due to abnormal OS hard disk drivers or severe faults on the underlying network.

    • Check object: all data disks
    • Source:

      /proc/diskstat

      Alternatively, you can run the following command:
      iostat -xmt 1
    • Thresholds: (All following conditions must be met.)
      • Average usage (ioutil) ≥ 0.99
      • Average I/O queue length (avgqu-sz) ≥ 1
      • Average I/O transfer volume ≤ 1

        Average I/O transfer volume = Number of writes completed per second (iops, unit: w/s) + Amount of data written per second (ioth, unit: wMB/s)

      NOTE:

      On some OSs, the data in /proc/diskstat does not change during I/O suspension. In this case, the CPU I/O time percentage (iowait > 0.8) is used for the check instead.

    Slow disk I/O

    DiskSlow

    Check whether all nodes' disks have slow I/Os, that is, whether I/Os respond slowly.

    Typical scenario: EVS disks have slow I/Os due to network fluctuation.

    • Check object: all data disks
    • Source:

      /proc/diskstat

      Alternatively, you can run the following command:
      iostat -xmt 1
    • Default threshold:

      Average I/O latency (await) ≥ 5000 ms

    NOTE:

    This check item is invalid during I/O suspension, because the system does not respond to I/O requests and await is not updated.

    Table 7 Other check items

    Check Item

    Function

    Description

    Abnormal NTP

    NTPProblem

    Check whether the node clock synchronization service ntpd or chronyd is running properly and whether there is a system time drift.

    Default clock offset threshold: 8000 ms

    Process D error

    ProcessD

    Check whether there is any process in the D state on nodes.

    Default threshold: 10 abnormal processes detected in three consecutive checks

    Source:

    • /proc/{PID}/stat
    • Alternatively, you can run the ps aux command.

    Exceptional scenario: The ProcessD check item ignores the resident D processes (heartbeat and update) on which the SDI drivers on BMS nodes depend.

    Process Z error

    ProcessZ

    Check whether there is any process in the Z state on nodes.

    RDMA network interface error

    RDMAProblem

    Check the RDMA network interface status.

    NOTE:

    CCE Node Problem Detector 1.19.37 and later versions support RDMA network interface error detection and automatically write the RDMAProblem status to the node object. If the add-on is rolled back to an earlier version that does not support this function, the add-on cannot clear the status, so the marked status is retained.

    Default threshold: one RDMA network interface error detected in a single check

    Source:

    • Command: rdma link show

    ResolvConf error

    ResolvConfFileProblem

    Check whether the ResolvConf file is lost and whether it is valid.

    Definition: No upstream domain name resolution server (nameserver) is included.

    Check object: /etc/resolv.conf

    Existing scheduled event

    ScheduledEvent

    Check whether there is any live migration event on nodes. A live migration event is usually triggered by a hardware fault and is an automatic fault rectification method at the IaaS layer.

    Typical scenario: The host is faulty. For example, the fan is damaged or the disk has bad sectors. As a result, a live migration is triggered for VMs.

    Source:

    • http://169.254.169.254/meta-data/latest/events/scheduled

    This check item is an Alpha feature and is disabled by default.

    The spot price node is being reclaimed.

    SpotPriceNodeReclaimNotification

    Check whether any spot price node is interrupted and reclaimed due to preemption.

    Default check interval: 120 seconds

    Default fault handling policy: Evict some workloads on the nodes.

    The kubelet component has the following default check items, which have known bugs or defects. You can address them by upgrading the cluster or by using CCE Node Problem Detector.

    Table 8 Default kubelet check items

    Check Item

    Function

    Description

    Insufficient PIDs

    PIDPressure

    Check whether PIDs are sufficient.

    • Interval: 10 seconds
    • Threshold: 90%
    • Defect: In community version 1.23.1 and earlier, this check item becomes invalid when over 65,535 PIDs are used. For details, see issue 107107. In community version 1.24 and earlier, thread-max is not considered in this check item.

    Insufficient memory

    MemoryPressure

    Check whether the allocable memory for the containers is sufficient.

    • Interval: 10 seconds
    • Threshold: Maximum value – 100 MiB
    • Allocatable = Total memory on a node – Reserved memory on a node
    • Defect: This check item checks only the memory allocatable to containers and does not check the memory usage of the node itself.

    Insufficient disk space

    DiskPressure

    Check the disk usage and inode usage of the kubelet and Docker disks.

    • Interval: 10 seconds
    • Threshold: 90%
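
    For the status-related check items above (including the kubelet defaults), the results are reflected in the node conditions. As a minimal sketch, you can view them with the following command; the condition types you see depend on the add-on version and the enabled check items.

      # Print all conditions recorded on a node, including those written by the add-on
      # and the kubelet defaults such as PIDPressure, MemoryPressure, and DiskPressure.
      kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'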

Node-problem-controller Fault Isolation

Fault isolation is supported only by CCE Node Problem Detector of 1.16.0 and later.

By default, if multiple nodes become faulty, node-problem-controller (NPC) adds taints to up to 10% of the nodes. You can set npc.maxTaintedNode to increase the threshold.

The open-source NPD provides fault detection but not fault isolation. CCE enhances NPD with node-problem-controller (NPC), which is implemented based on the Kubernetes node controller. For faults reported by NPD, NPC automatically adds taints to faulty nodes to isolate them.

Table 9 Parameters

Parameter

Description

Default Value

npc.enable

Whether to enable NPC

This parameter is not supported in 1.18.0 or later versions.

true

npc.maxTaintedNode

The maximum number of nodes that NPC can add taints to when multiple nodes have the same fault. This limits the impact of the fault.

The value can be in int or percentage format.

10%

Value range:

  • In int format, the value ranges from 1 to infinity.
  • In percentage format, the value ranges from 1% to 100%. The product of this parameter and the number of cluster nodes is at least 1.

npc.nodeAffinity

Node affinity of the controller

N/A
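
To see whether NPC has isolated a node, check the node's taints. The following commands are a sketch; they print all taints on the nodes rather than filtering for a specific taint key, because the exact key added by NPC is not listed in this document.

  # Show the taints currently applied to each node. A node isolated by NPC carries
  # an additional taint; inspect its key to confirm the isolation reason.
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
  # Or inspect a single node.
  kubectl describe node <node-name> | grep -A 5 "Taints"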

Viewing CCE Node Problem Detector Events

Events reported by the CCE Node Problem Detector add-on can be queried on the Nodes tab.

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Nodes. In the right pane, click the Nodes tab, locate the row containing the target node, and click View Events in the Operation column.

    Figure 1 Viewing node events

Configuring CCE Node Problem Detector Metric Alarms

For CCE Node Problem Detector status-related check items, you can configure alarm rules to notify you of exceptions by SMS message or email. For details about how to create a custom alarm rule, see Configuring Alarms in Alarm Center.

To use CCE Node Problem Detector check items to configure alarm rules, you need to install the Cloud Native Cluster Monitoring add-on in the cluster and interconnect the add-on with an AOM instance.

Collecting Prometheus Metrics

The DaemonSet pods of CCE Node Problem Detector expose Prometheus metrics over port 19901. By default, the add-on pods are added with the annotation metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"19901","names":""}]'. You can build a Prometheus collector to identify and obtain CCE Node Problem Detector metrics from http://{{CCE-Node-Problem-Detector-pod-IP-address}}:{{CCE-Node-Problem-Detector-pod-port}}/metrics.

If the CCE Node Problem Detector add-on version is earlier than 1.16.5, the exposed port of Prometheus metrics is 20257.
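
For example, you can manually pull the metrics from one of the DaemonSet pods to verify that the endpoint is reachable. This sketch assumes the pods run in the kube-system namespace; replace <pod-ip> with an actual pod IP, and use port 20257 instead if the add-on version is earlier than 1.16.5.

  # Find the IP address of a node-problem-detector pod (the namespace is an assumption).
  kubectl -n kube-system get pod -o wide | grep node-problem-detector
  # Pull the Prometheus metrics from that pod and filter the add-on metrics.
  curl -s http://<pod-ip>:19901/metrics | grep -E "problem_(counter|gauge)"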

The metric data includes problem_counter and problem_gauge, as shown below.

# HELP problem_counter Number of times a specific type of problem has occurred.
# TYPE problem_counter counter
problem_counter{reason="DockerHung"} 0
problem_counter{reason="DockerStart"} 0
problem_counter{reason="EmptyDirVolumeGroupStatusError"} 0
...
# HELP problem_gauge Whether a specific type of problem is affecting the node or not.
# TYPE problem_gauge gauge
problem_gauge{reason="CNIIsDown",type="CNIProblem"} 0
problem_gauge{reason="CNIIsUp",type="CNIProblem"} 0
problem_gauge{reason="CRIIsDown",type="CRIProblem"} 0
problem_gauge{reason="CRIIsUp",type="CRIProblem"} 0
...

Helpful Links

After installing the CCE Node Problem Detector add-on, you can customize the node fault detection policies, such as filtering node detection scope or adjusting the fault threshold. For details, see Configuring Node Fault Detection Policies.

Release History

Table 10 CCE Node Problem Detector add-on

Add-on Version: 1.19.52
Supported Cluster Versions: v1.28, v1.29, v1.30, v1.31, v1.32, v1.33, v1.34
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.19.39
Supported Cluster Versions: v1.28, v1.29, v1.30, v1.31, v1.32, v1.33, v1.34
New Feature: CCE clusters v1.34 are supported.
Community Version: 0.8.10

Add-on Version: 1.19.37
Supported Cluster Versions: v1.27, v1.28, v1.29, v1.30, v1.31, v1.32, v1.33
New Feature: Supported RDMA network interface status detection.
Community Version: 0.8.10

Add-on Version: 1.19.33
Supported Cluster Versions: v1.27, v1.28, v1.29, v1.30, v1.31, v1.32, v1.33
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.19.29
Supported Cluster Versions: v1.27, v1.28, v1.29, v1.30, v1.31, v1.32, v1.33
New Feature: CCE clusters v1.33 are supported.
Community Version: 0.8.10

Add-on Version: 1.19.25
Supported Cluster Versions: v1.25, v1.27, v1.28, v1.29, v1.30, v1.31, v1.32
New Feature: CCE clusters v1.32 are supported.
Community Version: 0.8.10

Add-on Version: 1.19.20
Supported Cluster Versions: v1.25, v1.27, v1.28, v1.29, v1.30, v1.31
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.19.16
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28, v1.29, v1.30, v1.31
New Feature: CCE clusters v1.31 are supported.
Community Version: 0.8.10

Add-on Version: 1.19.11
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28, v1.29, v1.30
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.19.8
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28, v1.29, v1.30
New Features:
  • Compatible with a single system disk.
  • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.
  • Certain taints can be added to a spot price ECS before it is released so that the pods on the node can be evicted.
  • Synchronized the time zones used by the add-on and the nodes.
  • CCE clusters v1.30 are supported.
Community Version: 0.8.10

Add-on Version: 1.19.1
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28, v1.29
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.19.0
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.18.48
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28
New Feature: Fixed some issues.
Community Version: 0.8.10

Add-on Version: 1.18.46
Supported Cluster Versions: v1.21, v1.23, v1.25, v1.27, v1.28
New Feature: CCE clusters v1.28 are supported.
Community Version: 0.8.10

Add-on Version: 1.18.22
Supported Cluster Versions: v1.19, v1.21, v1.23, v1.25, v1.27
New Feature: None
Community Version: 0.8.10

Add-on Version: 1.18.14
Supported Cluster Versions: v1.19, v1.21, v1.23, v1.25
New Features:
  • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.
  • Certain taints can be added to a spot price ECS before it is released so that the pods on the node can be evicted.
  • Synchronized the time zones used by the add-on and the nodes.
Community Version: 0.8.10

Add-on Version: 1.18.10
Supported Cluster Versions: v1.19, v1.21, v1.23, v1.25
New Features:
  • Optimized the configuration page.
  • Added threshold configuration to the DiskSlow check item.
  • Added threshold configuration to the NTPProblem check item.
  • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.
  • Interruption of spot price ECSs can be detected, which allows the pods on these nodes to be evicted before the interruption.
Community Version: 0.8.10

Add-on Version: 1.17.4
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23, v1.25
New Feature: Optimized the DiskHung check item.
Community Version: 0.8.10

Add-on Version: 1.17.3
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23, v1.25
New Features:
  • The maximum number of nodes that can be tainted by NPC can be configured as a percentage.
  • Added the ProcessZ check item.
  • Added time deviation detection to the NTPProblem check item.
  • Fixed the issue where a process on BMS nodes stays in the D state.
Community Version: 0.8.10

Add-on Version: 1.17.2
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23, v1.25
New Features:
  • Added the DiskHung check item to detect disk I/O suspension.
  • Added the DiskSlow check item to detect slow disk I/O.
  • Added the ProcessD check item.
  • Added the MountPointProblem check item to check the health of mount points.
  • To avoid conflicts with Service ports, the default health check listening port is changed to 19900, and the default Prometheus metric exposure port is changed to 19901.
  • CCE clusters v1.25 are supported.
Community Version: 0.8.10

Add-on Version: 1.16.4
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23
New Feature: Added the beta check item ScheduledEvent to detect cold and live VM migration events caused by host machine exceptions using the metadata API. This check item is disabled by default.
Community Version: 0.8.10

Add-on Version: 1.16.3
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23
New Feature: Added the function of checking the ResolvConf configuration file.
Community Version: 0.8.10

Add-on Version: 1.16.1
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23
New Features:
  • Added node-problem-controller for basic fault isolation.
  • Added the PID, FD, disk, memory, temporary volume pool, and PV pool check items.
Community Version: 0.8.10

Add-on Version: 1.15.0
Supported Cluster Versions: v1.17, v1.19, v1.21, v1.23
New Features:
  • Comprehensively strengthened check items to prevent false positives.
  • Supported kernel checks and reporting of OOMKilled and TaskHung events.
Community Version: 0.8.10

Add-on Version: 1.14.11
Supported Cluster Versions: v1.17, v1.19, v1.21
New Feature: CCE clusters v1.21 are supported.
Community Version: 0.7.1

Add-on Version: 1.14.5
Supported Cluster Versions: v1.17, v1.19
New Feature: Fixed the issue where monitoring metrics cannot be obtained.
Community Version: 0.7.1

Add-on Version: 1.14.4
Supported Cluster Versions: v1.17, v1.19
New Feature: Supported containerd nodes.
Community Version: 0.7.1

Add-on Version: 1.14.2
Supported Cluster Versions: v1.17, v1.19
New Features:
  • CCE clusters v1.19 are supported.
  • Supported Ubuntu OS and secure containers.
Community Version: 0.7.1

Add-on Version: 1.13.8
Supported Cluster Versions: v1.15.11, v1.17
New Features:
  • Fixed the CNI health check issue in a container tunnel network.
  • Adjusted resource quotas.
Community Version: 0.7.1

Add-on Version: 1.13.6
Supported Cluster Versions: v1.15.11, v1.17
New Feature: Fixed the issue where zombie processes are not reclaimed.
Community Version: 0.7.1

Add-on Version: 1.13.5
Supported Cluster Versions: v1.15.11, v1.17
New Feature: Added taints and tolerations.
Community Version: 0.7.1

Add-on Version: 1.13.2
Supported Cluster Versions: v1.15.11, v1.17
New Feature: Added resource limits and enhanced the detection capability of the CNI add-on.
Community Version: 0.7.1