Updated on 2024-11-11 GMT+08:00

CCE Node Problem Detector

Introduction

CCE Node Problem Detector (NPD) is an add-on that monitors abnormal events on cluster nodes and works with third-party monitoring platforms. It is a daemon that runs on each node, collects node issues from different daemons, and reports them to the API server. NPD can run as a DaemonSet or standalone.

For more information, see node-problem-detector.

Notes and Constraints

  • When using this add-on, do not format or partition node disks.
  • Each NPD process occupies 30m of CPU and 100 MiB of memory.
  • If the NPD version is 1.18.45 or later, the EulerOS version of the host machine must be 2.5 or later.

Permissions

To monitor kernel logs, the NPD add-on needs to read the host's /dev/kmsg. Therefore, the privileged mode must be enabled. For details, see privileged.

In addition, CCE mitigates risks according to the least privilege principle. The running NPD is granted only the following privileges (see the example after the list):

  • cap_dac_read_search: permission to access /run/log/journal.
  • cap_sys_admin: permission to access /dev/kmsg.
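
If you want to confirm these settings on a running cluster, you can inspect the security context of the NPD DaemonSet. This is a minimal sketch that assumes the DaemonSet keeps the component name node-problem-detector and runs in the kube-system namespace; adjust both to match your cluster.

# Print the container security context (privileged flag and added capabilities) of the NPD DaemonSet
kubectl -n kube-system get daemonset node-problem-detector -o jsonpath='{.spec.template.spec.containers[0].securityContext}'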

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Add-ons, locate CCE Node Problem Detector on the right, and click Install.
  2. On the Install Add-on page, configure the specifications as needed.

    You can adjust the number of add-on instances and resource quotas as required. High availability is not possible with a single pod. If an error occurs on the node where the add-on instance runs, the add-on will fail.

  3. Configure the add-on parameters.

    Maximum Number of Isolated Nodes in a Fault: specifies the maximum number of nodes that can be isolated when a fault occurs on multiple nodes, which prevents an avalanche effect. You can set this parameter as a percentage or a quantity.

  4. Configure deployment policies for the add-on pods.

    • Scheduling policies do not take effect on add-on instances of the DaemonSet type.
    • When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run.
    Table 1 Configurations for add-on scheduling

    Multi AZ

    • Preferred: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to different nodes in that AZ.
    • Equivalent mode: Deployment pods of the add-on are evenly scheduled across the AZs in the cluster. If a new AZ is added, you are advised to increase the number of add-on pods for cross-AZ HA deployment. With equivalent multi-AZ deployment, the difference between the numbers of add-on pods in different AZs is at most 1. If resources in one AZ are insufficient, pods cannot be scheduled to that AZ.
    • Required: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. There can be at most one pod in each AZ. If nodes in a cluster are not in different AZs, some add-on pods cannot run properly. If a node is faulty, add-on pods on it may fail to be migrated.

    Node Affinity

    • Not configured: Node affinity is disabled for the add-on.
    • Node Affinity: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.
    • Specified Node Pool Scheduling: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.
    • Custom Policies: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.

      If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.

    Toleration

    Tolerations allow (but do not force) the add-on Deployment pods to be scheduled to nodes with matching taints, and they control the eviction policy applied after the node where a pod runs is tainted.

    By default, the add-on tolerates the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, each with a toleration window of 60s.

    For details, see Configuring Tolerance Policies.

  5. Click Install.

Components

Table 2 Add-on components

  • node-problem-controller (Deployment): Isolates faults based on fault detection results.
  • node-problem-detector (DaemonSet): Detects node faults.

NPD Check Items

Check items are supported only in 1.16.0 and later versions.

Check items cover events and statuses.

  • Event-related

    For event-related check items, when a problem occurs, NPD reports an event to the API server. The event type can be Normal (normal event) or Warning (abnormal event).

    Table 3 Event-related check items

    OOMKilling

    Function: Listen to the kernel logs and check whether OOM events occur and are reported.
    Typical scenario: When the memory usage of a process in a container exceeds the limit, OOM is triggered and the process is terminated.
    Description:
    • Warning event
    • Listening object: /dev/kmsg
    • Matching rule: "Killed process \\d+ (.+) total-vm:\\d+kB, anon-rss:\\d+kB, file-rss:\\d+kB.*"

    TaskHung

    Function: Listen to the kernel logs and check whether taskHung events occur and are reported.
    Typical scenario: Disk I/O suspension causes process suspension.
    Description:
    • Warning event
    • Listening object: /dev/kmsg
    • Matching rule: "task \\S+:\\w+ blocked for more than \\w+ seconds\\."

    ReadonlyFilesystem

    Function: Listen to the kernel logs and check whether the "Remount root filesystem read-only" error occurs in the system kernel.
    Typical scenario: A user detaches a data disk from a node by mistake on the ECS console while applications keep writing data to the mount point of that disk. As a result, an I/O error occurs in the kernel and the disk is remounted as read-only.
    NOTE: If the rootfs of node pods is of the device mapper type, detaching a data disk causes an error in the thin pool. This affects NPD, and NPD will not be able to detect node faults.
    Description:
    • Warning event
    • Listening object: /dev/kmsg
    • Matching rule: Remounting filesystem read-only
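
    If you want to see what these matchers react to, you can search the kernel ring buffer for the same patterns. This is only an illustrative manual check run on the node (usually as root); NPD itself listens to /dev/kmsg continuously.

    # Look for recent kernel messages that would trigger OOMKilling, TaskHung, or ReadonlyFilesystem events
    dmesg -T | grep -E "Killed process|blocked for more than|Remounting filesystem read-only"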

  • Status-related

    For status-related check items, when a problem occurs, NPD reports an event to the API server and changes the node status synchronously. This function can be used together with Node-problem-controller fault isolation to isolate nodes.

    If the check period is not specified in the following check items, the default period is 30 seconds.

    Table 4 Checking system components

    CNIProblem (container network component error)

    Function: Check the status of the CNI components (container network components).
    Description: None

    CRIProblem (container runtime component error)

    Function: Check the status of Docker and containerd of the CRI components (container runtime components).
    Description: Check object: Docker or containerd

    FrequentKubeletRestart (frequent restarts of kubelet)

    Function: Periodically backtrack system logs to check whether the key component kubelet restarts frequently.
    Description:
    • Default threshold: 10 restarts within 10 minutes. If kubelet restarts 10 times within 10 minutes, it indicates frequent restarts, and a fault alarm is generated.
    • Listening object: logs in the /run/log/journal directory
    NOTE: The Ubuntu and HCE 2.0 OSs do not support these frequent restart check items due to incompatible log formats.

    FrequentDockerRestart (frequent restarts of Docker)

    Function: Periodically backtrack system logs to check whether the container runtime Docker restarts frequently.

    FrequentContainerdRestart (frequent restarts of containerd)

    Function: Periodically backtrack system logs to check whether the container runtime containerd restarts frequently.

    KubeletProblem (kubelet error)

    Function: Check the status of the key component kubelet.
    Description: None

    KubeProxyProblem (kube-proxy error)

    Function: Check the status of the key component kube-proxy.
    Description: None
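
    As a rough manual counterpart to these component checks, you can query the service states and recent restarts directly on a node. This is only a sketch: the unit names and log locations can differ between node images, and the journal-based command applies only to journald OSs.

    # Spot-check the runtime and kubelet services (unit names may differ on your node image)
    systemctl is-active kubelet containerd docker 2>/dev/null
    # Roughly count kubelet start records in the last 10 minutes from the journal
    journalctl -u kubelet --since "10 minutes ago" 2>/dev/null | grep -ci "started"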

    Table 5 Checking system metrics

    ConntrackFullProblem (conntrack table full)

    Function: Check whether the conntrack table is full.
    Description:
    • Default threshold: 90%
    • Usage: nf_conntrack_count
    • Maximum value: nf_conntrack_max

    DiskProblem (insufficient disk resources)

    Function: Check the usage of the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) on the node.
    Description:
    • Default threshold: 90%
    • Source: df -h
    Currently, additional data disks are not supported.

    FDProblem (insufficient file handles)

    Function: Check whether the FD file handles are used up.
    Description:
    • Default threshold: 90%
    • Usage: the first value in /proc/sys/fs/file-nr
    • Maximum value: the third value in /proc/sys/fs/file-nr

    MemoryProblem (insufficient node memory)

    Function: Check whether memory is used up.
    Description:
    • Default threshold: 80%
    • Usage: MemTotal-MemAvailable in /proc/meminfo
    • Maximum value: MemTotal in /proc/meminfo

    PIDProblem (insufficient process resources)

    Function: Check whether PID process resources are exhausted.
    Description:
    • Default threshold: 90%
    • Usage: nr_threads in /proc/loadavg
    • Maximum value: the smaller value between /proc/sys/kernel/pid_max and /proc/sys/kernel/threads-max
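
    The ratios above can be reproduced manually from the same sources. The following sketch reads the files listed in the table; the conntrack files exist only when the nf_conntrack module is loaded.

    # Conntrack usage vs. maximum
    echo "conntrack: $(cat /proc/sys/net/netfilter/nf_conntrack_count)/$(cat /proc/sys/net/netfilter/nf_conntrack_max)"
    # File handles: the first value is used, the third value is the maximum
    awk '{printf "file handles: %s used, %s max\n", $1, $3}' /proc/sys/fs/file-nr
    # Memory: MemTotal - MemAvailable vs. MemTotal
    awk '/^MemTotal|^MemAvailable/ {a[$1]=$2} END {printf "memory used: %d kB of %d kB\n", a["MemTotal:"]-a["MemAvailable:"], a["MemTotal:"]}' /proc/meminfo
    # PID ceilings compared by the PIDProblem check item
    echo "pid_max: $(cat /proc/sys/kernel/pid_max), threads-max: $(cat /proc/sys/kernel/threads-max)"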
    Table 6 Checking the storage

    DiskReadonly (disk read-only)

    Function: Periodically perform write tests on the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) of the node to check the availability of key disks.
    Description:
    Detection paths:
    • /mnt/paas/kubernetes/kubelet/
    • /var/lib/docker/
    • /var/lib/containerd/
    • /var/paas/sys/log/cceaddon-npd/
    The temporary file npd-disk-write-ping is generated in each detection path.
    Currently, additional data disks are not supported.

    EmptyDirVolumeGroupStatusError (emptyDir storage pool error)

    Function: Check whether the ephemeral volume group on the node is normal.
    Impact: Pods that depend on the storage pool cannot write data to the temporary volume. The temporary volume is remounted as a read-only file system by the kernel due to an I/O error.
    Typical scenario: When creating a node, a user configures two data disks as an ephemeral volume storage pool. Some data disks are then deleted by mistake. As a result, the storage pool becomes abnormal.
    Description:
    • Detection period: 30s
    • Source: vgs -o vg_name,vg_attr
    • Principle: Check whether the VG (storage pool) is in the P state. If it is, some PVs (data disks) are lost.
    • Joint scheduling: The scheduler can automatically identify a PV storage pool error and prevent pods that depend on the storage pool from being scheduled to the node.
    • Exceptional scenario: The NPD add-on cannot detect the loss of all PVs (data disks), which results in the loss of the VG (storage pool). In this case, kubelet automatically isolates the node, detects the loss of the VG (storage pool), and updates the corresponding resources in nodestatus.allocatable to 0. This prevents pods that depend on the storage pool from being scheduled to the node. The damage of a single PV cannot be detected by this check item; the ReadonlyFilesystem check item detects it instead.

    LocalPvVolumeGroupStatusError (PV storage pool error)

    Function: Check the PV group on the node.
    Impact: Pods that depend on the storage pool cannot write data to the persistent volume. The persistent volume is remounted as a read-only file system by the kernel due to an I/O error.
    Typical scenario: When creating a node, a user configures two data disks as a persistent volume storage pool. Some data disks are then deleted by mistake.

    MountPointProblem (mount point error)

    Function: Check the mount points on the node.
    Exceptional definition: A mount point cannot be accessed by running the cd command.
    Typical scenario: A Network File System (NFS), for example, obsfs or s3fs, is mounted to a node. When the connection is abnormal due to a network fault or an exception on the peer NFS server, all processes that access the mount point are suspended. For example, during a cluster upgrade, kubelet is restarted and scans all mount points. If an abnormal mount point is detected, the upgrade fails.
    Description:
    Alternatively, you can run the following command:
    for dir in `df -h | grep -v "Mounted on" | awk '{print $NF}'`; do cd $dir; done && echo "ok"

    DiskHung (suspended disk I/O)

    Function: Check whether I/O suspension occurs on all disks on the node, that is, whether I/O read and write requests receive no response.
    Definition of I/O suspension: The system does not respond to disk I/O requests, and some processes are in the D state.
    Typical scenario: Disks cannot respond due to abnormal OS hard disk drivers or severe faults on the underlying network.
    Description:
    • Check object: all data disks
    • Source: /proc/diskstats
      Alternatively, you can run the following command: iostat -xmt 1
    • Thresholds (all of the following conditions must be met):
      • Average usage (ioutil) ≥ 0.99
      • Average I/O queue length (avgqu-sz) ≥ 1
      • Average I/O transfer volume ≤ 1
        Average I/O transfer volume = Number of writes completed per second (iops, unit: w/s) + Amount of data written per second (ioth, unit: wMB/s)
    NOTE: In some OSs, no data changes during I/O. In this case, calculate the CPU I/O time usage instead. The value of iowait should be greater than 0.8.

    DiskSlow (slow disk I/O)

    Function: Check whether all disks on the node have slow I/Os, that is, whether I/Os respond slowly.
    Typical scenario: EVS disks have slow I/Os due to network fluctuation.
    Description:
    • Check object: all data disks
    • Source: /proc/diskstats
      Alternatively, you can run the following command: iostat -xmt 1
    • Default threshold: Average I/O latency (await) ≥ 5000 ms
    NOTE: If I/O requests receive no response at all and the await data is not updated, this check item is invalid.
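
    For the storage pool check items, you can run the same vgs query manually and look for the partial (p) attribute that indicates missing PVs. This is a sketch only; it typically requires root and prints nothing when all volume groups are healthy.

    # List volume groups whose attributes contain the partial (p) flag, as EmptyDirVolumeGroupStatusError does
    vgs -o vg_name,vg_attr --noheadings | awk '$2 ~ /p/ {print $1 " is partial (missing PVs)"}'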

    Table 7 Other check items

    NTPProblem (abnormal NTP)

    Function: Check whether the node clock synchronization service (ntpd or chronyd) is running properly and whether the system time drifts.
    Description: Default clock offset threshold: 8000 ms

    ProcessD (process D error)

    Function: Check whether there is a D-state process on the node.
    Description:
    • Default threshold: 10 abnormal processes detected for three consecutive times
    • Source:
      • /proc/{PID}/stat
      • Alternatively, you can run the ps aux command.
    Exceptional scenario: The ProcessD check item ignores the resident D-state processes (heartbeat and update) on which the SDI driver on BMS nodes depends.

    ProcessZ (process Z error)

    Function: Check whether the node has processes in the Z state.

    ResolvConfFileProblem (ResolvConf error)

    Function: Check whether the ResolvConf file is lost and whether it is normal.
    Exceptional definition: No upstream domain name resolution server (nameserver) is included.
    Description: Object: /etc/resolv.conf

    ScheduledEvent (existing scheduled event)

    Function: Check whether scheduled live migration events exist on the node. A live migration plan event is usually triggered by a hardware fault and is an automatic fault rectification method at the IaaS layer.
    Typical scenario: The host is faulty. For example, the fan is damaged or the disk has bad sectors. As a result, live migration is triggered for VMs.
    Description:
    • Source: http://169.254.169.254/meta-data/latest/events/scheduled
    This check item is an Alpha feature and is disabled by default.
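
    If you want to reproduce some of these checks by hand, the following sketch uses standard tools plus the metadata URL listed above. The chronyc command applies only to nodes that use chronyd; on ntpd nodes, use the corresponding ntp tooling instead.

    # Clock offset reported by chronyd (compare against the 8000 ms threshold)
    chronyc tracking 2>/dev/null | grep "System time"
    # Processes in the D (uninterruptible) and Z (zombie) states
    ps -eo pid,stat,comm | awk '$2 ~ /^D/'
    ps -eo pid,stat,comm | awk '$2 ~ /^Z/'
    # Scheduled live migration events from the IaaS metadata service
    curl -s http://169.254.169.254/meta-data/latest/events/scheduled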

    The kubelet component has the following default check items, which have bugs or defects. You can fix them by upgrading the cluster or using NPD.

    Table 8 Default kubelet check items

    PIDPressure (insufficient PID resources)

    Function: Check whether PIDs are sufficient.
    Description:
    • Interval: 10 seconds
    • Threshold: 90%
    • Defect: In community version 1.23.1 and earlier versions, this check item becomes invalid when more than 65535 PIDs are used. For details, see issue 107107. In community version 1.24 and earlier versions, thread-max is not considered in this check item.

    MemoryPressure (insufficient memory)

    Function: Check whether the allocable memory for containers is sufficient.
    Description:
    • Interval: 10 seconds
    • Threshold: max. 100 MiB
    • Allocable = Total memory of a node – Reserved memory of a node
    • Defect: This check item checks only the memory consumed by containers. It does not consider the memory consumed by other elements on the node.

    DiskPressure (insufficient disk resources)

    Function: Check the disk usage and inode usage of the kubelet and Docker disks.
    Description:
    • Interval: 10 seconds
    • Threshold: 90%
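
    The kubelet check items above surface as node conditions. To view the conditions set by kubelet (including PIDPressure, MemoryPressure, and DiskPressure) for a specific node, you can run the following sketch; replace <node-name> with the actual node name.

    # Print the Conditions block of a node
    kubectl describe node <node-name> | sed -n '/Conditions:/,/Addresses:/p'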

Node-problem-controller Fault Isolation

Fault isolation is supported only by add-ons of 1.16.0 and later versions.

By default, if multiple nodes become faulty, NPC adds taints to up to 10% of the nodes. You can set npc.maxTaintedNode to increase the threshold.

The open source NPD add-on provides fault detection but not fault isolation. CCE provides the enhanced node-problem-controller (NPC) on top of the open source NPD. NPC is implemented based on the Kubernetes node controller. For faults reported by NPD, NPC automatically adds taints to nodes to isolate the faulty nodes.

Table 9 Parameters

npc.enable

  • Description: Whether to enable NPC. This parameter is not supported in 1.18.0 or later versions.
  • Default: true

npc.maxTaintedNode

  • Description: The maximum number of nodes that NPC can add taints to when the same fault occurs on multiple nodes, which minimizes the impact. The value can be in int or percentage format.
    Value range:
    • In int format: 1 to infinity.
    • In percentage format: 1% to 100%. The minimum value of this parameter multiplied by the number of cluster nodes is 1.
  • Default: 10%

npc.nodeAffinity

  • Description: Node affinity of the controller.
  • Default: N/A
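
To see which nodes NPC has isolated, you can list the taints on all nodes. This is a generic sketch; the exact taint key added depends on the detected fault.

# List the taint keys on each node
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'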

Viewing NPD Events

Events reported by the NPD add-on can be queried on the Nodes page.

  1. Log in to the CCE console.
  2. Click the cluster name to access the cluster console. Choose Nodes in the navigation pane.
  3. Locate the row that contains the target node, and click View Events.

    Figure 1 Viewing node events
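
Because NPD reports these events to the API server, you can also query them with kubectl instead of the console. The following is a minimal sketch that filters warning events attached to Node objects.

# List warning events reported for nodes across all namespaces
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node,type=Warning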

Collecting Prometheus Metrics

The NPD daemon pod exposes Prometheus metric data on port 19901. By default, the NPD pod carries the annotation metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"19901","names":""}]'. You can build a Prometheus collector to discover and collect NPD metrics from http://{{NpdPodIP}}:{{NpdPodPort}}/metrics.

If the NPD add-on version is earlier than 1.16.5, the exposed port of Prometheus metrics is 20257.

Currently, the metric data includes problem_counter and problem_gauge, as shown below.

# HELP problem_counter Number of times a specific type of problem have occurred.
# TYPE problem_counter counter
problem_counter{reason="DockerHung"} 0
problem_counter{reason="DockerStart"} 0
problem_counter{reason="EmptyDirVolumeGroupStatusError"} 0
...
# HELP problem_gauge Whether a specific type of problem is affecting the node or not.
# TYPE problem_gauge gauge
problem_gauge{reason="CNIIsDown",type="CNIProblem"} 0
problem_gauge{reason="CNIIsUp",type="CNIProblem"} 0
problem_gauge{reason="CRIIsDown",type="CRIProblem"} 0
problem_gauge{reason="CRIIsUp",type="CRIProblem"} 0
...
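
To pull the metrics manually, you can resolve an NPD pod IP and query the metrics port. This sketch assumes the NPD pods run in the kube-system namespace and carry the label app=node-problem-detector; adjust the namespace, label, and port (20257 for add-on versions earlier than 1.16.5) to match your cluster.

# Resolve one NPD pod IP and fetch its Prometheus metrics
NPD_POD_IP=$(kubectl -n kube-system get pod -l app=node-problem-detector -o jsonpath='{.items[0].status.podIP}')
curl -s "http://${NPD_POD_IP}:19901/metrics" | grep '^problem_'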

Change History

Table 10 Release history

Add-on version 1.19.1

  • Supported cluster versions: v1.21, v1.23, v1.25, v1.27, v1.28, v1.29
  • New feature: Fixed some issues.
  • Community version: 0.8.10

Add-on version 1.18.46

  • Supported cluster versions: v1.21, v1.23, v1.25, v1.27, v1.28
  • New feature: CCE clusters 1.28 are supported.
  • Community version: 0.8.10

Add-on version 1.18.22

  • Supported cluster versions: v1.19, v1.21, v1.23, v1.25, v1.27
  • New feature: None
  • Community version: 0.8.10

Add-on version 1.18.14

  • Supported cluster versions: v1.19, v1.21, v1.23, v1.25
  • New features:
    • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.
    • Supported adding a taint to a node before the release of a spot ECS so that the node repels a set of pods.
    • Synchronized the time zones used by the add-on and the node.
  • Community version: 0.8.10

Add-on version 1.18.10

  • Supported cluster versions: v1.19, v1.21, v1.23, v1.25
  • New features:
    • Optimized the configuration page.
    • Added threshold configuration to the DiskSlow check item.
    • Added threshold configuration to the NTPProblem check item.
    • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.
    • Supported interruption detection for spot ECSs and evicting pods on such nodes before the interruption.
  • Community version: 0.8.10

Add-on version 1.17.4

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23, v1.25
  • New feature: Optimized the DiskHung check item.
  • Community version: 0.8.10

Add-on version 1.17.3

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23, v1.25
  • New features:
    • Supported configuring the maximum number of nodes that NPC can taint as a percentage.
    • Added the ProcessZ check item.
    • Added time deviation detection to the NTPProblem check item.
    • Fixed detection of processes that are consistently in the D state (present on BMS nodes).
  • Community version: 0.8.10

Add-on version 1.17.2

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23, v1.25
  • New features:
    • Added the DiskHung check item for disk I/O.
    • Added the DiskSlow check item for disk I/O.
    • Added the ProcessD check item.
    • Added the MountPointProblem check item to check the health of mount points.
    • Changed the default health check listening port to 19900 and the default Prometheus metric exposure port to 19901 to avoid conflicts with the service port range.
    • Supported clusters 1.25.
  • Community version: 0.8.10

Add-on version 1.16.4

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23
  • New feature: Added the beta check item ScheduledEvent to detect cold and live VM migration events caused by host machine exceptions using the metadata API. This check item is disabled by default.
  • Community version: 0.8.10

Add-on version 1.16.3

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23
  • New feature: Added the check of the ResolvConf configuration file.
  • Community version: 0.8.10

Add-on version 1.16.1

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23
  • New features:
    • Added node-problem-controller and supported basic fault isolation.
    • Added the PID, FD, disk, memory, temporary volume pool, and PV pool check items.
  • Community version: 0.8.10

Add-on version 1.15.0

  • Supported cluster versions: v1.17, v1.19, v1.21, v1.23
  • New features:
    • Hardened check items comprehensively to avoid false positives.
    • Supported kernel checks and reporting of OOMKilled and TaskHung events.
  • Community version: 0.8.10

Add-on version 1.14.11

  • Supported cluster versions: v1.17, v1.19, v1.21
  • New feature: CCE clusters 1.21 are supported.
  • Community version: 0.7.1

Add-on version 1.14.5

  • Supported cluster versions: v1.17, v1.19
  • New feature: Fixed the issue that monitoring metrics cannot be obtained.
  • Community version: 0.7.1

Add-on version 1.14.4

  • Supported cluster versions: v1.17, v1.19
  • New feature: Supported containerd nodes.
  • Community version: 0.7.1

Add-on version 1.14.2

  • Supported cluster versions: v1.17, v1.19
  • New features:
    • CCE clusters 1.19 are supported.
    • Supported the Ubuntu OS and Kata containers.
  • Community version: 0.7.1

Add-on version 1.13.8

  • Supported cluster versions: v1.15.11, v1.17
  • New features:
    • Fixed the CNI health check issue on the container tunnel network.
    • Adjusted resource quotas.
  • Community version: 0.7.1

Add-on version 1.13.6

  • Supported cluster versions: v1.15.11, v1.17
  • New feature: Fixed the issue that zombie processes are not reclaimed.
  • Community version: 0.7.1

Add-on version 1.13.5

  • Supported cluster versions: v1.15.11, v1.17
  • New feature: Added taint toleration configuration.
  • Community version: 0.7.1

Add-on version 1.13.2

  • Supported cluster versions: v1.15.11, v1.17
  • New feature: Added resource limits and enhanced the detection capability of the CNI add-on.
  • Community version: 0.7.1