CCE Node Problem Detector
Introduction
CCE Node Problem Detector (node-problem-detector, NPD) is an add-on that monitors abnormal events on cluster nodes and can connect to a third-party monitoring platform. It runs as a daemon on each node, collects node issues from different daemons, and reports them to the API server. The add-on can run as a DaemonSet or as a daemon.
For more information, see node-problem-detector.
Constraints
- When using this add-on, do not format or partition node disks.
- Each NPD process occupies 30m CPU and 100 MB of memory.
- If the NPD version is 1.18.45 or later, the EulerOS version of the host machine must be 2.5 or later.
Permissions
To monitor kernel logs, the NPD add-on needs to read /dev/kmsg on the host. Therefore, privileged mode must be enabled. For details, see privileged.
In addition, CCE mitigates risks based on the principle of least privilege. NPD runs with only the following privileges:
- cap_dac_read_search: permission to access /run/log/journal.
- cap_sys_admin: permission to access /dev/kmsg.
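To verify how these privileges are granted on a running cluster, you can inspect the security context of the NPD workload with kubectl. A minimal check, assuming the add-on runs as a DaemonSet named node-problem-detector in the kube-system namespace (the actual name and namespace may differ in your cluster):
# Show the security context and capabilities requested by the NPD pods (workload name and namespace are assumptions).
kubectl -n kube-system get daemonset node-problem-detector -o yaml | grep -iA6 securityContext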
Installing the Add-on
- Log in to the CCE console and click the cluster name to access the cluster console. Choose Add-ons in the navigation pane, locate CCE Node Problem Detector on the right, and click Install.
- On the Install Add-on page, configure the specifications.
Table 1 NPD configuration

Parameter | Description
---|---
Add-on Specifications | The specifications can be Custom.
Pods | If you select Custom, you can adjust the number of pods as required.
Containers | If you select Custom, you can adjust the container specifications as required.
- Configure the add-on parameters.
Only add-on versions 1.16.0 and later support these configurations.
Table 2 NPD parameters

Parameter | Description
---|---
common.image.pullPolicy | The image pull policy. The default value is IfNotPresent.
feature_gates | A feature gate.
npc.maxTaintedNode | The maximum number of nodes that NPC can add taints to when a single fault occurs on multiple nodes, which minimizes the impact. The value can be an integer or a percentage.
npc.nodeAffinity | Node affinity of the controller.
- Configure scheduling policies for the add-on.
- Scheduling policies do not take effect on add-on instances of the DaemonSet type.
- When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run.
Table 3 Configurations for add-on scheduling
Multi AZ
- Preferred: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ.
- Equivalent mode: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.
- Required: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run.
Node Affinity
- Not configured: Node affinity is disabled for the add-on.
- Node Affinity: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.
- Specified Node Pool Scheduling: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy.
- Custom Policies: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.
If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.
Toleration
Using both taints and tolerations allows (but does not force) the add-on Deployment to be scheduled to nodes with matching taints, and controls how the Deployment is evicted after the node it runs on is tainted.
The add-on adds default tolerations for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, each with a toleration time window of 60s.
For details, see Taints and Tolerations.
- Click Install.
Components
Component | Description | Resource Type
---|---|---
node-problem-controller | Isolates faults based on fault detection results. | Deployment
node-problem-detector | Detects node faults. | DaemonSet
NPD Check Items
Check items are supported only in 1.16.0 and later versions.
Check items cover events and statuses.
- Event-related
For event-related check items, when a problem occurs, NPD reports an event to the API server. The event type can be Normal (normal event) or Warning (abnormal event).
Table 5 Event-related check items

OOMKilling
Function: Listen to the kernel logs and check whether OOM events occur and are reported.
Typical scenario: When the memory usage of a process in a container exceeds the limit, OOM is triggered and the process is terminated.
Event type: Warning
Listening object: /dev/kmsg
Matching rule: "Killed process \\d+ (.+) total-vm:\\d+kB, anon-rss:\\d+kB, file-rss:\\d+kB.*"

TaskHung
Function: Listen to the kernel logs and check whether taskHung events occur and are reported.
Typical scenario: Disk I/O suspension causes process suspension.
Event type: Warning
Listening object: /dev/kmsg
Matching rule: "task \\S+:\\w+ blocked for more than \\w+ seconds\\."

ReadonlyFilesystem
Function: Listen to the kernel logs and check whether the "Remounting filesystem read-only" error occurs in the system kernel.
Typical scenario: A user detaches a data disk from a node by mistake on the ECS console while applications continuously write data to the mount point of that disk. As a result, an I/O error occurs in the kernel and the disk is remounted as read-only.
NOTE: If the rootfs of node pods is of the device mapper type, detaching a data disk causes an error in the thin pool. This affects NPD, which will then be unable to detect node faults.
Event type: Warning
Listening object: /dev/kmsg
Matching rule: Remounting filesystem read-only
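If you want to confirm that NPD is reporting these events, you can query Kubernetes events directly. A minimal example (the grep pattern simply matches the check item names above; the exact event reasons may differ by add-on version):
# List Warning events in all namespaces and filter for the event-related check items.
kubectl get events -A --field-selector type=Warning | grep -Ei 'OOMKilling|TaskHung|ReadonlyFilesystem'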
- Status-related
For status-related check items, when a problem occurs, NPD reports an event to the API server and changes the node status synchronously. This function can be used together with Node-problem-controller fault isolation to isolate nodes.
If the check period is not specified in the following check items, the default period is 30 seconds.
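When a status-related check item fires, the result also appears as a node condition. A generic way to list all conditions on a node, including those set by NPD (replace <node-name> with an actual node name):
# Print every condition type, status, and message reported for the node.
kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'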
Table 6 Checking system components

Check Item | Function | Description
---|---|---
Container network component error (CNIProblem) | Check the status of the CNI components (container network components). | None
Container runtime component error (CRIProblem) | Check the status of Docker and containerd of the CRI components (container runtime components). | Check object: Docker or containerd
Frequent restarts of Kubelet (FrequentKubeletRestart) | Periodically backtrack system logs to check whether the key component Kubelet restarts frequently. |
Frequent restarts of Docker (FrequentDockerRestart) | Periodically backtrack system logs to check whether the container runtime Docker restarts frequently. |
Frequent restarts of containerd (FrequentContainerdRestart) | Periodically backtrack system logs to check whether the container runtime containerd restarts frequently. |
kubelet error (KubeletProblem) | Check the status of the key component Kubelet. | None
kube-proxy error (KubeProxyProblem) | Check the status of the key component kube-proxy. | None
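The commands below are an illustrative manual equivalent of these component checks, not what NPD itself runs. They assume a systemd-based node with the listed services installed; on CCE nodes, some of these components may run as containers instead:
# Check whether the key components are active on the node.
systemctl is-active kubelet kube-proxy docker containerd
# Look for recent kubelet restarts in the system journal.
journalctl -u kubelet --since "30 min ago" | grep -i started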
Table 7 Checking system metrics

Conntrack table full (ConntrackFullProblem)
Function: Check whether the conntrack table is full.
Description:
- Default threshold: 90%
- Usage: nf_conntrack_count
- Maximum value: nf_conntrack_max

Insufficient disk resources (DiskProblem)
Function: Check the usage of the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) on the node.
Description:
- Default threshold: 90%
- Source: df -h
- Currently, additional data disks are not supported.

Insufficient file handles (FDProblem)
Function: Check whether the FD file handles are used up.
Description:
- Default threshold: 90%
- Usage: the first value in /proc/sys/fs/file-nr
- Maximum value: the third value in /proc/sys/fs/file-nr

Insufficient node memory (MemoryProblem)
Function: Check whether memory is used up.
Description:
- Default threshold: 80%
- Usage: MemTotal - MemAvailable in /proc/meminfo
- Maximum value: MemTotal in /proc/meminfo

Insufficient process resources (PIDProblem)
Function: Check whether PID process resources are exhausted.
Description:
- Default threshold: 90%
- Usage: nr_threads in /proc/loadavg
- Maximum value: the smaller of /proc/sys/kernel/pid_max and /proc/sys/kernel/threads-max
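These metrics can be spot-checked manually from the same sources listed above. A rough example, assuming the nf_conntrack module is loaded on the node:
# Conntrack usage and maximum
cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max
# File handle usage (first value) and maximum (third value)
cat /proc/sys/fs/file-nr
# Memory usage: MemTotal minus MemAvailable, compared with MemTotal
grep -E 'MemTotal|MemAvailable' /proc/meminfo
# PID and thread limits
cat /proc/sys/kernel/pid_max /proc/sys/kernel/threads-max
# Disk usage of the system disk and CCE data disks
df -h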
Table 8 Checking the storage

Disk read-only (DiskReadonly)
Function: Periodically perform write tests on the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) of the node to check the availability of key disks.
Description:
Detection paths:
- /mnt/paas/kubernetes/kubelet/
- /var/lib/docker/
- /var/lib/containerd/
- /var/paas/sys/log/cceaddon-npd/
The temporary file npd-disk-write-ping is generated in each detection path.
Currently, additional data disks are not supported.

emptyDir storage pool error (EmptyDirVolumeGroupStatusError)
Function: Check whether the ephemeral volume group on the node is normal.
Impact: Pods that depend on the storage pool cannot write data to the temporary volume. The temporary volume is remounted as a read-only file system by the kernel due to an I/O error.
Typical scenario: When creating a node, a user configures two data disks as a temporary volume storage pool. Some of the data disks are then deleted by mistake. As a result, the storage pool becomes abnormal.
Description:
- Detection period: 30s
- Source: vgs -o vg_name,vg_attr
- Principle: Check whether the VG (storage pool) is in the P state. If it is, some PVs (data disks) have been lost.
- Joint scheduling: The scheduler can automatically identify a PV storage pool error and prevent pods that depend on the storage pool from being scheduled to the node.
- Exceptional scenario: If all PVs (data disks) are lost and the VG (storage pool) is lost with them, the NPD add-on cannot detect the fault. In this case, kubelet automatically isolates the node, detects the loss of the VG (storage pool), and updates the corresponding resources in nodestatus.allocatable to 0, which prevents pods that depend on the storage pool from being scheduled to the node. The damage of a single PV cannot be detected by this check item; use the ReadonlyFilesystem check item for that.

PV storage pool error (LocalPvVolumeGroupStatusError)
Function: Check the PV group on the node.
Impact: Pods that depend on the storage pool cannot write data to the persistent volume. The persistent volume is remounted as a read-only file system by the kernel due to an I/O error.
Typical scenario: When creating a node, a user configures two data disks as a persistent volume storage pool. Some of the data disks are then deleted by mistake.

Mount point error (MountPointProblem)
Function: Check the mount point on the node.
Exceptional definition: You cannot access the mount point by running the cd command.
Typical scenario: A network file system (for example, obsfs or s3fs) is mounted to a node. When the connection becomes abnormal due to network issues or an exception on the peer NFS server, all processes that access the mount point are suspended. For example, during a cluster upgrade, kubelet is restarted and scans all mount points. If an abnormal mount point is detected, the upgrade fails.
Description:
Alternatively, you can run the following command:
for dir in $(df -h | grep -v "Mounted on" | awk '{print $NF}'); do cd "$dir"; done && echo "ok"

Suspended disk I/O (DiskHung)
Function: Check whether I/O suspension occurs on all disks on the node, that is, whether I/O read and write requests are not responded to.
Definition of I/O suspension: The system does not respond to disk I/O requests, and some processes are in the D state.
Typical scenario: Disks cannot respond due to abnormal OS hard disk drivers or severe faults on the underlying network.
Description:
- Check object: all data disks
- Source:
  Alternatively, you can run the following command: iostat -xmt 1
- Threshold:
  - Average usage: ioutil >= 0.99
  - Average I/O queue length: avgqu-sz >= 1
  - Average I/O transfer volume: iops (w/s) + ioth (wMB/s) <= 1
NOTE: In some OSs, no data changes during I/O. In this case, calculate the CPU I/O time usage. The value of iowait should be greater than 0.8.

Slow disk I/O (DiskSlow)
Function: Check whether all disks on the node have slow I/Os, that is, whether I/O responses are slow.
Typical scenario: EVS disks have slow I/Os due to network fluctuation.
Description:
- Check object: all data disks
- Source:
  Alternatively, you can run the following command: iostat -xmt 1
- Default threshold:
NOTE: If I/O requests are not responded to and the await data is not updated, this check item is invalid.
Table 9 Other check items

Abnormal NTP (NTPProblem)
Function: Check whether the node clock synchronization service ntpd or chronyd is running properly and whether a system time drift has occurred.
Description: Default clock offset threshold: 8000 ms

Process D error (ProcessD)
Function: Check whether there are processes in the D state on the node.
Description:
- Default threshold: 10 abnormal processes detected for three consecutive times
- Source: /proc/{PID}/stat
- Alternatively, you can run the ps aux command.

Process Z error (ProcessZ)
Function: Check whether there are processes in the Z state on the node.

ResolvConf error (ResolvConfFileProblem)
Function: Check whether the ResolvConf file is lost or abnormal.
Exceptional definition: No upstream domain name resolution server (nameserver) is included.
Description: Object: /etc/resolv.conf

Existing scheduled event (ScheduledEvent)
Function: Check whether scheduled live migration events exist on the node. A live migration plan event is usually triggered by a hardware fault and is an automatic fault rectification method at the IaaS layer.
Typical scenario: The host is faulty. For example, the fan is damaged or the disk has bad sectors. As a result, live migration is triggered for VMs.
Description:
- Source: http://169.254.169.254/meta-data/latest/events/scheduled
- This check item is an Alpha feature and is disabled by default.
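The sources listed above can also be inspected manually. An illustrative spot check (the metadata endpoint is reachable only from the node itself):
# Clock synchronization service status (whichever of the two is installed)
systemctl is-active chronyd ntpd
# List processes in the D (uninterruptible) or Z (zombie) state
ps -eo pid,stat,comm | awk '$2 ~ /^D/ || $2 ~ /^Z/'
# Count the nameserver entries in /etc/resolv.conf
grep -c '^nameserver' /etc/resolv.conf
# Query scheduled live migration events from the IaaS metadata service
curl -s http://169.254.169.254/meta-data/latest/events/scheduled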
The kubelet component has the following default check items, which have known bugs or defects. You can fix them by upgrading the cluster or by using NPD.
Table 10 Default kubelet check items

Insufficient PID resources (PIDPressure)
Function: Check whether PIDs are sufficient.
Description:
- Interval: 10 seconds
- Threshold: 90%
- Defect: In community version 1.23.1 and earlier versions, this check item becomes invalid when over 65535 PIDs are used. For details, see issue 107107. In community version 1.24 and earlier versions, thread-max is not considered in this check item.

Insufficient memory (MemoryPressure)
Function: Check whether the allocable memory for the containers is sufficient.
Description:
- Interval: 10 seconds
- Threshold: max. 100 MiB
- Allocable = Total memory of a node - Reserved memory of a node
- Defect: This check item checks only the memory consumed by containers and does not consider memory consumed by other elements on the node.

Insufficient disk resources (DiskPressure)
Function: Check the disk usage and inodes usage of the kubelet and Docker disks.
Description:
- Interval: 10 seconds
- Threshold: 90%
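These kubelet check items surface as the standard node conditions MemoryPressure, DiskPressure, and PIDPressure, which you can view with kubectl (replace <node-name> with an actual node name):
# Show the Conditions section for a node, including the pressure conditions reported by kubelet.
kubectl describe node <node-name> | grep -A12 "Conditions:"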
Node-problem-controller Fault Isolation
Fault isolation is supported only by add-ons of 1.16.0 and later versions.
The open source NPD plugin provides fault detection but not fault isolation. CCE enhances the node-problem-controller (NPC) based on the open source NPD. This component is implemented based on the Kubernetes node controller. For faults reported by NPD, NPC automatically adds taints to nodes for node fault isolation.
By default, if multiple nodes become faulty, NPC adds taints to at most 10% of the nodes. You can set npc.maxTaintedNode to increase this threshold.
Parameter | Description | Default
---|---|---
npc.enable | Whether to enable NPC. This parameter is not supported in 1.18.0 or later versions. | true
npc.maxTaintedNode | The maximum number of nodes that NPC can add taints to when a single fault occurs on multiple nodes, which minimizes the impact. The value can be an integer or a percentage. | 10%
npc.nodeAffinity | Node affinity of the controller | N/A
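To see which nodes NPC has isolated, check the node taints. An illustrative query that lists all taints rather than assuming a specific taint key:
# List the taints on each node; nodes isolated by NPC carry the taint it added.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints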