CCE Node Problem Detector
Introduction
The CCE Node Problem Detector add-on (formerly NPD) monitors abnormal events on cluster nodes and can connect to a third-party monitoring platform. It is a daemon that runs on each node, collects node issues from various daemons, and reports them to the API server. It can run as a DaemonSet or as a standalone process.
The CCE Node Problem Detector add-on is developed based on the open-source project node-problem-detector. For details, see node-problem-detector.
Notes and Constraints
- When using CCE Node Problem Detector, do not format or partition node disks.
- Each CCE Node Problem Detector process occupies 30m of CPU and 100 MiB of memory.
- If the CCE Node Problem Detector version is 1.18.45 or later, the EulerOS version of the host machine must be 2.5 or later.
Permissions
To monitor kernel logs, the CCE Node Problem Detector add-on needs to read the host /dev/kmsg. Therefore, privileged containers must be enabled. For details, see privileged.
In addition, CCE mitigates risks by following the principle of least privilege. CCE Node Problem Detector runs with only the following privileges:
- cap_dac_read_search: permission to access /run/log/journal.
- cap_sys_admin: permission to access /dev/kmsg.
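If you want to confirm which of these privileges were actually granted, you can inspect the securityContext of the detector workload. The workload name and namespace below (DaemonSet node-problem-detector in kube-system) are assumptions based on the component list later on this page; adjust them to your cluster.
# Show the capabilities and privilege settings of the detector container (names and namespace are assumptions)
kubectl -n kube-system get daemonset node-problem-detector -o jsonpath='{.spec.template.spec.containers[0].securityContext}'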
Installing the Add-on
- Log in to the CCE console and click the cluster name to access the cluster console.
- In the navigation pane, choose Add-ons. In the right pane, find the CCE Node Problem Detector add-on and click Install.
- In the Install Add-on sliding window, configure the specifications as needed.
You can adjust the number of add-on pods and resource quotas as required. A single pod does not provide high availability: if an error occurs on the node where that pod runs, the add-on becomes unavailable.
- Configure the add-on parameters.
Maximum Number of Isolated Nodes in a Fault: specifies the maximum number of nodes that can be isolated when a fault occurs on multiple nodes, which prevents an avalanche effect. You can set this parameter as either a percentage or a quantity.
- Configure deployment policies for the add-on pods.
- Scheduling policies do not take effect on the DaemonSet pods of the add-on.
- When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run.
Table 1 Configurations for add-on scheduling
Multi-AZ Deployment
- Preferred: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to different nodes in that AZ.
- Equivalent mode: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.
- Forcible: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. There can be at most one pod in each AZ. If nodes in a cluster are not in different AZs, some add-on pods cannot run properly. If a node is faulty, the add-on pods on it may fail to be migrated.
Node Affinity
- Not configured: Node affinity is disabled for the add-on.
- Specify node: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
- Specify node pool: Specify the node pool where the add-on pods are deployed. If you do not specify the node pools, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
- Customize affinity: Enter the labels of the nodes where the add-on pods are to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on pods will be randomly scheduled based on the default cluster scheduling policy.
If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.
Toleration
Tolerations allow (but do not force) the add-on's Deployment pods to be scheduled to nodes with matching taints, and they control the eviction policy applied after the node where a Deployment pod runs is tainted.
The add-on adds default tolerations for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, each with a toleration time window of 60s. You can check the applied tolerations after installation, as shown in the example following these steps.
For details, see Configuring Tolerance Policies.
- Click Install.
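After installation, you can verify the scheduling-related settings that were applied, for example the tolerations on the controller Deployment. The workload name and namespace below are assumptions; adjust them to your cluster.
# List the tolerations configured on the controller Deployment (name and namespace are assumptions)
kubectl -n kube-system get deployment node-problem-controller -o jsonpath='{.spec.template.spec.tolerations}'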
Components
| Component | Description | Resource Type |
|---|---|---|
| node-problem-controller | Provides basic fault isolation based on fault detection results. | Deployment |
| node-problem-detector | Detects node faults. | DaemonSet |
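To confirm that both components are running after installation, you can list them with kubectl. The namespace (kube-system) is an assumption; adjust it if your cluster deploys add-ons elsewhere.
# List the detector DaemonSet and the controller Deployment
kubectl -n kube-system get daemonset,deployment | grep node-problem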
CCE Node Problem Detector Check Items
Check items are supported only in add-on version 1.16.0 and later.
Check items cover events and statuses.
- Event-related
For event-related check items, when a problem occurs, CCE Node Problem Detector reports an event to the API server. The event type can be Normal (normal event) or Warning (abnormal event).
Table 3 Event-related check items

OOMKilling
- Function: Listen to the kernel logs and check whether there are any OOM events. If there is an OOM event, the component reports it.
- Typical scenario: The memory used by a process in a container exceeds the limit, triggering OOM and terminating the process.
- Event type: Warning
- Listening object: /dev/kmsg
- Matching rule: "Killed process \\d+ (.+) total-vm:\\d+kB, anon-rss:\\d+kB, file-rss:\\d+kB.*"

TaskHung
- Function: Listen to the kernel logs and check whether there are any taskHung events. If there is a taskHung event, the component reports it.
- Typical scenario: Disk I/O suspension causes process suspension.
- Event type: Warning
- Listening object: /dev/kmsg
- Matching rule: "task \\S+:\\w+ blocked for more than \\w+ seconds\\."

ReadonlyFilesystem
- Function: Listen to the kernel logs and check whether there is a Remount root filesystem read-only error in the system kernel.
- Typical scenario: A user detaches a data disk from a node by mistake on the ECS, and applications continuously write data to the mount point of the data disk. As a result, an I/O error occurs in the kernel and the disk is remounted as a read-only disk.
  NOTE: If a node's rootfs uses Device Mapper and the data disk is detached from the node, the thin pool will malfunction. This will affect CCE Node Problem Detector, and the add-on will not be able to detect node faults.
- Event type: Warning
- Listening object: /dev/kmsg
- Matching rule: Remounting filesystem read-only
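You can manually check whether any of the kernel messages matched by these rules have appeared on a node. This is only a rough spot check of the kernel ring buffer; the add-on itself watches /dev/kmsg continuously and applies the exact regular expressions listed above.
# Search the kernel log for OOM kills, hung tasks, and read-only remounts
dmesg | grep -E 'Killed process|blocked for more than|Remounting filesystem read-only'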
- Status-related
For status-related check items, when a problem occurs, CCE Node Problem Detector reports an event to the API server and changes the node status synchronously. This function can be used together with Node-problem-controller fault isolation to isolate nodes.
If the check period is not specified in the following check items, the default period is 30 seconds.
Table 5 Checking system metrics

Conntrack table full (ConntrackFullProblem)
- Function: Check whether the conntrack table is full.
- Default threshold: 90%
- Usage: nf_conntrack_count
- Maximum value: nf_conntrack_max

Insufficient disk resources (DiskProblem)
- Function: Check the usage of the system disk and CCE data disks (including the CRI logical disk and kubelet logical disk) on nodes.
- Default threshold: 90%
- Source: df -h
- Currently, additional data disks are not supported.

Insufficient file handles (FDProblem)
- Function: Check whether the FD file handles are used up.
- Default threshold: 90%
- Usage: the first value in /proc/sys/fs/file-nr
- Maximum value: the third value in /proc/sys/fs/file-nr

Insufficient node memory (MemoryProblem)
- Function: Check whether memory is used up.
- Default threshold: 80%
- Usage: MemTotal - MemAvailable in /proc/meminfo
- Maximum value: MemTotal in /proc/meminfo

Insufficient process resources (PIDProblem)
- Function: Check whether PID process resources are exhausted.
- Default threshold: 90%
- Usage: the denominator of the fourth field in /proc/loadavg, which is the total number of existing processes and threads
- Maximum value: the smaller of /proc/sys/kernel/pid_max and /proc/sys/kernel/threads-max
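Each of these checks compares a usage value with a maximum value from the listed sources. If you want to compare a node's current usage against the default thresholds yourself, the following commands read the same sources; this is a manual spot check, not the add-on's implementation.
# Conntrack table: current entries vs. capacity
cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max
# File handles: the first value is allocated handles, the third is the maximum
cat /proc/sys/fs/file-nr
# Memory: usage is MemTotal - MemAvailable, the maximum is MemTotal
grep -E 'MemTotal|MemAvailable' /proc/meminfo
# PIDs: the denominator of the fourth field is the number of existing processes/threads
cat /proc/loadavg
cat /proc/sys/kernel/pid_max /proc/sys/kernel/threads-max
# Disk usage of the system disk and CCE data disks
df -h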
Table 7 Other check items

Abnormal NTP (NTPProblem)
- Function: Check whether the node clock synchronization service ntpd or chronyd is running properly and whether there is a system time drift.
- Default clock offset threshold: 8000 ms

Process D error (ProcessD)
- Function: Check whether there is any process in the D state on nodes.
- Default threshold: 10 abnormal processes detected for three consecutive checks
- Source: /proc/{PID}/stat; alternatively, you can run the ps aux command.
- Exceptional scenario: The ProcessD check item ignores the resident D processes (heartbeat and update) on which the SDI drivers on BMS nodes depend.

Process Z error (ProcessZ)
- Function: Check whether there is any process in the Z state on nodes.

RDMA network interface error (RDMAProblem)
- Function: Check the RDMA network interface status.
  NOTE: CCE Node Problem Detector 1.19.37 and later versions support RDMA network interface error detection and automatically write the RDMAProblem status to the node object. If the add-on is rolled back to an earlier version that does not support this function, CCE Node Problem Detector cannot clear the status, so the marked status is retained.
- Default threshold: one RDMA network interface error detected in a single check
- Source: the rdma link show command

ResolvConf error (ResolvConfFileProblem)
- Function: Check whether the ResolvConf file is lost or abnormal. An abnormal file is defined as one that includes no upstream domain name resolution server (nameserver).
- Check object: /etc/resolv.conf

Existing scheduled event (ScheduledEvent)
- Function: Check whether there is any live migration event on nodes. A live migration event is usually triggered by a hardware fault and is an automatic fault rectification method at the IaaS layer.
- Typical scenario: The host is faulty. For example, the fan is damaged or the disk has bad sectors. As a result, a live migration is triggered for VMs.
- Source: http://169.254.169.254/meta-data/latest/events/scheduled
- This check item is an Alpha feature and is disabled by default.

Spot price node being reclaimed (SpotPriceNodeReclaimNotification)
- Function: Check whether any spot price node is interrupted and reclaimed due to preemption.
- Default check interval: 120 seconds
- Default fault handling policy: Evict some workloads on the node.
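Several of these checks can be reproduced roughly from a shell on the node. The commands below are a sketch for manual troubleshooting; they are not the add-on's own detection logic, and chronyc or ntpq is only available if the corresponding time service is installed.
# Processes in the D (uninterruptible) or Z (zombie) state
ps -eo pid,stat,comm | awk '$2 ~ /^D/ || $2 ~ /^Z/'
# Clock offset reported by chronyd or ntpd, whichever is installed
chronyc tracking 2>/dev/null || ntpq -p 2>/dev/null
# resolv.conf should list at least one upstream nameserver
grep '^nameserver' /etc/resolv.conf
# RDMA link status (only meaningful on nodes with RDMA devices)
rdma link show
# Scheduled (live migration) events from the IaaS metadata service
curl -s http://169.254.169.254/meta-data/latest/events/scheduled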
The kubelet provides the following default check items. Some of them have known bugs or defects, which you can work around by upgrading the cluster or by using CCE Node Problem Detector.
Table 8 Default kubelet check items

Insufficient PIDs (PIDPressure)
- Function: Check whether PIDs are sufficient.
- Interval: 10 seconds
- Threshold: 90%
- Defect: In community version 1.23.1 and earlier, this check item becomes invalid when over 65,535 PIDs are used. For details, see issue 107107. In community version 1.24 and earlier, thread-max is not considered in this check item.

Insufficient memory (MemoryPressure)
- Function: Check whether the allocatable memory for containers is sufficient.
- Interval: 10 seconds
- Threshold: maximum value minus 100 MiB
- Allocatable = total memory on a node - reserved memory on the node
- Defect: This check item checks only the allocatable memory of containers and does not check the memory available on the node itself.

Insufficient disk space (DiskPressure)
- Function: Check the disk usage and inode usage of the kubelet and Docker disks.
- Interval: 10 seconds
- Threshold: 90%
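The kubelet exposes the results of these checks as node conditions, so you can view their current status with kubectl. Replace <node-name> with an actual node name.
# Show the node conditions (MemoryPressure, DiskPressure, PIDPressure, and others) reported by the kubelet
kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'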
Node-problem-controller Fault Isolation
Fault isolation is supported only by CCE Node Problem Detector 1.16.0 and later.
The open-source NPD provides fault detection but not fault isolation. CCE enhances NPD with node-problem-controller (NPC), which is based on the Kubernetes node controller. For faults reported by NPD, NPC automatically adds taints to the faulty nodes to isolate them.
By default, if multiple nodes become faulty, NPC adds taints to at most 10% of the nodes. You can set npc.maxTaintedNode to increase this threshold.
| Parameter | Description | Default Value |
|---|---|---|
| npc.enable | Whether to enable NPC. This parameter is not supported in 1.18.0 or later versions. | true |
| npc.maxTaintedNode | The maximum number of nodes that NPC can add taints to when multiple nodes have the same fault, which minimizes the impact of fault isolation. The value can be an integer or a percentage. | 10% |
| npc.nodeAffinity | Node affinity of the controller. | N/A |
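When NPC isolates a node, the isolation takes effect as a taint on the node object. To review which nodes currently carry taints, you can list them with kubectl; the exact taint key that NPC applies is not documented on this page, so the command simply prints all taints.
# Print the taints on every node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'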
Viewing CCE Node Problem Detector Events
Events reported by the CCE Node Problem Detector add-on can be queried on the Nodes tab.
- Log in to the CCE console and click the cluster name to access the cluster console.
- In the navigation pane, choose Nodes. In the right pane, click the Nodes tab, locate the row containing the target node, and click View Events in the Operation column.
Figure 1 Viewing node events
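As an alternative to the console, the same events can be listed with kubectl, because the add-on reports them to the API server as events whose involved object is a node.
# List node-related events across all namespaces
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node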
Configuring CCE Node Problem Detector Metric Alarms
For CCE Node Problem Detector status-related check items, you can configure alarm rules to notify you of exceptions by SMS message or email. For details about how to create a custom alarm rule, see Configuring Alarms in Alarm Center.
To use CCE Node Problem Detector check items to configure alarm rules, you need to install the Cloud Native Cluster Monitoring add-on in the cluster and interconnect the add-on with an AOM instance.
Collecting Prometheus Metrics
The DaemonSet pods of CCE Node Problem Detector expose Prometheus metrics over port 19901. By default, the add-on pods are added with the annotation metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"19901","names":""}]'. You can build a Prometheus collector to identify and obtain CCE Node Problem Detector metrics from http://{{CCE-Node-Problem-Detector-pod-IP-address}}:{{CCE-Node-Problem-Detector-pod-port}}/metrics.
If the CCE Node Problem Detector add-on version is earlier than 1.16.5, the exposed port of Prometheus metrics is 20257.
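For a quick manual check, you can scrape the metrics endpoint of one detector pod directly. The label selector below (app=node-problem-detector) is an assumption; adjust it to match how the DaemonSet labels its pods in your cluster, and use port 20257 for add-on versions earlier than 1.16.5.
# Get the IP address of one detector pod (label selector is an assumption) and fetch its metrics
POD_IP=$(kubectl -n kube-system get pod -l app=node-problem-detector -o jsonpath='{.items[0].status.podIP}')
curl -s http://${POD_IP}:19901/metrics | head -n 20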
The metric data includes problem_counter and problem_gauge, as shown below.
# HELP problem_counter Number of times a specific type of problem has occurred.
# TYPE problem_counter counter
problem_counter{reason="DockerHung"} 0
problem_counter{reason="DockerStart"} 0
problem_counter{reason="EmptyDirVolumeGroupStatusError"} 0
...
# HELP problem_gauge Whether a specific type of problem is affecting the node or not.
# TYPE problem_gauge gauge
problem_gauge{reason="CNIIsDown",type="CNIProblem"} 0
problem_gauge{reason="CNIIsUp",type="CNIProblem"} 0
problem_gauge{reason="CRIIsDown",type="CRIProblem"} 0
problem_gauge{reason="CRIIsUp",type="CRIProblem"} 0
...
Helpful Links
After installing the CCE Node Problem Detector add-on, you can customize the node fault detection policies, such as filtering node detection scope or adjusting the fault threshold. For details, see Configuring Node Fault Detection Policies.
Release History
| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|---|---|---|---|
| 1.19.52 | v1.28 v1.29 v1.30 v1.31 v1.32 v1.33 v1.34 | Fixed some issues. | |
| 1.19.39 | v1.28 v1.29 v1.30 v1.31 v1.32 v1.33 v1.34 | CCE clusters v1.34 are supported. | |
| 1.19.37 | v1.27 v1.28 v1.29 v1.30 v1.31 v1.32 v1.33 | Supported RDMA network interface status detection. | |
| 1.19.33 | v1.27 v1.28 v1.29 v1.30 v1.31 v1.32 v1.33 | Fixed some issues. | |
| 1.19.29 | v1.27 v1.28 v1.29 v1.30 v1.31 v1.32 v1.33 | CCE clusters v1.33 are supported. | |
| 1.19.25 | v1.25 v1.27 v1.28 v1.29 v1.30 v1.31 v1.32 | CCE clusters v1.32 are supported. | |
| 1.19.20 | v1.25 v1.27 v1.28 v1.29 v1.30 v1.31 | Fixed some issues. | |
| 1.19.16 | v1.21 v1.23 v1.25 v1.27 v1.28 v1.29 v1.30 v1.31 | CCE clusters v1.31 are supported. | |
| 1.19.11 | v1.21 v1.23 v1.25 v1.27 v1.28 v1.29 v1.30 | Fixed some issues. | |
| 1.19.8 | v1.21 v1.23 v1.25 v1.27 v1.28 v1.29 v1.30 | | |
| 1.19.1 | v1.21 v1.23 v1.25 v1.27 v1.28 v1.29 | Fixed some issues. | |
| 1.19.0 | v1.21 v1.23 v1.25 v1.27 v1.28 | Fixed some issues. | |
| 1.18.48 | v1.21 v1.23 v1.25 v1.27 v1.28 | Fixed some issues. | |
| 1.18.46 | v1.21 v1.23 v1.25 v1.27 v1.28 | CCE clusters v1.28 are supported. | |
| 1.18.22 | v1.19 v1.21 v1.23 v1.25 v1.27 | None | |
| 1.18.14 | v1.19 v1.21 v1.23 v1.25 | | |
| 1.18.10 | v1.19 v1.21 v1.23 v1.25 | | |
| 1.17.4 | v1.17 v1.19 v1.21 v1.23 v1.25 | Optimized DiskHung check item. | |
| 1.17.3 | v1.17 v1.19 v1.21 v1.23 v1.25 | | |
| 1.17.2 | v1.17 v1.19 v1.21 v1.23 v1.25 | | |
| 1.16.4 | v1.17 v1.19 v1.21 v1.23 | | |
| 1.16.3 | v1.17 v1.19 v1.21 v1.23 | Added the function of checking the ResolvConf configuration file. | |
| 1.16.1 | v1.17 v1.19 v1.21 v1.23 | | |
| 1.15.0 | v1.17 v1.19 v1.21 v1.23 | | |
| 1.14.11 | v1.17 v1.19 v1.21 | CCE clusters v1.21 are supported. | |
| 1.14.5 | v1.17 v1.19 | Fixed the issue where monitoring metrics cannot be obtained. | |
| 1.14.4 | v1.17 v1.19 | | |
| 1.14.2 | v1.17 v1.19 | | |
| 1.13.8 | v1.15.11 v1.17 | | |
| 1.13.6 | v1.15.11 v1.17 | Fixed the issue where zombie processes are not reclaimed. | |
| 1.13.5 | v1.15.11 v1.17 | Added taints and tolerations. | |
| 1.13.2 | v1.15.11 v1.17 | Added resource limits and enhanced the detection capability of the CNI add-on. | |