Configuring a Node Pool
Constraints
The default node pool DefaultPool does not support the configuration management operations described below.
Configuration Management
CCE allows you to deeply customize Kubernetes parameter settings for core components in a cluster. For more information, see the kubelet documentation.
This function is supported only in clusters of v1.15 and later. It is not displayed for clusters earlier than v1.15.
- Log in to the CCE console.
- Click the cluster name to access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.
- Click Manage in the Operation column of the target node pool.
- On the Manage Components page on the right, change the values of the following Kubernetes parameters:
Table 1 kubelet parameters

cpu-manager-policy
- Description: CPU management policy. For details, see CPU Scheduling.
  - none: Pods cannot exclusively occupy CPUs. Select this value if you want a large pool of shareable CPU cores.
  - static: Pods can exclusively occupy CPUs. Select this value if your workload is sensitive to latency in CPU cache and scheduling. (A pod example that receives exclusive CPUs under this policy is sketched after this table.)
- Default value: none
- Modification: None
- Remarks: None

kube-api-qps
- Description: Queries per second (QPS) for communicating with kube-apiserver.
- Default value: 100
- Modification: None
- Remarks: None

kube-api-burst
- Description: Burst to use while talking with kube-apiserver.
- Default value: 100
- Modification: None
- Remarks: None

max-pods
- Description: Maximum number of pods managed by kubelet.
- Default value:
  - For a CCE standard cluster, determined by the maximum number of pods configured on the node.
  - For a CCE Turbo cluster, determined by the number of NICs on the node.
- Modification: None
- Remarks: None

pod-pids-limit
- Description: Maximum number of process IDs (PIDs) that each pod can use.
- Default value: -1
- Modification: None
- Remarks: None

with-local-dns
- Description: Whether to use the local IP address of the node as its ClusterDNS.
- Default value: false
- Modification: None
- Remarks: None

event-qps
- Description: QPS limit for event creation.
- Default value: 5
- Modification: None
- Remarks: None

allowed-unsafe-sysctls
- Description: Insecure system configurations (sysctls) that are allowed. Starting from v1.17.17, CCE enables pod security policies for kube-apiserver. Add the corresponding configurations to allowedUnsafeSysctls of a pod security policy for this setting to take effect. (This is not required for clusters earlier than v1.17.17.) For details, see Example of Enabling Unsafe Sysctls in Pod Security Policy.
- Default value: []
- Modification: None
- Remarks: None

over-subscription-resource
- Description: Whether to enable node oversubscription. If set to true, node oversubscription is enabled.
- Default value: false
- Modification: None
- Remarks: None

colocation
- Description: Whether to enable hybrid deployment on nodes. If set to true, hybrid deployment is enabled.
- Default value: false
- Modification: None
- Remarks: None

kube-reserved-mem, system-reserved-mem
- Description: Reserved node memory.
- Default value: Depends on node specifications. For details, see Node Resource Reservation Policy.
- Modification: None
- Remarks: The sum of kube-reserved-mem and system-reserved-mem must be less than half of the node memory.

topology-manager-policy
- Description: Topology management policy. Valid values:
  - restricted: kubelet accepts only pods that achieve optimal NUMA alignment on the requested resources.
  - best-effort: kubelet preferentially selects pods that implement NUMA alignment on CPU and device resources.
  - none (default): The topology management policy is disabled.
  - single-numa-node: kubelet allows only pods whose CPU and device resources are aligned to the same NUMA node.
- Default value: none
- Modification: None
- Remarks: NOTICE: Modifying topology-manager-policy or topology-manager-scope restarts kubelet, and the resource allocation of pods is recalculated based on the modified policy. Running pods may restart or even fail to receive any resources.

topology-manager-scope
- Description: Resource alignment granularity of the topology management policy. Valid values:
  - container (default)
  - pod
- Default value: container
- Modification: None
- Remarks: See the NOTICE for topology-manager-policy.

resolv-conf
- Description: DNS resolution configuration file used by containers.
- Default value: Null
- Modification: None
- Remarks: None

runtime-request-timeout
- Description: Timeout interval of all runtime requests except long-running requests (pull, logs, exec, and attach).
- Default value: 2m0s
- Modification: None
- Remarks: Available only in clusters of v1.21.10-r0, v1.23.8-r0, v1.25.3-r0, and later versions.

registry-pull-qps
- Description: Maximum number of image pulls per second.
- Default value: 5
- Modification: The value ranges from 1 to 50.
- Remarks: Available only in clusters of v1.21.10-r0, v1.23.8-r0, v1.25.3-r0, and later versions.

registry-burst
- Description: Maximum number of burst image pulls.
- Default value: 10
- Modification: The value ranges from 1 to 100 and must be greater than or equal to the value of registry-pull-qps.
- Remarks: Available only in clusters of v1.21.10-r0, v1.23.8-r0, v1.25.3-r0, and later versions.

serialize-image-pulls
- Description: When enabled, kubelet pulls only one image at a time.
- Default value: true
- Modification: None
- Remarks: Available only in clusters of v1.21.10-r0, v1.23.8-r0, v1.25.3-r0, and later versions.

evictionHard: memory.available
- Description: Hard eviction signal on the memory.available threshold.
- Default value: Fixed at 100 MiB.
- Modification: None
- Remarks: For details, see Node-pressure Eviction. NOTICE: Exercise caution when modifying the eviction thresholds. Improper configuration may cause pods to be frequently evicted, or to fail to be evicted when the node is overloaded. nodefs and imagefs correspond to the file system partitions used by kubelet and the container engine, respectively. (This remark applies to all eviction signals below.)

evictionHard: nodefs.available
- Description: Hard eviction signal on the nodefs.available threshold.
- Default value: 10%
- Modification: The value ranges from 1% to 99%.

evictionHard: nodefs.inodesFree
- Description: Hard eviction signal on the nodefs.inodesFree threshold.
- Default value: 5%
- Modification: The value ranges from 1% to 99%.

evictionHard: imagefs.available
- Description: Hard eviction signal on the imagefs.available threshold.
- Default value: 10%
- Modification: The value ranges from 1% to 99%.

evictionHard: imagefs.inodesFree
- Description: Hard eviction signal on the imagefs.inodesFree threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 1% to 99%.

evictionHard: pid.available
- Description: Hard eviction signal on the pid.available threshold.
- Default value: 10%
- Modification: The value ranges from 1% to 99%.

evictionSoft: memory.available
- Description: Soft eviction signal on the memory.available threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 100 MiB to 1,000,000 MiB and must be greater than the threshold of the corresponding hard eviction signal. Configure evictionSoftGracePeriod of the corresponding eviction signal to set the eviction grace period.

evictionSoft: nodefs.available
- Description: Soft eviction signal on the nodefs.available threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 1% to 99% and must be greater than the threshold of the corresponding hard eviction signal. Configure evictionSoftGracePeriod of the corresponding eviction signal to set the eviction grace period.

evictionSoft: nodefs.inodesFree
- Description: Soft eviction signal on the nodefs.inodesFree threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 1% to 99% and must be greater than the threshold of the corresponding hard eviction signal. Configure evictionSoftGracePeriod of the corresponding eviction signal to set the eviction grace period.

evictionSoft: imagefs.available
- Description: Soft eviction signal on the imagefs.available threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 1% to 99% and must be greater than the threshold of the corresponding hard eviction signal. Configure evictionSoftGracePeriod of the corresponding eviction signal to set the eviction grace period.

evictionSoft: imagefs.inodesFree
- Description: Soft eviction signal on the imagefs.inodesFree threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 1% to 99% and must be greater than the threshold of the corresponding hard eviction signal. Configure evictionSoftGracePeriod of the corresponding eviction signal to set the eviction grace period.

evictionSoft: pid.available
- Description: Soft eviction signal on the pid.available threshold.
- Default value: Left blank by default.
- Modification: The value ranges from 1% to 99% and must be greater than the threshold of the corresponding hard eviction signal. Configure evictionSoftGracePeriod of the corresponding eviction signal to set the eviction grace period.
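In Kubernetes, the static CPU manager policy grants exclusive CPUs only to containers in Guaranteed pods that request an integer number of CPUs. The following is a minimal sketch of such a pod, assuming kubectl access to the cluster; the pod name and image are examples only:

# Sketch: a Guaranteed pod with integer CPU requests (requests equal to limits),
# which is eligible for exclusive CPUs when cpu-manager-policy is static.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pinned-demo    # example name
spec:
  containers:
  - name: app
    image: nginx           # example image
    resources:
      requests:
        cpu: "2"           # integer CPU count
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
EOF

Pods that do not meet these conditions continue to share the node's CPU pool.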
Table 2 kube-proxy parameters

conntrack-min
- Description: Maximum number of connection tracking entries. To obtain the value configured on a node, run the following command: sysctl net.nf_conntrack_max
- Default value: 131072
- Modification: None

conntrack-tcp-timeout-close-wait
- Description: Wait time of a TCP connection in the CLOSE_WAIT state. To obtain the value configured on a node, run the following command: sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait
- Default value: 1h0m0s
- Modification: None
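To see how close a node is to the conntrack-min limit, compare the current number of tracked connections with the configured maximum on the node. A quick check, assuming shell access to the node (timeout values are reported in seconds):

# Current number of tracked connections
sysctl net.netfilter.nf_conntrack_count
# Configured maximum, corresponding to conntrack-min
sysctl net.nf_conntrack_max
# CLOSE_WAIT timeout, corresponding to conntrack-tcp-timeout-close-wait
sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait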
Table 3 Network component parameters (available only for CCE Turbo clusters)

nic-minimum-target
- Description: Minimum number of ENIs bound to a node in the node pool.
- Default value: 10
- Modification: None

nic-maximum-target
- Description: Maximum number of ENIs pre-bound to a node at the node pool level.
- Default value: 0
- Modification: None

nic-warm-target
- Description: Number of ENIs pre-bound to a node at the node pool level.
- Default value: 2
- Modification: None

nic-max-above-warm-target
- Description: Reclamation threshold for ENIs pre-bound to a node at the node pool level.
- Default value: 2
- Modification: None
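For illustration only, assuming the pre-binding behavior implied by the descriptions above: if the pods on a node currently occupy 12 ENIs and nic-warm-target is 2, roughly 14 ENIs stay bound to the node (and never fewer than nic-minimum-target). If the number of idle pre-bound ENIs later exceeds nic-warm-target plus nic-max-above-warm-target (2 + 2 = 4 with the defaults), the surplus ENIs are unbound and returned. The exact algorithm depends on the CCE Turbo networking implementation.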
Table 4 Pod security group in a node pool (available only for CCE Turbo clusters)

security_groups_for_nodepool
- Description: Default security group used by pods in the node pool. Enter a security group ID. If this parameter is not set, the default security group of the cluster container network is used. A maximum of five security group IDs can be specified at a time, separated by semicolons (;). The priority of this security group is lower than that of the security group configured in Security Groups.
- Default value: None
- Modification: None
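Table 5 and Table 6 apply to node pools that use Docker and containerd, respectively. To check which container engine the nodes in a pool are running, and therefore which table applies, list the nodes with their runtime versions:

kubectl get nodes -o wide
# The CONTAINER-RUNTIME column shows, for example, docker://20.10.x or containerd://1.6.x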
Table 5 Docker parameters (available only for node pools that use Docker)

native-umask
- Description: Corresponds to the Docker option --exec-opt native.umask.
- Default value: normal
- Modification: Cannot be changed.

docker-base-size
- Description: Corresponds to the Docker option --storage-opts dm.basesize.
- Default value: 0
- Modification: Cannot be changed.

insecure-registry
- Description: Address of an insecure image registry.
- Default value: false
- Modification: Cannot be changed.

limitcore
- Description: Maximum size of a core file in a container, in bytes. If not specified, the value is infinity.
- Default value: 5368709120
- Modification: None

default-ulimit-nofile
- Description: Limit on the number of file handles in a container. (A verification example is provided after this table.)
- Default value: {soft}:{hard}
- Modification: The value cannot exceed the value of the kernel parameter nr_open and cannot be a negative number. You can obtain nr_open by running the following command: sysctl -a | grep nr_open

image-pull-progress-timeout
- Description: If an image cannot be pulled before this timeout expires, the pull is canceled.
- Default value: 1m0s
- Modification: Supported in v1.25.3-r0 and later versions.
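To verify the file handle limit that containers actually receive after changing default-ulimit-nofile, check the limits inside a running container. This is a quick check, assuming kubectl access; replace <pod-name> with a pod running on a node in the pool (the exact ulimit flags depend on the shell in the image):

# Soft and hard nofile limits seen inside the container
kubectl exec <pod-name> -- sh -c 'ulimit -Sn; ulimit -Hn'
# Upper bound allowed by the node kernel (run on the node)
sysctl -a | grep nr_open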
Table 6 containerd parameters (available only for node pools that use containerd)

devmapper-base-size
- Description: Available data space of a single container.
- Default value: 0
- Modification: Cannot be changed.

limitcore
- Description: Maximum size of a core file in a container, in bytes. If not specified, the value is infinity.
- Default value: 5368709120
- Modification: None

default-ulimit-nofile
- Description: Limit on the number of file handles in a container.
- Default value: 1048576
- Modification: The value cannot exceed the value of the kernel parameter nr_open and cannot be a negative number. You can obtain nr_open by running the following command: sysctl -a | grep nr_open

image-pull-progress-timeout
- Description: If an image cannot be pulled before this timeout expires, the pull is canceled.
- Default value: 1m0s
- Modification: Supported in v1.25.3-r0 and later versions.
- Click OK.
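After the modification is delivered, one way to confirm that kubelet on a node in the pool is running with the new values is to read its live configuration through the API server. This assumes kubectl access; replace <node-name> with the name of a node in the node pool:

# Dump the running kubelet configuration of one node
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz"
# For example, filter a few of the parameters described above
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | python3 -m json.tool | grep -E 'maxPods|podPidsLimit|eventRecordQPS'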