Managing a Node Pool
Notes and Constraints
The default node pool DefaultPool does not support the following management operations.
Configuring Kubernetes Parameters
CCE allows you to extensively customize the Kubernetes parameter settings of core components in a cluster. For more information, see kubelet.
This function is supported only in clusters of v1.15 and later. It is not displayed for clusters earlier than v1.15.
- Log in to the CCE console.
- Click the cluster name to open its details page, choose Nodes on the left, and click the Node Pools tab on the right.
- Choose More > Manage next to the node pool name.
- On the Manage Component page on the right, change the values of the following Kubernetes parameters:
Table 1 Kubernetes parameters

| Component | Parameter | Description | Default Value | Remarks |
| --- | --- | --- | --- | --- |
| docker | native-umask | `--exec-opt native.umask` | normal | Cannot be changed. |
| docker | docker-base-size | `--storage-opts dm.basesize` | 10G | Cannot be changed. |
| docker | insecure-registry | Address of an insecure image registry | false | Cannot be changed. |
| docker | limitcore | Maximum size of a core file in a container | 5368709120 | - |
| docker | default-ulimit-nofile | Limit on the number of file handles in a container | {soft}:{hard} | - |
| kube-proxy | conntrack-min | sysctl -w net.nf_conntrack_max | 131072 | The value can be modified during the node pool lifecycle. |
| kube-proxy | conntrack-tcp-timeout-close-wait | sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait | 1h0m0s | The value can be modified during the node pool lifecycle. |
| kubelet | cpu-manager-policy | `--cpu-manager-policy` | none | The value can be modified during the node pool lifecycle. |
| kubelet | kube-api-qps | Queries per second (QPS) to use when communicating with kube-apiserver | 100 | The value can be modified during the node pool lifecycle. |
| kubelet | kube-api-burst | Burst to use when communicating with kube-apiserver | 100 | The value can be modified during the node pool lifecycle. |
| kubelet | max-pods | Maximum number of pods managed by kubelet | 110 | The value can be modified during the node pool lifecycle. |
| kubelet | pod-pids-limit | PID limit in Kubernetes | -1 | The value can be modified during the node pool lifecycle. |
| kubelet | with-local-dns | Whether to use the local IP address as the ClusterDNS of the node | false | The value can be modified during the node pool lifecycle. |
| kubelet | allowed-unsafe-sysctls | Insecure system configurations allowed. Starting from v1.17.17, CCE enables pod security policies for kube-apiserver. You need to add the corresponding configurations to allowedUnsafeSysctls of a pod security policy for this setting to take effect. (This configuration is not required for clusters earlier than v1.17.17.) For details, see Example of Enabling Unsafe Sysctls in Pod Security Policy. A sample pod manifest is shown after this procedure. | [] | The value can be modified during the node pool lifecycle. |
| kubelet | over-subscription-resource | Whether to enable node oversubscription. This parameter is visible only for node pools of the BMS type. If it is set to true, node oversubscription is enabled. For details about node oversubscription, see Hybrid Deployment of Online and Offline Jobs. | - | - |
| eni | nic-multiqueue | Number of NIC buffer queues | 4 | Supported only for BMSs in the sharing resource pool. |
| eni | nic-threshold | High and low thresholds for NIC buffers | 0.3:0.6 | Supported only for BMSs in the sharing resource pool. |
| yangtse-agent | security_groups_for_nodepool | Security group to which the pods in the node pool belong | - | This parameter is available only for CCE Turbo clusters. |
| containerd (available only for node pools that use containerd) | devmapper-base-size | Available data space of a single container | 10G | Cannot be changed. |
| containerd | limitcore | Maximum size of a core file in a container | 5368709120 | - |
| containerd | default-ulimit-nofile | Limit on the number of file handles in a container | 1048576 | - |
- Click OK.
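The allowed-unsafe-sysctls parameter in Table 1 is consumed through the standard Kubernetes sysctl mechanism. The following is a minimal sketch of a pod that requests an unsafe sysctl; net.core.somaxconn is only an illustrative choice and must be listed in allowed-unsafe-sysctls (and, for clusters of v1.17.17 or later, permitted by allowedUnsafeSysctls in a pod security policy) before the pod can start.

```yaml
# Hypothetical example (not taken from the CCE documentation): a pod that requests
# the unsafe sysctl net.core.somaxconn. kubelet accepts the request only if the
# sysctl is listed in allowed-unsafe-sysctls and, for clusters of v1.17.17 or later,
# is also permitted by allowedUnsafeSysctls in a pod security policy.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
      - name: net.core.somaxconn
        value: "1024"
  containers:
    - name: app
      image: nginx:alpine
```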
Editing a Node Pool
- Log in to the CCE console.
- Click the cluster name to open its details page, choose Nodes on the left, and click the Node Pools tab on the right.
- Click Edit next to the name of the target node pool. In the Edit Node Pool dialog box, edit the following parameters:
Table 2 Node pool parameters

- Name: Name of the node pool.
- Nodes: Modify the number of nodes based on service requirements.
- Autoscaler: Autoscaler is disabled by default. After you enable it, nodes in the node pool are automatically created or deleted based on service requirements.
  - Maximum Nodes and Minimum Nodes: Set the maximum and minimum numbers of nodes to keep scaling within a proper range.
  - Priority: A larger value indicates a higher priority. For example, if this parameter is set to 1 for node pool A and 4 for node pool B, B has a higher priority than A and auto scaling is triggered for B first. If multiple node pools have the same priority, for example 2, the node pools are not prioritized and the system scales based on the minimum resource waste principle.
  - Scaling-In Cooling Interval: Period, in minutes, during which nodes newly added to the node pool cannot be scaled in.
  If Autoscaler is enabled, install the autoscaler add-on to use the autoscaler feature.
- Kubernetes Label: Click Add to set key-value pairs attached to Kubernetes objects (such as pods). A maximum of 10 labels can be added. Labels can be used to distinguish nodes. With workload affinity settings, pods can be scheduled onto specified nodes. For more information, see Labels and Selectors.
- Resource Tag: You can add resource tags to classify resources. You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency. For details, see Creating Predefined Tags. CCE automatically creates the "CCE-Dynamic-Provisioning-Node=node id" tag.
- Taints: This field is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters:
  - Key: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.
  - Value: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).
  - Effect: Available options are NoSchedule, PreferNoSchedule, and NoExecute.
  For details, see Managing Node Taints. A sample workload manifest that uses a node pool's Kubernetes labels and taints is shown after this procedure.
- Click OK.
In the node pool list, the node pool status becomes Scaling. After the status changes to Completed, the node pool parameters are modified successfully. The modified configuration will be synchronized to all nodes in the node pool.
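The Kubernetes labels and taints configured in Table 2 are standard node labels and taints, so workloads can target a node pool with node affinity (or a node selector) and tolerations. The following is a minimal sketch assuming a node pool whose nodes carry the hypothetical label pool=gpu-pool and the hypothetical taint dedicated=gpu with the NoSchedule effect.

```yaml
# Hypothetical example (label and taint names are placeholders): a Deployment that
# targets nodes of a node pool labeled pool=gpu-pool and tolerates the taint
# dedicated=gpu:NoSchedule configured on that node pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gpu-workload
  template:
    metadata:
      labels:
        app: gpu-workload
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: pool            # Kubernetes label added to the node pool
                    operator: In
                    values:
                      - gpu-pool
      tolerations:
        - key: dedicated                 # taint added to the node pool
          operator: Equal
          value: gpu
          effect: NoSchedule
      containers:
        - name: app
          image: nginx:alpine
```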
Deleting a Node Pool
Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools. If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable.
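For illustration, the following minimal sketch shows a pod pinned to a hypothetical node pool label pool=batch-pool; if the node pool that provides this label is deleted and no other node in the cluster carries the label, the pod remains in the Pending state.

```yaml
# Hypothetical example (label name is a placeholder): a pod pinned to nodes labeled
# pool=batch-pool. If no node in the cluster carries this label, the pod stays
# Pending because its node selector cannot be satisfied.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    pool: batch-pool
  containers:
    - name: app
      image: nginx:alpine
```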
- Log in to the CCE console.
- Click the cluster name to open its details page, choose Nodes on the left, and click the Node Pools tab on the right.
- Choose More > Delete next to a node pool name to delete the node pool.
- Read the precautions in the Delete Node Pool dialog box.
- Enter DELETE in the text box and click Yes to confirm that you want to continue the deletion.
Copying a Node Pool
You can copy the configuration of an existing node pool to create a new node pool on the CCE console.
- Log in to the CCE console.
- Click the cluster name to open its details page, choose Nodes on the left, and click the Node Pools tab on the right.
- Choose More > Copy next to a node pool name to copy the node pool.
- The configuration of the selected node pool is replicated to the Create Node Pool page. You can edit the configuration as required and click Next: Confirm.
- On the Confirm page, confirm the node pool configuration and click Create Now. Then, a new node pool is created based on the edited configuration.