Node Pool Overview
Introduction
CCE introduces node pools to help you better manage nodes in Kubernetes clusters. A node pool contains one node or a group of nodes with identical configurations in a cluster.
You can create custom node pools on the CCE console. With node pools, you can quickly create, manage, and destroy nodes without affecting the cluster. All nodes in a custom node pool share the same type and configurations. You cannot configure a single node in a node pool. Any change applies to every node in the node pool.
You can also use node pools for auto scaling (supported only by pay-per-use node pools).
- When a pod in a cluster cannot be scheduled due to insufficient resources, scale-out can be automatically triggered.
- When there is an idle node or a monitoring metric threshold is met, scale-in can be automatically triggered.
This section describes how node pools work in CCE and how to create and manage node pools.
Node Pool Architecture
All nodes in a pool typically share:
- Node OS
- Node login mode
- Node container runtime
- Enterprise project
- Startup parameters of Kubernetes components on a node
- Custom startup script of a node
- Kubernetes labels and taints (see the example after this list)
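Taints and labels configured on a node pool apply to every node in the pool. The following is a minimal sketch, not taken from the CCE documentation, assuming a hypothetical taint with key pool, value dedicated, and effect NoSchedule has been configured on a node pool; only pods that declare a matching toleration can be scheduled onto its nodes:

apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo              # hypothetical pod name used for illustration
spec:
  tolerations:
  - key: "pool"                      # hypothetical taint key configured on the node pool
    operator: "Equal"
    value: "dedicated"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
  - name: default-secret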
CCE provides the following extended attributes for node pools:
- Node pool OS
- Maximum number of pods on each node in a node pool
Notes on a Node Pool Upgrade
For clusters of v1.21.11-r0, v1.23.9-r0, v1.25.4-r0, or later versions, new node pools support the creation of both pay-per-use and yearly/monthly nodes by default, providing a better resource management experience.
The highlights of new node pools are as follows:
- Both pay-per-use and yearly/monthly nodes can be created in one node pool, and yearly/monthly nodes in the same pool can have different required durations.
- Existing pay-per-use and yearly/monthly nodes can be accepted into a node pool for management.
- Auto scaling can be enabled, and various scaling policies can be configured for more efficient, flexible resource management.
The following table lists the changes after an existing node pool is upgraded.
| Billing Mode of the Original Node Pool | Changes After an Upgrade |
|---|---|
| Pay-per-use | All capabilities of the original node pool are automatically inherited. Yearly/monthly nodes created in the new node pool cannot be manually scaled in; they can only be deleted or unsubscribed from. |
| Yearly/Monthly | The original node pool can be upgraded without interruption and without affecting the existing nodes in the node pool. |
Description of DefaultPool
DefaultPool is not a real node pool. It only groups the nodes that do not belong to any custom node pool, that is, nodes created directly on the console or by calling APIs. DefaultPool does not support any of the functions of user-created node pools, including scaling and parameter configuration. It cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated.
Application Scenarios
When a large-scale cluster is required, you are advised to use node pools to manage nodes.
The following table describes multiple scenarios of large-scale cluster management and the functions of node pools in each scenario.
| Scenario | Function |
|---|---|
| Multiple heterogeneous nodes (with different models and configurations) in the cluster | Nodes can be grouped into different pools for management. |
| Frequent node scaling required in a cluster | Node pools support auto scaling to dynamically add or reduce nodes. |
| Complex application scheduling rules in a cluster | Node pool tags can be used to quickly specify service scheduling rules. |
Functions and Precautions
| Function | Description | Precaution |
|---|---|---|
| Creating a node pool | Add a node pool. | It is recommended that a cluster contain no more than 100 node pools. |
| Deleting a node pool | Deleting a node pool deletes the nodes in the pool. Pods on these nodes are automatically migrated to available nodes in other node pools. | If pods in the node pool have a specific node selector and no other node in the cluster satisfies it, the pods will become unschedulable. |
| Enabling auto scaling for a node pool | After auto scaling is enabled, nodes are automatically created or deleted in the node pool based on the cluster loads. | Do not store important data on nodes in the node pool, because the nodes may be deleted during a scale-in. Data on deleted nodes cannot be restored. |
| Disabling auto scaling for a node pool | After auto scaling is disabled, the number of nodes in the node pool no longer changes automatically with the cluster loads. | None |
| Adjusting the size of a node pool | The number of nodes in a node pool can be directly adjusted. If the number is reduced, nodes are randomly removed from the pool. | After auto scaling is enabled, you are not advised to manually adjust the node pool size. |
| Modifying node pool configurations | You can change the node pool name and number of nodes; add or delete Kubernetes labels, resource tags, and taints; and adjust configurations such as the disk, OS, and container engine of the node pool. | Deleted or added Kubernetes labels and taints apply to all nodes in the node pool, which may cause pod re-scheduling. Exercise caution when performing this operation. |
| Removing a node from a node pool | Nodes in a node pool can be migrated to the default node pool of the same cluster. | Nodes in the default node pool cannot be migrated to other node pools, and nodes in a user-created node pool cannot be migrated to other user-created node pools. |
| Copying a node pool | You can copy the configuration of an existing node pool to create a new node pool. | None |
| Setting Kubernetes parameters | You can configure core components with fine granularity. | |
Deploying a Workload in a Specified Node Pool
All nodes within a node pool carry the cce.cloud.com/cce-nodepool label. To ensure that a workload is scheduled onto nodes from a specific node pool, you can use the nodeSelector field in the workload settings. An example is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        cce.cloud.com/cce-nodepool: "nodepool_name"   # The label value is the node pool name.
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx
      imagePullSecrets:
      - name: default-secret
For more complex scheduling, you can define custom affinity rules, such as hard constraints, where scheduling occurs only if all specified conditions are met, and soft constraints, where scheduling may proceed even if some conditions are not met. For details, see Configuring Node Affinity Scheduling (nodeAffinity).
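The following is a minimal illustrative sketch, not taken from the CCE documentation, that combines both kinds of constraints using the cce.cloud.com/cce-nodepool label. The hard constraint restricts scheduling to the placeholder node pools nodepool-a and nodepool-b, and the soft constraint expresses a preference for nodepool-a:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-affinity                                       # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:    # hard constraint: must be satisfied
            nodeSelectorTerms:
            - matchExpressions:
              - key: cce.cloud.com/cce-nodepool
                operator: In
                values:
                - "nodepool-a"                               # placeholder node pool names
                - "nodepool-b"
          preferredDuringSchedulingIgnoredDuringExecution:   # soft constraint: preferred, not mandatory
          - weight: 100
            preference:
              matchExpressions:
              - key: cce.cloud.com/cce-nodepool
                operator: In
                values:
                - "nodepool-a"
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: default-secret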
Additionally, you can specify resource requests for containers to ensure workloads are scheduled only on nodes that meet the required resource criteria. For details, see Resource Management for Pods and Containers. For example, if a workload pod requests four CPU cores, it will not be scheduled on a node that offers only two.
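As an illustration of the four-core example above, the pod below requests 4 CPU cores, so the scheduler only places it on a node with at least that much allocatable CPU. This is a minimal sketch; the resource values and pod name are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: resource-request-demo        # hypothetical pod name used for illustration
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resources:
      requests:
        cpu: "4"                     # scheduled only on nodes with at least 4 allocatable CPU cores
        memory: "8Gi"
      limits:
        cpu: "4"
        memory: "8Gi"
  imagePullSecrets:
  - name: default-secret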
Helpful Links
You can log in to the CCE console and refer to the related sections to perform operations on node pools.