Updating a Node Pool
Notes and Constraints
- Only clusters of v1.19 or later support the modification of the container engine, OS, system/data disk size, data disk space allocation, and pre-installation/post-installation script configuration.
- The modification of container engine, pre-installation and post-installation scripts, or OS of a node pool takes effect only on new nodes. To synchronize the modification onto existing nodes, manually reset the existing nodes.
- The modification of data disk space allocation and the system/data disk size of a node pool takes effect only for new nodes. The configuration cannot be synchronized even if the existing nodes are reset.
- Changes to Kubernetes labels/taints in a node pool will be automatically synchronized to existing nodes if the corresponding options under Synchronization for Existing Nodes are selected. You do not need to reset these nodes.
Updating a Node Pool
- Log in to the CCE console.
- Click the cluster name to access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.
- Click Update next to the name of the node pool to be edited. Configure the parameters on the displayed Update Node Pool page.
Basic Settings
Table 1 Basic settings
Node Pool Name
Name of the node pool.
Configurations
Table 2 Node configuration parameters
Container Engine
The container engines supported by CCE include Docker and containerd, which may vary depending on cluster types, cluster versions, and OSs. Select a container engine based on the information displayed on the CCE console. For details, see Mapping between Node OSs and Container Engines.
NOTE: After the container engine is modified, the modification automatically takes effect on newly added nodes. For existing nodes, manually reset the nodes for the modification to take effect.
OS
Select an OS type. Different types of nodes support different OSs.
- Public image: Select a public image for the node.
- Private image: Select a private image for the node.
NOTE:
- Service container runtimes share the node's kernel and underlying calls. To ensure compatibility, select a node OS whose Linux distribution version is the same as or close to that of the final service container image.
- After the OS is modified, the modification automatically takes effect on newly added nodes. Manually reset existing nodes for the modification to take effect.
Storage Settings
Table 3 Configuration parameters
System Disk
System disk used by the node OS. The disk size ranges from 40 GiB to 1024 GiB. The default value is 50 GiB.
NOTE: After the system disk configuration is modified, the modification takes effect only on newly added nodes. The configuration cannot be synchronized to existing nodes even if they are reset.
Data Disk
At least one data disk is required for the container runtime and kubelet. This data disk cannot be deleted or unmounted; otherwise, the node will be unavailable.
- First data disk: used for container runtime and kubelet components. The disk size ranges from 20 GiB to 32768 GiB. The default value is 100 GiB.
- Other data disks: You can set the data disk size to a value ranging from 10 GiB to 32768 GiB. The default value is 100 GiB.
NOTE: After the data disk configuration is modified, the modification takes effect only on newly added nodes. The configuration cannot be synchronized to existing nodes even if they are reset.
Advanced Settings
Expand the area and configure the following parameters:
- Data Disk Space Allocation: allocates space for container engines, images, and ephemeral storage for them to run properly. For details about how to allocate data disk space, see Data Disk Space Allocation.
NOTE: After the data disk space allocation configuration is modified, the modification takes effect only on newly added nodes. The configuration cannot be synchronized to existing nodes even if they are reset.
- Adding data disks: A maximum of four data disks can be added. By default, a raw disk is created without any processing. You can also click Expand and select one of the following options:
- Default: By default, a raw disk is created without any processing.
- Mount Disk: The data disk is attached to a specified directory.
- Use as PV: applicable when there is a high performance requirement on PVs. The node.kubernetes.io/local-storage-persistent label is added to a node with a PV configured, and its value is linear or striped (the write mode); see the label check after this list.
- Use as ephemeral volume: applicable when there is a high performance requirement on EmptyDir.
- Local Disk Description: If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk. Local disks may break down and do not guarantee data reliability. Store your service data in EVS disks, which are more reliable than local disks.
Advanced Settings
Table 4 Advanced settings
Kubernetes Label
A key-value pair added to a Kubernetes object (such as a pod). After specifying a label, click Add. A maximum of 20 labels can be added.
Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see Labels and Selectors.
NOTE: Modified Kubernetes labels automatically take effect on new nodes, and on existing nodes if Kubernetes labels is selected under Synchronization for Existing Nodes.
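For example, once a node pool label has propagated to its nodes, a workload can target those nodes with a nodeSelector. A minimal sketch, assuming kubectl access; the label pool=backend and the image are hypothetical:

```bash
# Confirm which nodes carry the node pool label (hypothetical key/value).
kubectl get nodes -l pool=backend

# Run a pod that is scheduled only onto nodes with that label.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    pool: backend        # must match the node pool's Kubernetes label
  containers:
  - name: app
    image: nginx:alpine
EOF
```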
Taint
This field is left blank by default. You can add taints to configure node anti-affinity. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters:
- Key: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.
- Value: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).
- Effect: Available options are NoSchedule, PreferNoSchedule, and NoExecute.
For details, see Managing Node Taints.
NOTE: Modified taints automatically take effect on new nodes, and on existing nodes if Taints is selected under Synchronization for Existing Nodes.
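To see how a taint keeps pods off a node pool unless they opt in, the sketch below inspects a node's taints and adds a matching toleration. The taint dedicated=db:NoSchedule and the image are hypothetical examples:

```bash
# Inspect the taints currently set on a node in the pool.
kubectl get node <node-name> -o jsonpath='{.spec.taints}'

# Run a pod that tolerates the hypothetical dedicated=db:NoSchedule taint.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: db
    effect: NoSchedule   # must match the taint's effect
  containers:
  - name: app
    image: nginx:alpine
EOF
```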
Synchronization for Existing Nodes
After the options are selected, changes to Kubernetes labels/taints in a node pool will be synchronized to existing nodes in the node pool.
NOTE: When you update a node pool, pay attention to the following if you change the selection of Kubernetes labels or Taints:
- When these options are deselected, the Kubernetes labels/taints of the existing and new nodes in the node pool may be inconsistent. If service scheduling relies on node labels or taints, the scheduling may fail or the node pool may fail to scale.
- When these options are selected:
- If you have modified or added labels or taints in the node pool, the modifications will be automatically synchronized to existing nodes, typically within 10 minutes, after Kubernetes labels or Taints is selected.
- If you have deleted a label or taint in the node pool, you must manually delete the label or taint on the node list page after Kubernetes labels or Taints is selected.
- If you have manually changed the key or effect of a taint on an existing node, a new taint will be added to that node after Kubernetes labels or Taints is selected, and the manually modified taint remains. This is because Kubernetes natively identifies a taint by its key and effect pair: two taints that differ in key or in effect are treated as two separate taints even if their other fields match, as shown in the sketch below.
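The following minimal sketch illustrates that behavior, assuming kubectl access; the key app=v1 is hypothetical. Adding the same key and value with two different effects produces two coexisting taints:

```bash
# Kubernetes identifies a taint by its key + effect pair, so these two
# commands create two separate taints on the same node.
kubectl taint nodes <node-name> app=v1:NoSchedule
kubectl taint nodes <node-name> app=v1:NoExecute

# Both taints now appear in the node spec.
kubectl get node <node-name> -o jsonpath='{.spec.taints}'
```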
New Node Scheduling
Default scheduling policy for the nodes newly added to a node pool. If you select Unschedulable, newly created nodes in the node pool will be labeled as unschedulable. In this way, you can perform some operations on the nodes before pods are scheduled to these nodes.
Scheduled Scheduling: After scheduled scheduling is enabled, new nodes automatically become schedulable when the custom time expires.
- Disabled: By default, scheduled scheduling is not enabled for new nodes. To manually enable this function, go to the node list. For details, see Configuring a Node Scheduling Policy in One-Click Mode.
- Custom: Specify how long new nodes remain unschedulable. The value ranges from 0 to 99, in minutes.
NOTE:
- If auto scaling of node pools is also required, ensure that the scheduled scheduling time is less than 15 minutes. If a node added by Autoscaler remains unschedulable for more than 15 minutes, Autoscaler determines that the scale-out failed and triggers another scale-out. Additionally, if the node remains unschedulable for more than 20 minutes, Autoscaler will scale in the node.
- After this function is enabled, nodes will be tainted with node.cloudprovider.kubernetes.io/uninitialized during a node pool creation or update.
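A minimal way to observe this from the CLI, assuming kubectl access: list the taints on each node and, if needed, remove the taint manually to make a node schedulable before the timeout expires:

```bash
# Show each node's taints; new nodes carry the uninitialized taint until
# the scheduled time expires.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# Remove the taint from one node manually (the trailing "-" deletes it).
kubectl taint nodes <node-name> node.cloudprovider.kubernetes.io/uninitialized-
```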
Edit key pair
Only node pools that use key pairs for login support key pair editing. You can select another key pair.
NOTE: The edited key pair automatically takes effect on newly added nodes. For existing nodes, manually reset the nodes for the modification to take effect.
Pre-installation Command
Pre-installation script command, in which Chinese characters are not allowed. The script command will be Base64-encoded.
The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed.
NOTE: The modified pre-installation command automatically takes effect on newly added nodes. For existing nodes, manually reset the nodes for the modification to take effect.
Post-installation Command
Post-installation script command, in which Chinese characters are not allowed. The script command will be Base64-encoded.
The script will be executed after Kubernetes software is installed, which does not affect the installation.
NOTE: The modified post-installation command automatically takes effect on newly added nodes. For existing nodes, manually reset the nodes for the modification to take effect.
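For reference, a short, hedged example of what a post-installation script might look like; the commands are illustrative only and are not prescribed by CCE:

```bash
#!/bin/bash
# Illustrative post-installation script: it runs after Kubernetes software
# is installed on the node, so failures here do not block the installation.
set -e
sysctl -w vm.max_map_count=262144              # example kernel tuning only
echo "post-install finished at $(date)" >> /var/log/node-postinstall.log
```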
- After the configuration, click OK.
After the node pool parameters are updated, go to the Nodes page to check whether the nodes in the node pool are updated. You can reset existing nodes to synchronize the configuration updates of the node pool.