Managing a Node Pool

Notes and Constraints

The default node pool DefaultPool does not support the following management operations.

Configuring Kubernetes Parameters

CCE allows you to customize Kubernetes parameter settings of core components in a cluster. For more information, see kubelet.

This function is supported only in clusters of v1.15 and later. It is not displayed for clusters earlier than v1.15.

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
  2. In the upper right corner of the displayed page, select a cluster to filter node pools by cluster.
  3. Click Configuration next to the node pool name.

    Figure 1 Node pool configuration

  4. On the Configuration page on the right, change the values of the following Kubernetes parameters:

    Table 1 Kubernetes parameters

    | Component | Parameter | Description | Default Value | Remarks |
    |---|---|---|---|---|
    | docker | native-umask | `--exec-opt native.umask` | normal | Cannot be changed. |
    | docker | docker-base-size | `--storage-opts dm.basesize` | 10G | Cannot be changed. |
    | docker | insecure-registry | Address of an insecure image registry | false | Cannot be changed. |
    | docker | limitcore | Maximum size of a core file in a container, in bytes | 5368709120 | - |
    | docker | default-ulimit-nofile | Limit on the number of file handles in a container | {soft}:{hard} | - |
    | kube-proxy | conntrack-min | sysctl -w net.nf_conntrack_max | 131072 | Can be modified at any time in the node pool lifecycle. |
    | kube-proxy | conntrack-tcp-timeout-close-wait | sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait | 1h0m0s | Can be modified at any time in the node pool lifecycle. |
    | kubelet | cpu-manager-policy | `--cpu-manager-policy` | none | Can be modified at any time in the node pool lifecycle. |
    | kubelet | kube-api-qps | Queries per second (QPS) for communicating with kube-apiserver | 100 | Can be modified at any time in the node pool lifecycle. |
    | kubelet | kube-api-burst | Burst for communicating with kube-apiserver | 100 | Can be modified at any time in the node pool lifecycle. |
    | kubelet | max-pods | Maximum number of pods managed by kubelet | 110 | Can be modified at any time in the node pool lifecycle. |
    | kubelet | pod-pids-limit | PID limit in Kubernetes | -1 | Can be modified at any time in the node pool lifecycle. |
    | kubelet | with-local-dns | Whether to use the local IP address as the ClusterDNS of the node | false | Can be modified at any time in the node pool lifecycle. |
    | kubelet | allowed-unsafe-sysctls | Insecure sysctls allowed. Starting from v1.17.17, CCE enables pod security policies for kube-apiserver, so you must add the corresponding configurations to allowedUnsafeSysctls of a pod security policy for this setting to take effect (not required for clusters earlier than v1.17.17). For details, see Example of Enabling Unsafe Sysctls in Pod Security Policy; a hedged example also follows these steps. | [] | Can be modified at any time in the node pool lifecycle. |

  5. Click OK.
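
For orientation, the kubelet parameters in Table 1 correspond to fields in the upstream KubeletConfiguration format. The following is a minimal sketch of the Table 1 defaults expressed in that format; on CCE you change these values through the console rather than by editing a file, so this mapping is shown only to clarify what each parameter controls.

```yaml
# Sketch: upstream KubeletConfiguration equivalent of the Table 1 kubelet defaults.
# Shown for orientation only; CCE manages these values through the console.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: none     # cpu-manager-policy
kubeAPIQPS: 100            # kube-api-qps
kubeAPIBurst: 100          # kube-api-burst
maxPods: 110               # max-pods
podPidsLimit: -1           # pod-pids-limit (-1 means no limit)
allowedUnsafeSysctls: []   # allowed-unsafe-sysctls
```

If you add entries to allowed-unsafe-sysctls, pods request those sysctls through securityContext.sysctls. Below is a minimal sketch; the sysctl name net.core.somaxconn and the pod and image names are illustrative only.

```yaml
# Pod requesting an unsafe sysctl. The kubelet admits this pod only if its
# allowed-unsafe-sysctls includes net.core.somaxconn, and on v1.17.17+ the
# pod security policy's allowedUnsafeSysctls must also list it.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo              # illustrative name
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn   # must match an allowed sysctl
      value: "1024"
  containers:
  - name: app
    image: nginx                 # placeholder image
```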

Editing a Node Pool

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
  2. In the upper right corner of the displayed page, select a cluster to filter node pools by cluster.
Click Edit next to the name of the target node pool. In the Edit Node Pool dialog box, edit the following parameters:

    Table 2 Node pool parameters

    • Name: Name of the node pool.
    • Nodes: Modify the number of nodes based on service requirements.
    • Autoscaler: Disabled by default. After you enable autoscaler, nodes in the node pool are automatically created or deleted based on service requirements.
      • Maximum Nodes and Minimum Nodes: Set the maximum and minimum number of nodes so that the number of nodes to be scaled stays within a proper range.
      • Priority: A larger value indicates a higher priority. For example, if this parameter is set to 1 for node pool A and 4 for node pool B, B has a higher priority than A, and auto scaling is triggered for B first. If multiple node pools have the same priority, for example, 2, they are not prioritized, and the system scales the pool that minimizes resource waste.
      • Scaling-In Cooling Interval: Set this interval in minutes. It is the period during which nodes newly added to the node pool cannot be scaled in.
      If Autoscaler is enabled, install the autoscaler add-on to use the autoscaler feature.
    • Taints: Left blank by default. Taints allow nodes to repel a set of pods. You can add a maximum of 10 taints to each node pool. Each taint contains the following parameters:
      • Key: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.
      • Value: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).
      • Effect: Available options are NoSchedule, PreferNoSchedule, and NoExecute.
      NOTICE: If taints are used, you must configure tolerations in the YAML files of pods (a toleration sketch follows these steps). Otherwise, scale-up may fail or pods cannot be scheduled onto the added nodes.
    • K8S Labels: Key/value pairs attached to objects such as pods. Labels specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. For more information, see Labels and Selectors.
    • Resource Tags: It is recommended that you use TMS's predefined tag function to add the same tag to different cloud resources. Predefined tags are visible to all service resources that support tagging, which improves tag creation and migration efficiency. Tag changes do not affect the node.

  4. After the configuration is complete, click Save.

    In the node pool list, the node pool status becomes Scaling. After the status changes to Completed, the node pool parameters are modified successfully. The modified configuration will be synchronized to all nodes in the node pool.
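
As mentioned in the Taints notice above, pods need matching tolerations to be scheduled onto tainted nodes. The following is a minimal sketch, assuming a hypothetical taint with key pool, value gpu, and effect NoSchedule configured on the node pool:

```yaml
# Toleration matching a hypothetical node pool taint pool=gpu:NoSchedule.
# Replace key, value, and effect with the taint configured on your pool.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo    # illustrative name
spec:
  tolerations:
  - key: "pool"            # hypothetical taint key
    operator: "Equal"
    value: "gpu"           # hypothetical taint value
    effect: "NoSchedule"   # must match the taint's effect
  containers:
  - name: app
    image: nginx           # placeholder image
```

Without such a toleration, pods stay pending instead of landing on the tainted nodes, which is why a scale-up can appear to fail.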

Deleting a Node Pool

Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools. If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable.
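
For example, a pod pinned by a node selector like the one below becomes unschedulable if the deleted pool was the only source of nodes carrying that label (the label pool: high-mem is purely illustrative):

```yaml
# Pod with a node selector. If only nodes in the deleted pool carried the
# illustrative label pool=high-mem, this pod becomes unschedulable once the
# pool is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: selector-demo      # illustrative name
spec:
  nodeSelector:
    pool: high-mem         # hypothetical K8S label set on the node pool
  containers:
  - name: app
    image: nginx           # placeholder image
```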

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
  2. In the upper right corner of the displayed page, select a cluster to filter node pools by cluster.
  3. Choose More > Delete next to a node pool name to delete the node pool.
  4. Read the precautions in the Delete Node Pool dialog box.
  5. Enter DELETE in the text box and click Yes to confirm that you want to continue the deletion.

Copying a Node Pool

You can copy the configuration of an existing node pool to create a new node pool on the CCE console.

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
  2. In the upper right corner of the displayed page, select a cluster to filter node pools by cluster.
  3. Choose More > Copy next to a node pool name to copy the node pool.
  4. The configuration of the selected node pool is replicated to the Create Node Pool page. You can edit the configuration as required and click Next: Confirm.
  5. On the Confirm page, confirm the node pool configuration and click Create Now. Then, a new node pool is created based on the edited configuration.

Migrating a Node

Nodes in a node pool can be migrated. Currently, they can be migrated only to the default node pool (DefaultPool) in the same cluster.

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Node Pools.
  2. In the upper right corner of the displayed page, select a cluster to filter node pools by cluster.
  3. Click More > Migrate next to the name of the node pool.
  4. In the dialog box displayed, select the destination node pool and the node to be migrated.

    After node migration, original resource tags, Kubernetes labels, and taints will be retained, and new Kubernetes labels and taints from the destination node pool will be added.

  5. Click OK.

Clustering by Specifications

When a large number of node pools are created, searching for a specific node pool becomes difficult. CCE provides a clustering function that aggregates node pools of the same specifications to facilitate searching.

For example, suppose there are four node pools: the first two have the same specifications, and the last two have the same specifications.

Click Cluster by specifications to aggregate node pools of the same specifications. After you click a node pool, the node pools with the same specifications are displayed on the right.