
Node Restrictions

Check Items

Check the following items:

  • Check whether the node is available.
  • Check whether the container engine of the node supports the upgrade.
  • Check whether the node OS supports the upgrade.
  • Check whether the node is marked with unexpected node pool labels.
  • Check whether the node is marked with a CNIProblem taint.
  • Check whether the Kubernetes node name is the same as the ECS name.

Solution

  1. The node is unavailable. Recover the node first.

    If a node is unavailable, log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Nodes, click the Nodes tab, and check the node status. The node must be in the Running state; a node in the Installing or Deleting state cannot be upgraded.

    After the node is recovered, retry the check task.
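
    You can also check node availability from the CLI. A minimal sketch, assuming kubectl is configured for the cluster; <node-name> is a placeholder for the affected node:

      # List all nodes; the affected node should report STATUS Ready
      kubectl get nodes
      # Inspect the conditions and recent events of an unavailable node
      kubectl describe node <node-name>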

  2. The container engine of the node does not support the upgrade.

    This issue typically occurs when a cluster of an earlier version is upgraded to v1.27 or later. Clusters of v1.27 or later support only the containerd runtime. If your node runtime is not containerd, the upgrade cannot be performed. In this case, reset the node and change the node runtime to containerd.
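
    To confirm which runtime each node uses before the upgrade, you can query the node information with standard kubectl (a sketch; no CCE-specific tooling is assumed):

      # The CONTAINER-RUNTIME column shows containerd:// or docker://
      kubectl get nodes -o wide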

  3. The node OS does not support the upgrade.

    The following table lists the node OSs that support the upgrade. If your node runs an OS that is not listed, reset the node to one of the supported OSs.

    Table 1 OSs that support the upgrade

    OS            Constraint
    EulerOS 2.x   None
    CentOS 7.x    None
    Ubuntu        If the check result shows that the upgrade is not supported due to regional restrictions, contact technical support.

    NOTE: If the target version is v1.27 or later, only Ubuntu 22.04 supports the upgrade.
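
    To see which OS image each node reports, a minimal sketch using standard kubectl (the column definition is an illustration, not a CCE-specific command):

      # Print each node's name and the OS image reported by the kubelet
      kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.osImage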

  4. The affected node belongs to the default node pool but is configured with a non-default node pool label, which will affect the upgrade.

    If a node is migrated from a node pool to the default node pool, the node pool label cce.cloud.com/cce-nodepool is retained, which affects the cluster upgrade. Check whether any workload scheduling on the node depends on this label.

    • If no, delete the label.
    • If yes, modify the workload scheduling policy to remove the dependency, and then delete the label (see the sketch after this list).
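
    A minimal sketch for finding and removing the label with kubectl; <node-name> is a placeholder, and you should confirm that no affinity rules or selectors depend on the label before deleting it:

      # Check whether the node still carries the node pool label
      kubectl get node <node-name> --show-labels | grep cce.cloud.com/cce-nodepool
      # Remove the label (the trailing '-' deletes it)
      kubectl label node <node-name> cce.cloud.com/cce-nodepool-
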
  5. The node is marked with a CNIProblem taint. Recover the node first.

    The node has a taint whose key is node.cloudprovider.kubernetes.io/cni-problem and whose effect is NoSchedule. This taint is added by the NPD add-on. Upgrade the NPD add-on to the latest version and run the check again. If the problem persists, contact technical support.
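
    To confirm whether the taint is still present, a quick check with kubectl (<node-name> is a placeholder):

      # Print the node's taints; look for node.cloudprovider.kubernetes.io/cni-problem
      kubectl get node <node-name> -o jsonpath='{.spec.taints}'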

  6. The Kubernetes node corresponding to the affected node does not exist.

    The node may be in the process of being deleted. Run the check again later.
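
    To verify whether the Kubernetes node object still exists, a quick check (a sketch; <node-name> is a placeholder):

      # Succeeds if the node object exists; returns NotFound if it was deleted
      kubectl get node <node-name>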