
How Do I Troubleshoot Problems That Occur When Accepting Nodes into a CCE Cluster?

Overview

This section describes how to troubleshoot problems that occur when you accept or add existing ECSs to a CCE cluster.

For details about how to accept existing nodes, see Accepting Existing Nodes into a Cluster.

  • While an ECS is being accepted into a cluster, the operating system of the ECS will be reset to the standard OS image provided by CCE to ensure node stability. The CCE console prompts you to select the operating system and the login mode during the reset.
  • The ECS system and data disks will be formatted while the ECS is being accepted into a cluster. Ensure that data in the disks has been backed up.
  • While an ECS is being accepted into a cluster, do not perform any operation on the ECS through the ECS console.

Constraints and Limitations

  • Only VM nodes (including GPU-accelerated nodes) in the same region can be accepted and managed in CCE clusters. BMS nodes are not supported.
  • The cluster version must be v1.13 or later.
  • Heterogeneous nodes, such as Kunpeng, Ascend-accelerated, and HECS nodes, cannot be accepted and managed.
  • The IPv6 setting of the node's subnet must match that of the cluster. If IPv6 is enabled for a cluster, only nodes in IPv6-enabled subnets can be accepted and managed. If IPv6 is not enabled for the cluster, only nodes in subnets without IPv6 can be accepted.
  • If a password or key was set when a VM node was created, the node can be accepted into a cluster only 10 minutes after it becomes available.

Prerequisites

An ECS that meets the following conditions can be accepted:

  • The node has been purchased, is in the Running state, and is not being used by another cluster.
  • The node to be accepted and the CCE cluster are in the same VPC. (If the cluster version is earlier than v1.13.10, the node to be accepted and the CCE cluster must be in the same subnet.)
  • Exactly one data disk of at least 100 GB is attached to the node. If more than one data disk is attached, detach the others first.
  • The node has at least 2 vCPUs, at least 4 GB of memory, and only one NIC.
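
Before accepting a node, you can sanity-check most of these requirements by logging in to the ECS and inspecting it directly. The sketch below is illustrative and not part of the CCE workflow; it uses standard Linux utilities (`nproc`, `/proc/meminfo`, `/sys/class/net`, `lsblk`) with the thresholds listed above.

```shell
# Hypothetical pre-acceptance check, run on the ECS itself.
# Thresholds mirror the prerequisites above; verify VPC/subnet and cluster
# version requirements on the console, as they are not visible from the node.

cpus=$(nproc)                                          # requirement: >= 2 vCPUs
mem_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)   # requirement: >= 4 GB
nics=$(ls /sys/class/net | grep -cv '^lo$')            # requirement: exactly 1 NIC

echo "vCPUs: ${cpus} (need >= 2)"
echo "Memory: $((mem_kb / 1024 / 1024)) GB (need >= 4)"
echo "Non-loopback NICs: ${nics} (need exactly 1)"

# Physical disks (system disk plus data disks); the node must have exactly
# one data disk of at least 100 GB.
echo "Disks (need one data disk >= 100 GB):"
lsblk -dn -o NAME,SIZE,TYPE 2>/dev/null | awk '$3=="disk"{print "  " $1, $2}'
```

Memory reported by the kernel is slightly below the nominal flavor size, so treat the memory figure as approximate rather than an exact match against 4 GB.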

Procedure

View the cluster log information to locate the failure cause and rectify the fault.

  1. Log in to the CCE console. In the navigation pane, choose Resource Management > Clusters. Click Task Details above the cluster list to view detailed information.

    Figure 1 Node management failure - task details

  2. In the Task Status window, locate the target node, and click View Details in the Operation column.

    Figure 2 Viewing the task details

  3. In the View Details dialog box that is displayed, click View Actions.

    Figure 3 Viewing the failure cause

  4. Rectify the fault based on the failure cause. You can also copy the error information and submit a service ticket to seek technical support.