Overview

Introduction

A container cluster consists of a set of worker machines, called nodes, that run containerized applications. A node can be a virtual machine (VM) or a physical machine (PM), depending on your service requirements. The components on a node include kubelet, container runtime, and kube-proxy.

A Kubernetes cluster consists of master nodes and worker nodes. The nodes described in this section are worker nodes: the computing nodes of a cluster that run containerized applications.

CCE uses high-performance Elastic Cloud Servers (ECSs) or Bare Metal Servers (BMSs) as nodes to build highly available Kubernetes clusters.

Notes

  • To ensure node stability, CCE reserves some node resources for Kubernetes components (such as kubelet, kube-proxy, and Docker) based on the node specifications. Therefore, the total amount of node resources differs from the amount of allocatable node resources in your cluster. The larger the node specifications, the more containers can be deployed on the node, and therefore the more resources must be reserved to run Kubernetes components.
  • CCE takes over the node networking (including VM networking and container networking). Do not add NICs or change routes. Modifying the networking configuration may affect the availability of CCE.
  • If you want to modify the specifications of a purchased node, stop the node and perform the operations described in General Operations for Modifying Specifications. You can also purchase a new node and delete the old one.
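The reserved-resource behavior described above follows the standard Kubernetes node-allocatable model. The sketch below illustrates that model with hypothetical reservation values; the actual amounts CCE reserves depend on the node flavor and are documented separately.

```python
# Illustration of the Kubernetes node-allocatable model:
# allocatable = capacity - kube-reserved - system-reserved - eviction threshold.
# The reservation values used below are hypothetical examples; CCE derives
# the real ones from the node flavor.

def allocatable_memory_mib(capacity_mib: int,
                           kube_reserved_mib: int,
                           system_reserved_mib: int,
                           eviction_threshold_mib: int = 100) -> int:
    """Memory (MiB) left over for pods after node reservations."""
    return (capacity_mib - kube_reserved_mib
            - system_reserved_mib - eviction_threshold_mib)

# Example: an 8 GiB node with hypothetical reservations.
pods_can_use = allocatable_memory_mib(
    capacity_mib=8192,
    kube_reserved_mib=1024,   # kubelet, kube-proxy, container engine
    system_reserved_mib=512,  # OS daemons
)
print(pods_can_use)  # 6556
```

This mirrors the difference between the Capacity and Allocatable figures reported by `kubectl describe node`.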

Node Lifecycle

The lifecycle of a node covers the statuses recorded from the time the node is created to the time it is deleted or released.

Table 1 Node statuses

| Status | Status Attribute | Description |
|---|---|---|
| Available | Stable state | The node is running properly and is connected to the cluster. Nodes in this state can provide services. |
| Unavailable | Stable state | The node is not running properly. A node in this state cannot provide services. Contact the administrator or perform the operations described in Resetting a Node. |
| Creating | Intermediate state | The node has been created but is not running. |
| Installing | Intermediate state | The Kubernetes software is being installed on the node. |
| Deleting | Intermediate state | The node is being deleted. If a node remains in this state for a long time, an exception has occurred. In this case, contact the administrator. |
| Stopped | Stable state | The node has been stopped properly. A node in this state cannot provide services. You can start the node on the ECS console. |
| Error | Stable state | The node is abnormal. A node in this state cannot provide services. Contact the administrator or perform the operations described in Resetting a Node. |

Supported Node Specifications

The node flavors supported by CCE clusters and CCE Turbo clusters are as follows:

  • CCE cluster
    • x86 nodes: ai1, ct3, t6, s2, s3, s6, c3, ir3, cx3, c3ne, cx3ne, c6, c6s, m2, m3, m6, h3, d2, hc2, i3, p1, pi1, pi2, p2v, p2vs, g5, g5r, g6, Si2, Si3, and sn3 servers with 2 vCPUs and 4 GB memory or higher specifications. For details, see ECS Specifications.
    • Kunpeng nodes: All types of Kunpeng nodes are supported. For details about the specifications, see ECS Specifications.

    You need to enter the specific flavor name, for example, c3ne.large.2.

    In addition, IPv6 dual-stack nodes support only s3, c3, c3ne, sn3, and cx3ne servers, and the available specifications vary depending on the region. For details, see Constraints.

  • CCE Turbo cluster

    c6ne and c7 servers. These ECSs can be deployed in the same resource pool as bare-metal servers (BMSs).

Mapping between Node OSs and Container Engines

Table 2 Node OSs and container engines in CCE clusters

| OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime |
|---|---|---|---|---|
| CentOS 7.x | 3.x | Docker | Clusters of v1.19 and earlier use Device Mapper. Clusters of v1.21 and later use OverlayFS. | runC |
| EulerOS 2.5 | 3.x | Docker | Device Mapper | runC |
| EulerOS 2.3 | 3.x | Docker | Device Mapper | runC |
| EulerOS 2.9 | 4.x | Docker | OverlayFS | runC |
| Ubuntu 18.04 | 4.x | Docker | OverlayFS | runC |

Table 3 Node OSs and container engines in CCE Turbo clusters

| Node Type | OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime |
|---|---|---|---|---|---|
| VM | CentOS 7.6 | 3.x | Docker | OverlayFS | runC |
| VM | Ubuntu 18.04 | 4.x | Docker | OverlayFS | runC |
| VM | EulerOS 2.9 | 4.x | Docker | OverlayFS | runC |
| BMS in the shared resource pool | EulerOS 2.9 | 4.x | containerd | Device Mapper | Kata |

Table 4 Node OSs and container engines in CCE Kunpeng clusters

| OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime |
|---|---|---|---|---|
| EulerOS 2.8 | 4.x | Docker | OverlayFS | runC |

Secure Containers and Common Containers

Secure (Kata) containers differ from common containers in several aspects.

The most significant difference is that each secure container (pod) runs on an independent micro-VM with its own OS kernel and is securely isolated at the virtualization layer. This provides container isolation stronger than that of independent private Kubernetes clusters: with isolated OS kernels, computing resources, and networks, pod resources and data cannot be preempted or stolen by other pods.

You can run common or secure containers on a single node in a CCE Turbo cluster. The differences between them are as follows:

| Category | Secure Container (Kata) | Common Container (Docker) | Common Container (containerd) |
|---|---|---|---|
| Node type used to run containers | Bare-metal server (BMS) | VM | VM |
| Container engine | containerd | Docker | containerd |
| Container runtime | Kata | runC | runC |
| Container kernel | Exclusive kernel | Kernel shared with the host | Kernel shared with the host |
| Container isolation | Lightweight VMs | cgroups and namespaces | cgroups and namespaces |
| Container engine storage driver | Device Mapper | overlay2 | OverlayFS |
| Pod overhead | Memory: 50 MiB; CPU: 0.1 cores | None | None |
| Minimal specifications | Memory: 256 MiB; CPU: 0.25 cores | None | None |
| Container engine CLI | crictl | docker | crictl |
| Pod computing resources | The request and limit values must be the same for both CPU and memory. | The request and limit values can be different for both CPU and memory. | The request and limit values can be different for both CPU and memory. |
| Host network | Not supported | Supported | Supported |

Pod overhead accounts for the resources consumed by the pod infrastructure on top of the container requests and limits. For example, if limits.cpu is set to 0.5 cores and limits.memory to 256 MiB for a Kata pod, the pod requests 0.6 cores of CPU and 306 MiB of memory.
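The pod-overhead arithmetic for secure containers can be sketched as follows, using the 50 MiB and 0.1-core Kata overhead values listed above:

```python
# Effective resources requested for a Kata pod: the container limits plus
# the fixed pod overhead (values taken from the comparison above).
KATA_OVERHEAD_CPU_CORES = 0.1
KATA_OVERHEAD_MEMORY_MIB = 50

def effective_request(limit_cpu_cores: float, limit_memory_mib: int):
    """Return (CPU cores, memory MiB) actually requested from the node."""
    return (limit_cpu_cores + KATA_OVERHEAD_CPU_CORES,
            limit_memory_mib + KATA_OVERHEAD_MEMORY_MIB)

# The worked example from the text: limits of 0.5 cores and 256 MiB.
cpu, mem = effective_request(0.5, 256)
print(cpu, mem)  # 0.6 306
```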

For details about containerd and Docker, see How Do I Select a Container Runtime.
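As a sketch of the "request equals limit" constraint for secure containers, a pod spec might look like the following. The RuntimeClass name kata is an assumption here; the actual name depends on the RuntimeClasses defined in your cluster.

```yaml
# Hypothetical pod spec for a secure (Kata) container.
# runtimeClassName "kata" is an assumed RuntimeClass name; check the
# RuntimeClasses available in your cluster before using it.
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: nginx:alpine
    resources:
      requests:        # For Kata pods, requests and limits must match.
        cpu: "0.5"
        memory: 256Mi
      limits:
        cpu: "0.5"
        memory: 256Mi
```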