Updated on 2025-10-30 GMT+08:00

Functions

CCE delivers highly scalable, high-performance, enterprise-grade Kubernetes clusters. It offers a one-stop container platform that integrates cluster management, node and node pool management, workload management, container networks, container storage, auto scaling, application scheduling, cloud native observability, chart and add-on management, a chart marketplace, and permissions management.

Cluster Management

CCE is a fully managed Kubernetes service designed to streamline the deployment and management of containerized applications. It enables you to effortlessly create Kubernetes clusters, rapidly deploy applications, and efficiently operate and maintain your clusters.

  • CCE offers one-stop deployment and O&M for Kubernetes environments. With just a few clicks, you can create a container cluster without the need to manually configure Docker or Kubernetes. CCE automates the deployment and O&M of containerized applications.

  • CCE supports multiple types of container clusters. It allows you to choose different networks as needed and manage heterogeneous infrastructure, such as ECSs, BMSs, and GPU-accelerated ECSs, through clusters.

    Table 1 Cluster types

    • CCE Turbo cluster: CCE Turbo clusters run on the Cloud Native 2.0 infrastructure. They feature hardware and software synergy, zero network performance loss, high security and reliability, and intelligent scheduling, providing you with one-stop, cost-effective container services.
      Cloud Native 2.0 networks are designed for large-scale, high-performance scenarios. In CCE Turbo clusters, container IP addresses are assigned from VPC CIDR blocks, and containers and nodes can be in different subnets. External networks in the VPC can directly access container IP addresses for high performance.
      Available regions:
      • Asia Pacific: CN North-Beijing4, CN North-Ulanqab1, CN North3, CN East-Shanghai1, CN East-Qingdao, CN East2, CN South-Guangzhou, CN South-Guangzhou-InvitationOnly, CN Southwest-Guiyang1, CN-Hong Kong, AP-Bangkok, AP-Singapore, AP-Jakarta, and AP-Manila
      • Middle East: ME-Riyadh
      • Africa: AF-Cairo and AF-Johannesburg
      • Türkiye: TR-Istanbul
      • Latin America: LA-Mexico City2, LA-Sao Paulo1, and LA-Santiago
    • CCE standard cluster: CCE standard clusters are for commercial use and support the standard features of open-source Kubernetes clusters.
      CCE standard clusters offer a simple, cost-effective, highly available solution. There is no need to manually manage and maintain control plane nodes. You can choose a container tunnel network or a VPC network depending on your service needs. CCE standard clusters are ideal for typical scenarios that do not have special performance or cluster scale requirements.
      Available regions: all regions

  • Cluster management functions are listed in the table below.

    Table 2 Functions

    • Creating a cluster: You can easily create Kubernetes clusters on the CCE console, where you can customize parameters like the container and Service networks and choose add-ons to install during cluster setup. Once a cluster is created, the control plane nodes are fully managed and hosted by CCE. For details, see Buying a CCE Standard/Turbo Cluster.
    • Accessing a cluster: You can access a CCE cluster using either kubectl or CloudShell. This enables you to manage the cluster and perform key operations such as deploying workloads and monitoring resource statuses (a kubeconfig sketch follows this table). For details, see Cluster Access Overview.
    • Upgrading a cluster: CCE launches multiple Kubernetes versions annually, each with a two-year maintenance cycle. To ensure optimal performance and security, you are advised to upgrade your clusters before the maintenance period expires. Timely upgrades help mitigate security and stability risks, enable support for new features and OSs, and prevent compatibility issues caused by large version gaps. For details, see Cluster Upgrade Overview.
    • Hibernating and waking up a cluster: When a pay-per-use cluster is temporarily idle, you can hibernate it to reduce costs. During hibernation, control plane node resource costs are suspended, while storage and load balancing resources continue to be billed under the original billing mode. To resume usage, you can wake up the cluster, which typically takes 3 to 5 minutes. For details, see Hibernating or Waking Up a Pay-per-Use Cluster.
    • Deleting a cluster: A pay-per-use cluster can be deleted directly, while a yearly/monthly cluster must be unsubscribed from or released first. You can delete nodes, workloads, and Services within the cluster as needed. Once a cluster is deleted, all associated services become unrecoverable, so it is essential to back up or migrate any critical data before proceeding with deletion. For details, see Deleting a Cluster.
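For reference, kubectl access relies on a kubeconfig file, which you can download from the CCE console. The following is a minimal sketch of the standard kubeconfig structure; all names, addresses, and credentials are placeholders rather than values from this document and must be replaced with the contents of the downloaded file:

```yaml
# Minimal kubeconfig sketch for kubectl access; all values are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: my-cce-cluster                        # placeholder cluster name
  cluster:
    server: https://<api-server-address>      # from the downloaded kubeconfig
    certificate-authority-data: <base64-CA>   # from the downloaded kubeconfig
users:
- name: my-user
  user:
    client-certificate-data: <base64-cert>
    client-key-data: <base64-key>
contexts:
- name: default
  context:
    cluster: my-cce-cluster
    user: my-user
current-context: default
```

With this file in place (for example, at ~/.kube/config), commands such as kubectl get nodes operate on the cluster.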

Node Management

A container cluster consists of a set of worker machines, called nodes, that run containerized applications. A node can be a virtual machine (VM) or a physical machine (PM), depending on your service requirements. The components on a node include kubelet, the container runtime, and kube-proxy. CCE uses high-performance ECSs and BMSs as nodes to build highly available Kubernetes clusters.

Table 3 Functions

• Creating a node: You can create nodes to ensure the stable operation of applications in CCE clusters. These nodes run on various types of infrastructure, including ECSs, BMSs, and GPU- or NPU-accelerated instances. For details, see Creating a Node.
• Accepting an existing node into a cluster: You can add your purchased resources, such as ECSs and BMSs, to a CCE cluster for unified management. This approach enhances resource utilization through reuse and provides greater flexibility in managing diverse resources. For details, see Accepting Nodes for Management.
• Logging in to a node: You can access a node in different ways and interact with it directly to perform operations such as node-level troubleshooting. For details, see Logging In to a Node.
• Managing node labels: You can add, modify, or delete node labels based on service requirements, such as node roles or service modules. These labels enable precise workload scheduling, for example, deploying specific workloads to nodes with designated labels. This allows for refined classification and more flexible node management. For details, see Managing Node Labels.
• Managing node taints: A node taint consists of a key, a value, and an effect, in the format key=value:effect. Taints repel pods from being scheduled onto specific nodes. However, a pod can tolerate a taint by specifying a matching toleration in its configuration, allowing it to be scheduled onto a tainted node when appropriate (see the example after this table). For details, see Managing Node Taints.
• Synchronizing the data of a cloud server: When the configuration or status of an ECS node, such as its hardware parameters or network settings, changes on the ECS console, CCE allows you to manually trigger node information synchronization. This ensures that the node details displayed in the cluster remain consistent with the actual state of the underlying infrastructure. For details, see Synchronizing the Data of Cloud Servers.
• Draining a node: Node drainage safely evicts service pods from a node, enabling smooth workload migration while maintaining service availability. It is commonly used during node maintenance, upgrades, or scale-in operations for graceful transitions and minimal downtime. For details, see Draining a Node.
• Resetting a node: If a node experiences a configuration issue or software fault that cannot be resolved through standard troubleshooting, you can reset the node to restore it to its initial state. This process can, for example, clear unnecessary configurations and restart essential components to help the node quickly return to a stable, available condition. When a node in a CCE cluster is reset, services running on the node are also deleted, so exercise caution when performing this operation. For details, see Resetting a Node.
• Deleting a node: If a node is no longer required for cluster operations, such as in cases of service downsizing or hardware aging, you can delete it from the CCE cluster to release its resources. However, deleting a node also removes any services running on it, so proceed with caution and ensure that services are not impacted. For details, see Deleting or Unsubscribing from a Node.
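The taint and toleration mechanism described above can be illustrated with a short manifest. This is a generic Kubernetes sketch; the taint key and value (dedicated=gpu) and the pod name are hypothetical:

```yaml
# A node tainted with, for example,
#   kubectl taint nodes <node-name> dedicated=gpu:NoSchedule
# repels pods unless they declare a matching toleration such as the one below.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  - key: dedicated        # must match the taint key
    operator: Equal
    value: gpu            # must match the taint value
    effect: NoSchedule    # must match the taint effect
  containers:
  - name: app
    image: nginx:alpine
```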

Node Pool Management

A node pool is a logical group of nodes within a cluster that simplifies and standardizes node management. Node pools enable the implementation of service-specific scheduling rules and support key features such as batch node management, auto scaling, node migration, and configuration replication. You can manage and optimize resources through node pools. For example, you can set auto scaling policies for node pools to automatically adjust the number of nodes to meet service requirements. Node pools also allow you to configure Kubernetes parameters to meet your advanced requirements.

Table 4 Functions

• Creating a node pool: The parameters you specify during node pool creation form a node template, which you can use to quickly create, manage, and delete nodes. For details, see Creating a Node Pool.
• Modifying core component settings on a node: If the default node settings in a cluster do not meet specific service requirements, you can customize the settings of core components, such as kubelet, kube-proxy, and the container engine, within a node pool. For details, see Modifying Node Pool Configurations.
• Upgrading an OS: After CCE launches a new OS image, you can manually perform batch resets and upgrades on nodes within a node pool. For details, see Upgrading an OS.
• Deleting a node pool: When a node pool is deleted, all nodes within it are removed first. Once the nodes are deleted, any workloads running on them are automatically migrated to available nodes in other node pools. For details, see Deleting a Node Pool.

Workload Management

A workload is an application running on Kubernetes. No matter how many components your workload has, you can run it in a group of Kubernetes pods. A workload is an abstract model of a group of pods in Kubernetes. Workloads in Kubernetes are classified as Deployments, StatefulSets, DaemonSets, jobs, and CronJobs.

CCE provides Kubernetes-native container deployment and management and supports lifecycle management of container workloads, including creation, configuration, monitoring, auto scaling, upgrade, uninstallation, service discovery, and load balancing.

Table 5 Functions

• Creating a Deployment: A Deployment manages a Kubernetes application that does not retain data or state while running. Each pod of the same Deployment is identical, allowing for seamless creation, deletion, and replacement without impacting application functionality. Deployments are ideal for stateless applications, such as web frontend servers and microservices, which do not require data storage (a sample manifest, which also illustrates the container configuration items below, follows this table). For details, see Creating a Deployment.
• Creating a StatefulSet: A StatefulSet manages an application that needs to retain data or state while running. StatefulSets are ideal for stateful applications, such as databases, cache services, and message queues. For details, see Creating a StatefulSet.
• Creating a DaemonSet: A DaemonSet is a type of Kubernetes workload that ensures a DaemonSet pod runs on all or selected nodes in a cluster. When a new node is added to the cluster, the DaemonSet controller automatically creates a DaemonSet pod on it. Conversely, when a node is removed, the DaemonSet pod on it is deleted. For details, see Creating a DaemonSet.
• Creating a job: A job is a type of Kubernetes workload designed for batch processing, handling one-time or short-lived tasks. Unlike long-running services managed by Deployments or StatefulSets, a job creates one or more pods that execute with defined start and end phases. If a task is not completed, the job retries pod execution until the target is reached. For details, see Creating a Job.
• Creating a CronJob: A CronJob is a type of Kubernetes workload designed to run periodic tasks, similar to crontab in Linux. CronJobs follow the Cron format and periodically execute jobs at predefined schedules. For details, see Creating a CronJob.
• Configuring container specifications: CCE allows you to set resource limits for containers during workload creation. You can configure CPU and memory requests and limits for each workload pod. Additionally, you can specify GPU and NPU quotas to meet the needs of each workload pod. For details, see Configuring Container Specifications.
• Configuring the container lifecycle: Container lifecycle hooks are a core mechanism provided by Kubernetes. They enable you to insert custom logic at key phases of the container lifecycle, providing refined process control over containerized applications so that they can better adapt to the dynamic characteristics of cloud native environments. For details, see Configuring the Container Lifecycle.
• Configuring health checks for a container: CCE can regularly check the health status of containers while they are running. If health checks are not configured, a pod cannot detect application exceptions or automatically restart the application to recover it. As a result, the pod may be in the Running state while the application is unavailable or abnormal. Kubernetes provides three types of health check probes to monitor the applications in containers, ensuring system stability and high availability. For details, see Configuring Container Health Check.
• Configuring environment variables: Environment variables are configuration parameters dynamically passed to a container at runtime. They allow you to flexibly adjust application behaviors and settings without rebuilding the image. For details, see Configuring Environment Variables.
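To make the configuration items above concrete, the following minimal Deployment sketch combines container specifications, an environment variable, a preStop lifecycle hook, and an HTTP liveness probe. The names, image, and values are hypothetical examples, not CCE defaults:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web
        image: nginx:alpine
        resources:                 # container specifications
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        env:                       # environment variables
        - name: LOG_LEVEL
          value: "info"
        lifecycle:                 # lifecycle hook: delay shutdown briefly
          preStop:
            exec:
              command: ["sh", "-c", "sleep 5"]
        livenessProbe:             # health check probe
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
```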

Network Access

By deeply integrating Kubernetes networking capabilities with VPC, CCE provides stable, high-performance networking for mutual access between workloads in complex scenarios.

Table 6 Functions

• Service: Services allow you to access one or more containerized applications. Each Service has a fixed IP address and port during its lifecycle and targets one or more backend pods. In this way, frontend clients do not need to keep track of these pods, allowing pods to be added or removed without worrying about IP address changes. CCE supports the following types of Services (a sample manifest follows this table):
  • ClusterIP: The Service is only reachable from within the cluster.
  • NodePort: The Service is accessed using the private IP address or EIP of the node.
  • LoadBalancer: The Service is accessed using a load balancer.
  • DNAT: The Service is accessed using a DNAT gateway.
  For details, see Service Overview.
• Ingress: An ingress is an independent resource in a Kubernetes cluster that defines rules for forwarding external access traffic. You can define forwarding rules based on domain names and paths for fine-grained distribution of access traffic. For details, see Ingress Overview.
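As a generic illustration of the two resource types above, the following manifests define a ClusterIP Service for a hypothetical web-demo workload and an ingress that routes traffic by domain name and path. LoadBalancer Services and ingresses in CCE additionally require provider-specific annotations, which are omitted from this sketch:

```yaml
# A ClusterIP Service reachable only from within the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web-demo
spec:
  type: ClusterIP
  selector:
    app: web-demo          # selects the backend pods by label
  ports:
  - port: 80
    targetPort: 80
---
# An ingress forwarding traffic for a placeholder domain to the Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-demo
spec:
  rules:
  - host: www.example.com  # placeholder domain name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-demo
            port:
              number: 80
```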

Container Storage

Container storage is based on the Kubernetes Container Storage Interface (CSI) and deeply integrated with Huawei Cloud storage services, such as EVS, SFS, and OBS. CCE container storage is fully compatible with native Kubernetes storage types, such as emptyDir, hostPath, Secrets, and ConfigMaps.

Table 7 Functions

• EVS volume: CCE allows you to attach EVS disks to containers. By using EVS volumes, you can mount remote block storage to a container path so that data in the volume is permanently preserved. Even if the container is deleted, the data in the volume remains in the storage system. For details, see EVS Overview.
• SFS volume: CCE allows you to mount SFS volumes to a container path for persistent data storage. SFS volumes are typically used in ReadWriteMany scenarios for large-capacity expansion and cost-sensitive services, such as media processing, content management, big data analysis, and workload analysis. For details, see SFS Overview.
• SFS Turbo volume: You can create SFS Turbo volumes and mount them to a container path. SFS Turbo file systems are fast, on-demand, and scalable, making them suitable for DevOps, containerized microservices, and enterprise office applications. For details, see SFS Turbo Overview.
• OBS volume: OBS provides massive, secure, highly reliable, and cost-effective data storage for data of any type and size. You can use it for enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios. CCE allows you to create OBS volumes and mount them to a container path. For details, see OBS Overview.
• DSS volume: DSS provides dedicated physical storage resources. With technologies like data redundancy and cache acceleration, DSS delivers highly reliable, durable, low-latency, and stable storage resources. CCE allows you to mount DSS volumes to containers. For details, see DSS Overview.
• Local PV: CCE allows you to use LVM to combine local data disks on a node into a storage pool (volume group) and create logical volumes (LVs) for containers to mount. A PV that uses a local disk on the node as its storage medium is called a local PV. For details, see Local PV Overview.
• Ephemeral volume: CCE provides the following types of emptyDir volumes (a sample manifest follows this table):
  • Ephemeral storage: the Kubernetes-native emptyDir type. Its lifecycle is the same as that of the pod, and memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost.
  • Local ephemeral volume: Local data disks on a node form a storage pool (volume group) through LVM. LVs are then created as the storage medium of emptyDir and mounted to pods. LVs deliver better performance than the default storage medium of emptyDir.
  For details, see emptyDir Overview.
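The following sketch shows how such volumes are typically requested: a PVC that dynamically provisions an EVS disk through CSI, and a pod using a memory-backed emptyDir volume. The StorageClass name csi-disk is an assumption here, commonly used for EVS in CCE; check the StorageClasses actually available in your cluster (kubectl get sc):

```yaml
# A PVC that dynamically provisions block storage through CSI.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce            # EVS disks attach to a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk # assumed EVS StorageClass name; verify in your cluster
---
# A pod with a Kubernetes-native, memory-backed emptyDir volume;
# its data lives and dies with the pod, as described above.
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 256Mi
```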

Auto Scaling

CCE allows you to scale your nodes and workloads both manually and automatically. Auto scaling policies can be flexibly combined to handle sudden load spikes.

Table 8 Functions

• Workload scaling: Workload auto scaling operates at the pod level, dynamically adjusting the quantity or specifications of pods based on service demands. For example, the number of pods can be automatically increased during peak hours to handle more user requests and then scaled in during off-peak hours to reduce costs (see the example after this table). For details, see Workload Scaling Rules.
• Node scaling: Node scaling dynamically adds or removes compute resources (such as ECSs or CCI instances) at the resource layer based on the scheduling status of pods. This ensures that clusters are well-resourced under high loads and minimizes waste during low demand. For details, see Node Scaling Rules.
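Workload scaling at the pod level can be expressed with a standard Kubernetes HorizontalPodAutoscaler, as sketched below for the hypothetical web-demo Deployment. Node scaling policies are configured separately in CCE and are not shown here:

```yaml
# Scale web-demo between 2 and 10 replicas, targeting 70% average CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-demo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```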

Application Scheduling

CCE uses Volcano Scheduler to provide heterogeneous compute scheduling and job scheduling, offering complete application scheduling features for machine learning, deep learning, bioinformatics, genomics, and other big data application scenarios.

CCE supports scheduling policies such as CPU resource scheduling, GPU/NPU heterogeneous resource scheduling, hybrid deployment of online and offline jobs, and CPU burst. You can set scheduling policies based on service characteristics to improve application performance and cluster resource utilization.

Table 9 Functions

• CPU scheduling: CCE provides CPU management policies that enable the allocation of complete physical CPU cores to applications. This improves application performance and reduces scheduling latency. For details, see CPU Scheduling.
• GPU scheduling: CCE provides GPU scheduling for clusters, facilitating refined resource allocation and optimizing resource utilization. This accommodates the specific GPU compute needs of diverse workloads, enhancing the overall scheduling efficiency and service performance of clusters (see the sketch after this table). For details, see GPU Scheduling.
• NPU scheduling: CCE provides NPU scheduling for clusters, facilitating efficient processing of inference and image recognition tasks. For details, see NPU Scheduling.
• Volcano scheduling: Volcano is a batch processing platform that runs on Kubernetes for machine learning, deep learning, bioinformatics, genomics, and other big data applications. It provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management. For details, see Volcano Scheduling Overview.
• Cloud native hybrid deployment: The cloud native hybrid deployment solution focuses on the Volcano and Kubernetes ecosystems to help you improve resource utilization, reduce costs, and improve efficiency. For details, see Cloud Native Hybrid Deployment Overview.
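As a hedged sketch of heterogeneous scheduling, the following pod requests one GPU and asks to be scheduled by Volcano. The GPU resource name depends on the device plugin installed in the cluster (nvidia.com/gpu is the common NVIDIA plugin name), schedulerName: volcano takes effect only when the Volcano add-on is installed, and the image is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-train-demo
spec:
  schedulerName: volcano     # requires the Volcano add-on
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu  # placeholder training image
    resources:
      limits:
        nvidia.com/gpu: 1    # resource name depends on the GPU device plugin
```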

Cloud Native Observability

CCE allows you to flexibly configure workload log policies to enable unified log collection, centralized management, and deep analysis. To prevent excessive log growth, CCE also performs periodic checks to manage logs.

For performance monitoring, CCE tracks key metrics of cluster nodes and workloads, including resource usage, running status, and network traffic. These metrics are presented through visualized dashboards that support multi-level drill-down queries and association analysis, helping you quickly identify and resolve faults.

Additionally, CCE supports automatic reporting of alarms and events. With preset alarm templates, you can enable real-time monitoring with a few clicks, allowing for timely detection of potential issues in clusters and containers. This proactive approach helps maintain service stability.

Table 10 Functions

• Health Center: Health diagnosis monitors cluster health by drawing on the experience of container O&M experts to quickly detect cluster faults and identify risks. It also provides rectification suggestions. For details, see Health Center Overview.
• Monitoring Center: Monitoring Center provides multi-dimensional data insights and dashboards. It offers monitoring views across clusters, nodes, workloads, and pods and supports multi-level drill-down and association analysis. Its dashboards include pre-built container monitoring views for components and resources such as the Kubernetes API server, CoreDNS, and PVCs. For details, see Monitoring Center Overview.
• Logging: CCE logging uses the functions of LTS to collect logs of control plane components (including kube-apiserver, kube-controller-manager, and kube-scheduler), Kubernetes audit logs, Kubernetes events, and container logs (including standard output logs, text file logs, and node logs). For details, see Logging Overview.
• Alarm Center: Alarm Center works with AOM 2.0 to allow you to create alarm rules and check alarms of clusters and containers. For details, see Alarm Center Overview.

Chart Management

CCE provides unified resource management and scheduling based on Kubernetes Helm charts, enabling quick deployment and management of charts and significantly simplifying the installation and management of Kubernetes resources. Application publishers can use Helm charts to package applications, manage dependencies and versions, and release applications to software repositories. With Helm charts, you do not need to write complex application deployment files; you can easily search for, install, upgrade, roll back, and uninstall applications on Kubernetes.

Table 11 Functions

• Deploying an application from a chart: You can upload a Helm chart package, install it on the console, and manage the deployed releases (a minimal chart descriptor is sketched after this table). For details, see Deploying an Application from a Chart.
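For orientation, a Helm chart package is described by a Chart.yaml file at its root, alongside a values.yaml file and a templates/ directory of Kubernetes manifests. A minimal descriptor might look as follows; the name, description, and versions are placeholders:

```yaml
# Chart.yaml: the descriptor at the root of a Helm chart package.
apiVersion: v2          # Helm 3 chart descriptor format
name: web-demo          # placeholder chart name
description: A sample chart that deploys a demo web application
type: application
version: 0.1.0          # chart version
appVersion: "1.0.0"     # version of the packaged application
```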

Add-on Management

CCE provides multiple types of add-ons to manage extended cluster functions. You can select add-ons as required to enhance the functions and flexibility of containerized applications.

These add-ons include CCE-developed and enhanced add-ons and widely used open-source add-ons.

  • CCE-developed and enhanced add-ons are deeply integrated into CCE and optimized for specific service requirements and scenarios. They can better support complex enterprise applications and ensure high performance and reliability.
  • Open-source add-ons leverage extensive community support and mature technologies to provide you with various functions and flexible solutions to meet ever-changing service requirements.

For details, see Overview.

Permissions Management

CCE permissions management allows you to grant permissions to IAM users and user groups under your account. CCE combines the advantages of IAM and Kubernetes RBAC to provide a variety of authorization methods, including IAM fine-grained authorization, IAM token authorization, cluster-scoped authorization, and namespace-scoped authorization, meeting permissions management requirements in different scenarios.

Table 12 Functions

• Cluster-level permissions: CCE cluster-level permissions are managed through fine-grained authorization built on IAM. With IAM, users or user groups with specific permissions can perform operations on cloud service resources related to CCE, such as creating or deleting clusters. For details, see Granting Cluster Permissions to an IAM User.
• Namespace-level permissions: CCE uses Kubernetes RBAC to manage permissions at the namespace level within a cluster. With RBAC, different users or user groups can perform operations on various Kubernetes resources, such as pods, Services, and ConfigMaps, based on their assigned permissions (see the sketch after this table). For details, see Namespace Permissions (Kubernetes RBAC-based).
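Namespace-level permissions ultimately map to standard Kubernetes RBAC objects. The following generic sketch grants read-only access to pods in the default namespace; the subject name developer is a hypothetical identity standing in for an IAM user mapped into the cluster:

```yaml
# A namespace-scoped Role granting read-only access to pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a hypothetical user identity.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: developer          # placeholder subject name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```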