Basic Concepts

Updated on 2025-07-09 GMT+08:00

CCE provides highly scalable, high-performance, and enterprise-class Kubernetes clusters. With CCE, you can easily deploy, manage, and scale containerized applications in the cloud.

CCE provides native Kubernetes APIs, supports kubectl, and offers a graphical console, giving you a complete end-to-end experience. Before using CCE, familiarize yourself with the following basic concepts.

Cluster

A cluster is a combination of cloud resources, such as cloud servers (nodes) and load balancers, for running containers. In a cluster, one or more elastic cloud servers (also called nodes) are deployed in the same subnet to provide compute resource pools for containers.

CCE supports the following cluster types:

  • CCE standard cluster: CCE standard clusters are commercially ready and fully support the standard features of open-source Kubernetes. They offer a simple, cost-effective, highly available solution, with no need to manually manage and maintain master nodes. You can choose between a container tunnel network and a VPC network based on your service needs. CCE standard clusters are ideal for typical scenarios that do not have special performance or cluster scale requirements.
  • CCE Turbo cluster: CCE Turbo clusters run on the Cloud Native 2.0 infrastructure. They feature hardware-software synergy, zero network performance loss, high security and reliability, and intelligent scheduling, providing one-stop, cost-effective container services. Cloud Native 2.0 networks suit large-scale, high-performance scenarios: container IP addresses are assigned from VPC CIDR blocks, containers and nodes can be in different subnets, and external networks in a VPC can connect directly to container IP addresses for high performance.
  • CCE Autopilot cluster: CCE Autopilot clusters are serverless clusters that offer optimized Kubernetes compatibility and free you from O&M. They can be deployed without user nodes, which simplifies application deployment: there is no need to purchase, deploy, or manage nodes or maintain their security. You can focus on implementing application logic, which greatly reduces O&M costs and improves application reliability and scalability.

For details, see Buying a CCE Standard/Turbo Cluster.

Node

In a Kubernetes cluster, nodes run containerized applications. They can be physical servers or virtual machines (VMs) connected over networks. Each node has the necessary components installed, such as a container runtime (for example, Docker) and kubelet (used to manage containers). Pods, the smallest deployable units, are deployed and run on nodes, which are centrally scheduled and managed by Kubernetes. Nodes are the basic runtime environments in a cluster, ensuring high availability and scalability of applications.

For details, see Creating a Node.

Node Pool

In a Kubernetes cluster, a node pool is a group of nodes that have the same configuration and attributes. These nodes usually have the same hardware specifications, OS version, and configurations. A node pool makes it easier to manage and scale cluster resources in batches. You can create node pools of different sizes and configurations to meet different workload scheduling requirements and ensure efficient resource utilization. In addition, node pools support auto scaling. The number of nodes in a node pool can be scaled automatically based on workloads. This improves the resource utilization, flexibility, and scalability of a cluster.

For details, see Creating a Node Pool.

VPC

A VPC provides a secure, logically isolated virtual network environment. VPCs offer the same functions as physical networks, plus advanced network services such as elastic IP addresses and security groups.

With VPCs, node networks and container networks in CCE clusters can be isolated. You can also configure EIPs and bandwidths for your clusters for more flexible scalability.

For details, see Creating a VPC with a Subnet.

Security Group

A security group is a collection of access rules for Elastic Cloud Servers (ECSs) that have the same security requirements and are mutually trusted in a VPC. After a security group is created, you can create different access rules to control who can access the ECSs that are added to this security group.

For details, see Adding a Security Group Rule.

Relationship Between Clusters, VPCs, Security Groups, and Nodes

As shown in Figure 1, a region may include multiple VPCs. A VPC consists of one or more subnets, which communicate with each other through subnet gateways. A cluster is created in a subnet. The following scenarios are possible:
  • Different clusters are created in different VPCs.
  • Different clusters are created in the same subnet.
  • Different clusters are created in different subnets.
Figure 1 Relationship between clusters, VPCs, security groups, and nodes

Pod

A pod is the smallest, basic unit for deploying applications or services. It can contain one or more containers, which typically share storage and network resources. Each pod has its own IP address, allowing the containers within the pod to communicate with each other and be accessed by other pods in the same cluster. Kubernetes also offers various policies to manage container execution. These policies include restart policies, resource requests and limits, and lifecycle hooks.

Figure 2 Pod
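For illustration, the following is a minimal sketch of a pod manifest. The pod name and container image are placeholders, not values from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod            # hypothetical pod name
spec:
  containers:
  - name: nginx              # container running inside the pod
    image: nginx:1.25        # example public image; replace with your own
    ports:
    - containerPort: 80      # port the container listens on
    resources:
      requests:              # resources reserved for scheduling
        cpu: 250m
        memory: 256Mi
      limits:                # hard caps enforced at runtime
        cpu: 500m
        memory: 512Mi
  restartPolicy: Always      # one of the restart policies mentioned above
```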

Container

A container is an instance created from a Docker image. Multiple containers can run on the same node (the host). Containers are essentially processes, but unlike ordinary processes that run directly on a host machine, they run in their own separate namespaces. Namespaces provide isolation between containers, allowing each container to have its own file system, network, process IDs, and more. This enables OS-level isolation for containers.

Figure 3 Relationships between pods, containers, and nodes

Workload

A workload is an application running in a Kubernetes cluster. No matter how many components your workload has, you can run it in a group of pods. A workload is an abstract model of a group of pods. Kubernetes provides the following workload types: Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs.

  • Deployments support auto scaling and rolling upgrade. They are ideal for scenarios where pods are completely independent of each other and functionally identical. Typical examples include web applications like Nginx and blog platforms like WordPress.
  • StatefulSets allow for the ordered deployment and removal of pods. Each pod in a StatefulSet has a unique identifier and can communicate with the others. StatefulSets are ideal for applications that need persistent storage and communication between pods, such as etcd (a distributed key-value store) or highly available MySQL databases.
  • DaemonSets guarantee that all or specific nodes have a DaemonSet pod running and automatically deploy DaemonSet pods on newly added nodes in a cluster. They are ideal for services that need to run on every node, like log collection (Fluentd) and monitoring agent (Prometheus node exporter).
  • Jobs are one-off tasks that guarantee the successful completion of a specific number of pods. They are ideal for one-off tasks, like data backups and batch processing.
  • CronJobs run tasks on a specified schedule. They are ideal for tasks that need to be done regularly, like data synchronization and report generation.
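As a sketch of the most common workload type, the following Deployment manifest runs three functionally identical Nginx pods. The names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # hypothetical workload name
spec:
  replicas: 3               # three interchangeable pods, per the Deployment model above
  selector:
    matchLabels:
      app: nginx            # ties the Deployment to pods carrying this label
  template:                 # pod template used to create each replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # example public image
        ports:
        - containerPort: 80
```

During a rolling upgrade, the Deployment replaces these pods gradually so the application stays available.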

For details, see Creating a Workload.

Figure 4 Relationship between workloads and pods

Image

An image is a standard format used to package containerized applications and create containers. Essentially, an image is a specialized file system that includes all the necessary programs, libraries, resources, and configuration files for container runtimes. It also contains configuration parameters like anonymous volumes, environment variables, and users that are required for runtimes. An image does not contain any dynamic data. Once it has been created, the content does not change. When deploying containerized applications, you have the option to use images from Docker Hub, SoftWare Repository for Container (SWR), or your own private image registries. For instance, you can create an image that includes a specific application and all its dependencies, ensuring consistent execution across different environments.

The relationship between an image and a container is akin to that between a class and an instance in object-oriented programming. An image serves as a static blueprint, while a container is its active, running entity. Containers can be created, started, stopped, deleted, and suspended.

For details, see Pushing an Image.

Figure 5 Relationship between images, containers, and workloads

Namespace

A namespace in Kubernetes is a way to group and organize related resources and objects, such as pods, Services, and Deployments. Resources in a namespace are logically isolated from those in other namespaces, although all namespaces in a cluster share underlying resources such as CPUs, memory, and storage. By deploying different environments, such as development, testing, and production, in separate namespaces, you can keep them isolated from each other and simplify management and maintenance.

In Kubernetes, most resource objects, including pods, Services, ReplicationControllers, and Deployments, are associated with the default namespace by default. However, there are also cluster-level resources like nodes and PersistentVolumes (PVs) that are not tied to any specific namespace and provide services to resources across all namespaces.
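A namespace is itself a simple Kubernetes object. A minimal sketch, assuming a hypothetical development environment:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development   # hypothetical namespace for a development environment
```

Resources can then be created in it, for example with kubectl apply -f app.yaml -n development.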

For details, see Creating a Namespace.

Service

A Service is used to define access policies for pods. There are several types of Services, each with its own behavior:

  • ClusterIP: This is the default Service type. Each ClusterIP Service is assigned a unique IP address within the cluster. This IP address is only accessible within the cluster. It cannot be directly accessed from external networks. ClusterIP Services are typically used for internal communications within a cluster.
  • NodePort: A NodePort Service opens a static port (the NodePort) on every node in a cluster, and the Service can be accessed through this port. External systems can access a NodePort Service through the elastic IP addresses (EIPs) bound to the nodes and the specified port.
  • LoadBalancer: This type of Service allows you to use the load balancers provided by cloud service providers to expose Services to the Internet. Load balancers can distribute traffic to the NodePort and ClusterIP Services within the cluster.
  • DNAT: This type of Service translates IP addresses for cluster nodes and enables multiple nodes to share an EIP. Compared with binding an EIP directly to a single node, DNAT enhances reliability: requests can still be distributed to the workload even if any of the nodes is down.
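As a sketch, the following manifest defines a ClusterIP Service that forwards in-cluster traffic to pods labeled app: nginx. The names are illustrative; changing the type field to NodePort or LoadBalancer switches to the other access modes described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service    # hypothetical Service name
spec:
  type: ClusterIP        # default type; reachable only inside the cluster
  selector:
    app: nginx           # routes traffic to pods carrying this label
  ports:
  - port: 80             # port exposed by the Service
    targetPort: 80       # port the backend containers listen on
```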

For details, see Service Overview.

Ingress

An ingress controls how Services within a cluster can be accessed from outside the cluster. Ingresses can route traffic based on domain names and paths, and they support load balancing, TLS termination, and SSL certificate management. An ingress manages traffic for multiple Services in a unified manner and acts as the entry point for incoming traffic. This simplifies network configuration, improves cluster scalability and security, and is an important way to expose Services in microservice architectures.
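A minimal sketch of an ingress that routes HTTP traffic for a hypothetical domain to the Service sketched earlier (CCE may require additional annotations; see the linked documentation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: example.com             # hypothetical domain name
    http:
      paths:
      - path: /                   # route everything under the root path
        pathType: Prefix
        backend:
          service:
            name: nginx-service   # Service receiving the traffic
            port:
              number: 80
```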

For details, see Ingress Overview.

Network Policy

Network policies allow you to specify rules for traffic flow between pods. They control whether traffic is allowed or denied to and from a pod based on specified rules to enhance network security for clusters. Network policies allow you to define rules based on pod labels, IP addresses, and ports, limit inbound and outbound traffic, and prevent unauthorized requests, protecting the security of Services in a cluster.
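A minimal sketch: the policy below allows only pods labeled app: frontend to reach pods labeled app: backend on TCP port 8080, and denies other inbound traffic to those pods. All labels and the port are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # pods this policy applies to
  policyTypes:
  - Ingress               # only inbound traffic is restricted here
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080          # and only on this port
```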

For details, see Configuring Network Policies to Restrict Pod Access.

ConfigMap

ConfigMaps are used to store configuration data in key-value pairs. ConfigMaps can decouple configuration details such as configuration files and command-line arguments from pods. With ConfigMaps, you can avoid the need to rebuild container images whenever configurations are shared or updated between pods. ConfigMaps support multiple data formats, such as YAML and JSON. This facilitates application configuration management and ensures maintainability and scalability.
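A minimal sketch of a ConfigMap holding both a single setting and a whole configuration file; the keys and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info        # simple key-value setting
  app.properties: |      # an entire configuration file stored as one value
    cache.enabled=true
    cache.ttl=300
```

A pod can consume these entries as environment variables or mount them as files, without rebuilding the container image.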

For details, see Creating a ConfigMap.

Secret

Secrets store sensitive data, such as passwords, keys, and certificates. Secrets are encrypted to enhance data security. Secrets can be mounted as data volumes or exposed as environment variables to be used in a pod. Secrets can also be used to store authentication information in a cluster. With secrets, you can manage sensitive data separately from the application code to reduce data leakage risks. In addition, you can centrally manage and dynamically update sensitive data to ensure cluster security and flexibility.
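A minimal sketch of a secret; the values are Base64-encoded placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque               # generic secret type
data:
  username: YWRtaW4=       # Base64 encoding of "admin"
  password: cGFzc3dvcmQ=   # Base64 encoding of "password"
```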

For details, see Creating a Secret.

Label

Labels are key-value pairs that are attached to objects such as pods, Services, and Deployments. Labels are used to add extra, semantic metadata to objects, enabling users and systems to effortlessly identify, organize, and manage resources.

Label Selector

Label selectors simplify resource management by allowing you to group and select resource objects based on their labels. This enables batch operations, such as traffic distribution, scaling, configuration updates, and monitoring, on the selected resource groups.

Annotation

Annotations are defined as key-value pairs, similar to labels. However, they serve a different purpose and have different constraints.

Labels are used for selecting and managing resources, following strict naming rules and defining metadata for Kubernetes objects. Label selectors use labels to help you select resources.

Annotations, in contrast, are additional information about resources. While Kubernetes does not directly use annotations to control resource behavior, external tools can access the information stored in annotations to extend Kubernetes functions.
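The following sketch shows both kinds of metadata on one object. The labels are used by selectors (for example, kubectl get pods -l app=nginx), while the annotation keys shown here are hypothetical and would only be read by external tools:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx                          # selectable: identifies the application
    env: production                     # selectable: identifies the environment
  annotations:
    description: Frontend web server    # free-form note; ignored by selectors
    example.com/owner: team-web         # hypothetical key consumed by an external tool
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```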

PersistentVolume

A PersistentVolume (PV) is a storage resource in a cluster. It can be either a local disk or network storage. It exists independently from pods, so if a pod using a PV is deleted, the data stored in the PV will not be lost.

PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a request for PVs. It specifies the desired storage size and access mode. Kubernetes will automatically find a suitable PV that meets these requirements.

The relationship between PVCs and PVs is similar to that between pods and nodes. Pods consume node resources and PVCs consume PV resources.
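A minimal sketch of a PVC requesting 10 GiB of storage that a single node can mount read-write; the name is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce       # volume can be mounted read-write by one node
  resources:
    requests:
      storage: 10Gi     # desired capacity; Kubernetes binds a matching PV
```

A pod then references the claim by name in its volumes section, and the bound PV supplies the actual storage.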

Horizontal Pod Autoscaler for Workload Auto Scaling

Horizontal Pod Autoscaler (HPA) implements horizontal pod scaling in Kubernetes. It enables a Kubernetes cluster to automatically increase or decrease the number of pods based on CPU usage, memory usage, or other specified metrics. You set thresholds for the target metrics, and HPA dynamically adjusts the pod count to keep application performance optimal.
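A minimal sketch of an HPA that keeps average CPU usage of a hypothetical Deployment around 70% by running between 2 and 10 pods:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:              # workload whose pods are scaled
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment     # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU usage exceeds 70%
```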

For details, see Creating an HPA Policy.

Cluster Autoscaler for Node Auto Scaling

Node auto scaling refers to automatically adjusting the number of nodes in a cluster to adapt to changing workloads. Cluster Autoscaler adds nodes when the service load increases and removes underutilized nodes when the load decreases to reduce costs. It adjusts the node count based on the workloads' resource needs, such as CPU and memory usage, and on specified rules, ensuring efficient resource utilization and flexibility.

For details, see Creating a Node Scaling Policy.

Affinity and Anti-Affinity

Before an application is containerized, many of its components run on the same VM, and processes need to communicate with each other. During containerization, its processes are packed into different pods and each pod has its own lifecycle. For example, the business process is packed into a pod while the monitoring/logging process or local storage process is packed into another pod. If these pods run on distant nodes, routing between them will be costly and slow.

  • Affinity: Pods that are closely related to each other are deployed on the same node or on nearby nodes to reduce network overhead. For instance, if an application frequently communicates with another application, you can define affinity rules to place the two close together, or even on the same node, avoiding performance degradation caused by slow routing.
  • Anti-affinity: Pods of the same application are spread across different nodes for higher availability, so that if one node goes down, the application pods on other nodes are not affected. For example, if an application runs in multiple pods, you can define anti-affinity rules to deploy these pods on different nodes and guarantee application HA.
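As a sketch of the anti-affinity case, the Deployment below forces its three replicas onto different nodes by disallowing two pods with the label app: nginx on the same host; the names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx                        # keep pods of this app apart
            topologyKey: kubernetes.io/hostname   # at most one such pod per node
      containers:
      - name: nginx
        image: nginx:1.25
```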

For details, see Overview of Scheduling a Workload.

Resource Quota

Resource quotas enable administrators to set limits on the overall usage of resources, such as CPU, memory, disk space, and network bandwidth, within namespaces.
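A minimal sketch of a quota for a hypothetical development namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development    # quota applies within this namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU all pods may request
    requests.memory: 8Gi
    limits.cpu: "8"         # total CPU limits across all pods
    limits.memory: 16Gi
    pods: "20"              # maximum number of pods in the namespace
```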

Resource Limit (LimitRange)

By default, all containers in Kubernetes have no CPU or memory limit. A LimitRange is a policy used to apply resource limits to objects, like pods, within a namespace.

It offers several constraints that can:

  • Restrict the minimum and maximum resource usage for each pod or container in a namespace.
  • Set minimum and maximum limits for the storage space that each PVC can request within a namespace.
  • Control the ratio between the request and limit for a resource within a namespace.
  • Set default requests and limits for compute resources within a namespace and automatically apply them to multiple containers at runtime.
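A minimal sketch covering the first and last constraints in the list above, with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: development    # hypothetical namespace
spec:
  limits:
  - type: Container
    min:                    # smallest resources a container may request
      cpu: 100m
      memory: 128Mi
    max:                    # largest resources a container may use
      cpu: "2"
      memory: 2Gi
    defaultRequest:         # request applied when a container sets none
      cpu: 250m
      memory: 256Mi
    default:                # limit applied when a container sets none
      cpu: 500m
      memory: 512Mi
```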

Environment Variable

An environment variable is a variable whose value is set in the runtime environment of a container. Up to 30 environment variables can be defined in a container template, and you can modify them even after a workload is deployed, which keeps workload configuration flexible.

The function of setting environment variables on CCE is the same as that of specifying ENV in a Dockerfile.
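A minimal sketch of a container template that sets one plain environment variable and one sourced from a secret; the image and secret name are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: my-app:1.0            # hypothetical image
    env:
    - name: LOG_LEVEL            # plain value set directly
      value: info
    - name: DB_PASSWORD          # value pulled from a secret at runtime
      valueFrom:
        secretKeyRef:
          name: db-credentials   # hypothetical secret name
          key: password
```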

Chart

For your Kubernetes clusters, you can use Helm to manage software packages, which are called charts. Helm is to Kubernetes what apt is to Ubuntu or what yum is to CentOS. Helm allows you to quickly search for, download, and install charts.

Charts are a packaging format used by Helm. They describe a group of related cluster resource definitions, not an actual container image package. A Helm chart contains a series of YAML files used to deploy Kubernetes applications. You can customize some parameter settings in a Helm chart. When installing a chart, Helm deploys resources in the cluster based on the YAML files defined in the chart. Related container images are not included in the chart. They are pulled from the image repository defined in the YAML files.

Application developers need to push container image packages to the image repository, use Helm charts to package dependencies, and preset some key parameters to simplify application deployment.

Application users can use Helm to search for charts and customize parameter settings. Helm installs applications and their dependencies in the cluster based on the YAML files in a chart. Application users can search for, install, upgrade, roll back, and uninstall applications without defining complex deployment files.
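As a sketch, a chart directory contains a Chart.yaml describing the package, a values.yaml holding customizable parameters, and a templates/ directory with the resource YAML files. A hypothetical Chart.yaml might look like this:

```yaml
apiVersion: v2             # chart API version used by Helm 3
name: my-app               # hypothetical chart name
description: Deploys my-app and its dependencies
version: 0.1.0             # version of the chart itself
appVersion: "1.0"          # version of the application being packaged
dependencies:              # dependency charts pulled in automatically
- name: redis              # hypothetical dependency
  version: 17.x.x
  repository: https://charts.example.com   # hypothetical chart repository
```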

For details, see Chart Overview.
