Updated on 2025-09-05 GMT+08:00

Storage Overview

Container Storage

The Kubernetes Container Storage Interface (CSI) is a standardized storage add-on framework launched by the Cloud Native Computing Foundation (CNCF). It aims to decouple storage systems from Kubernetes so that storage services can be accessed as add-ons without modifying the Kubernetes core code. CCE container storage is built on the CSI and supports a wide range of storage types, including block storage, file storage, and object storage, to meet the needs of both stateless and stateful applications. It is fully compatible with the native Kubernetes storage system and also supports native storage forms such as emptyDir, hostPath, Secret, and ConfigMap.

CCE allows workload pods to use multiple types of storage:

  • In terms of implementation, storage can be provided through the Container Storage Interface (CSI) or Kubernetes native storage.

    Type: CSI

    Description: A standardized plugin mechanism (corresponding to the CCE Everest add-on) designed to decouple storage systems from orchestration platforms, addressing the tight coupling between traditional in-tree storage plugins and the Kubernetes core code. Its core design defines a unified gRPC interface specification so that storage vendors can develop custom storage plugins in an out-of-tree manner. Storage functionality can then be built, compiled, and released independently, without embedding the plugin source code in the Kubernetes source code repository. You only need to declare your storage requirements through PVCs and PVs; the CSI driver handles the rest, enabling dynamic storage provisioning and management (see the example after this list). From Kubernetes 1.13 onward, CSI has been the officially recommended implementation for storage plugins.

    Type: Kubernetes native storage

    Description: Runs in an in-tree manner, where the storage plugin code is embedded directly in the Kubernetes core code repository. This means storage plugins are built, compiled, and released together with core Kubernetes components such as kubelet and kube-controller-manager. Storage plugins must be upgraded alongside Kubernetes and cannot be developed, upgraded, or extended independently of the Kubernetes release cycle, which limits the flexibility of supporting third-party storage systems.

  • In terms of storage media, storage can be classified as cloud storage, local storage, and Kubernetes resource objects.

    Type: Cloud storage

    Description: The storage is provided by cloud service vendors, and data is stored in remote storage systems accessed over the network. This enables centralized management and elastic scaling of storage resources. In a cluster, you only need to declare your storage requirements through PVCs and PVs. The CSI add-on automatically handles access to cloud storage, eliminating the need to manually configure underlying resources.

    Application scenario: Data that requires high availability or needs to be shared, for example, logs and media resources. Select a proper cloud StorageClass based on the application scenario. For details, see Cloud Storage Comparison.

    Type: Local storage

    Description: The storage media is the local data disk or memory of a node. The local PV is a custom StorageClass provided by CCE and is mounted through the CSI using PVCs and PVs. The other local storage types are native to Kubernetes.

    Application scenario: Non-HA data that requires high I/O and low latency. Select a proper local storage type based on the application scenario. For details, see Local Storage Comparison.

    Type: Kubernetes resource objects

    Description: ConfigMaps and Secrets are resources created in clusters. They are special storage types and are provided by tmpfs (a RAM-based file system) on the Kubernetes API server.

    Application scenario: ConfigMaps are used to inject configuration data into pods. Secrets are used to pass sensitive information such as passwords to pods.
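
The following is a minimal sketch of how these storage types come together in a workload: a PVC that requests an EVS disk through the csi-disk StorageClass provided by CCE (the CSI add-on provisions the disk dynamically), plus a pod that mounts the resulting volume along with a ConfigMap and a Secret. All resource names (my-evs-pvc, storage-demo, app-config, app-secret), the image, and the mount paths are illustrative, and the ConfigMap and Secret are assumed to already exist.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-evs-pvc               # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce              # EVS is non-shared block storage, so one node at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk     # EVS StorageClass provided by CCE; the CSI add-on provisions the disk
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo             # hypothetical name
  namespace: default
spec:
  containers:
    - name: app
      image: nginx:alpine        # example image
      volumeMounts:
        - name: data
          mountPath: /data               # EVS disk provided through the PVC
        - name: config
          mountPath: /etc/app/config     # configuration data injected from a ConfigMap
        - name: credentials
          mountPath: /etc/app/secret     # sensitive data injected from a Secret
          readOnly: true
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-evs-pvc
    - name: config
      configMap:
        name: app-config         # assumed to already exist
    - name: credentials
      secret:
        secretName: app-secret   # assumed to already exist

Depending on the cluster, the EVS PVC may require additional parameters such as the disk type; see Using an EVS Disk Through a Dynamic PV for the exact fields.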

Cloud Storage Comparison

Definition

  • EVS: A block storage service that operates in shared storage resource pools. It offers scalable, high-performance, and reliable storage volumes for cloud servers. EVS disks come in various specifications and are cost-effective.
  • SFS Turbo: Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage, which is highly available and stable, to support small files and applications requiring low latency and high IOPS. You can use SFS Turbo for high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications.
  • OBS: Massive, secure, reliable, and cost-effective storage for any type and size of data. You can use it for enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios.

Data storage logic

  • EVS: Stores binary data and cannot directly store files. To store files, create a file system on the disk first.
  • SFS Turbo: Stores files and organizes and presents data in a hierarchy of files and folders.
  • OBS: Stores objects. Files can be stored directly; system metadata is generated automatically and can also be customized by users.

Access mode

  • EVS: Accessible only after being attached to ECSs or BMSs and initialized.
  • SFS Turbo: Supports the Network File System (NFS) protocol (NFSv3 only). You can seamlessly integrate existing applications and tools with SFS Turbo.
  • OBS: Accessible through the Internet or Direct Connect (DC). Specify the bucket address and use transmission protocols such as HTTP or HTTPS.

Static storage volumes

  • EVS: Supported. For details, see Using an Existing EVS Disk Through a Static PV (a minimal sketch follows this table).
  • SFS Turbo: Supported. For details, see Using an Existing SFS Turbo File System Through a Static PV.
  • OBS: Supported. For details, see Using an Existing OBS Bucket Through a Static PV.

Dynamic storage volumes

  • EVS: Supported. For details, see Using an EVS Disk Through a Dynamic PV.
  • SFS Turbo: Supported for SFS Turbo subdirectories. For details, see (Recommended) Creating an SFS Turbo Subdirectory Using a Dynamic PV.
  • OBS: Supported. For details, see Using an OBS Bucket Through a Dynamic PV.

Features

  • EVS: Non-shared storage. Each volume can be mounted to only one node.
  • SFS Turbo: Shared storage featuring high performance and bandwidth.
  • OBS: Shared, user-mode file system.

Application scenarios

  • EVS: HPC, enterprise core cluster applications, enterprise application systems, and development and testing.
    NOTE: HPC applications here refer to those requiring high-speed, high-IOPS storage, such as industrial design and energy exploration.
  • SFS Turbo: High-traffic websites, log storage, DevOps, and enterprise OA.
  • OBS: Big data analytics, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks).

Capacity

  • EVS: TB
  • SFS Turbo: General-purpose: TB
  • OBS: EB

Latency

  • EVS: 1–2 ms
  • SFS Turbo: General-purpose: 1–5 ms
  • OBS: 10 ms

Max. IOPS

  • EVS: 2,200–256,000, depending on the flavor
  • SFS Turbo: General-purpose: up to 100,000
  • OBS: Tens of millions

Bandwidth

  • EVS: MB/s
  • SFS Turbo: General-purpose: up to GB/s
  • OBS: TB/s
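
In contrast to dynamic provisioning, the static approach listed above uses an existing storage resource: a PV describing the resource is created first, and a PVC is then bound to it by name. The following is a minimal sketch for an existing EVS disk; the resource names, the disk ID placeholder, and the driver name (disk.csi.everest.io) are assumptions for illustration, and the linked documents list the fields actually required, such as the disk type and node affinity.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-evs-pv                   # hypothetical name
spec:
  capacity:
    storage: 10Gi                         # size of the existing disk
  accessModes:
    - ReadWriteOnce                       # EVS: non-shared, mounted to one node at a time
  persistentVolumeReclaimPolicy: Retain   # keep the disk when the PV is released
  storageClassName: csi-disk
  csi:
    driver: disk.csi.everest.io           # assumed Everest CSI driver name for EVS
    volumeHandle: <existing-evs-disk-id>  # ID of the existing EVS disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-evs-pvc                  # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
  volumeName: existing-evs-pv             # bind explicitly to the pre-created PV

For shared storage such as SFS Turbo or OBS, the same pattern applies, but the ReadWriteMany access mode can be used so that multiple nodes mount the volume at the same time.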

Local Storage Comparison

Definition

  • Local PV: The node's local data disks form a storage pool (volume group) through LVM. LVM divides the pool into logical volumes (LVs), which are mounted to pods.
  • Local Ephemeral Volume: Kubernetes-native emptyDir whose storage medium is an LV: the node's local data disks form a storage pool (volume group) through LVM, and LVs created from the pool are used as the storage medium of emptyDir and mounted to pods. LVs deliver better performance than the default storage medium of emptyDir.
  • emptyDir: Kubernetes-native emptyDir. Its lifecycle is the same as that of the pod, and memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost.
  • hostPath: Mounts a file or directory of the host where the pod runs to a specified mount point in the pod.

Features

  • Local PV: Low-latency, high-I/O, non-HA PV. Local PVs are non-shared storage and are bound to nodes through labels, so a volume can be mounted to only a single pod.
  • Local Ephemeral Volume: Local ephemeral volume whose storage space comes from local LVs.
  • emptyDir: Local ephemeral volume whose storage space comes from the local kubelet root directory or memory.
  • hostPath: Mounts files or directories of the host file system. Host directories can be automatically created. Pods can be migrated (they are not bound to nodes).

Storage volume mounting

  • Local PV: Static storage volumes are not supported. Dynamic provisioning is supported; for details, see Using a Local PV Through a Dynamic PV.
  • Local Ephemeral Volume: For details, see Local EV.
  • emptyDir: For details, see Temporary Path.
  • hostPath: For details, see hostPath.

Application scenarios

  • Local PV: Applications that require high I/O and have built-in HA solutions, for example, MySQL deployed in HA mode.
  • Local Ephemeral Volume and emptyDir:
      • Scratch space, such as for a disk-based merge sort
      • Checkpointing a long computation for recovery from crashes
      • Holding files that a content manager container fetches while a web server container serves the data
  • hostPath: Scenarios that require access to node files. For example, if Docker is used, you can use hostPath to mount the node's /var/lib/docker directory.
    NOTICE: Avoid using hostPath volumes as much as possible, as they are prone to security risks. If hostPath volumes must be used, restrict them to the required files or directories and mount them in read-only mode. (A minimal emptyDir and hostPath sketch follows this table.)
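
To make the emptyDir and hostPath columns concrete, the following is a minimal sketch of a pod that uses a memory-backed emptyDir volume as scratch space and mounts a host directory in read-only mode, in line with the notice above. The pod name, image, and paths are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: local-storage-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:alpine           # example image
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch   # ephemeral working space, deleted together with the pod
        - name: host-logs
          mountPath: /host-logs
          readOnly: true            # hostPath mounted read-only, as recommended above
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory              # back the emptyDir with RAM instead of node disk
        sizeLimit: 256Mi
    - name: host-logs
      hostPath:
        path: /var/log              # existing directory on the node (illustrative)
        type: Directory

Local PVs and local ephemeral volumes are provisioned through CCE-specific StorageClasses; see Using a Local PV Through a Dynamic PV and Local EV for their configuration.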

Enterprise Project Support

To use this function, the Everest add-on must be upgraded to v1.2.33 or later.

  • Automatically creating storage:

    When you create an EVS or OBS PVC using a StorageClass in CCE, you can specify an enterprise project to which the created storage resources (EVS disks or OBS buckets) will belong. This can be either the default enterprise project or the enterprise project the cluster belongs to.

    If no enterprise project is specified, the enterprise project specified in StorageClass will be used by default for creating storage resources.
    • For a custom StorageClass, you can specify an enterprise project in StorageClass. For details, see Through the Console. If no enterprise project is specified in StorageClass, the default enterprise project is used.
    • For the csi-disk and csi-obs storage classes provided by CCE, the created storage resources belong to the default enterprise project.
  • Using existing storage:

    When you create a PVC using a PV, ensure that the everest.io/enterprise-project-id value specified in the PVC matches the one in the PV, because an enterprise project was specified when the storage resource was created. Otherwise, the PVC and PV cannot be bound. A minimal sketch follows.
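
The following is a minimal sketch of how the enterprise project ID might be carried on these objects: as a parameter of a custom StorageClass for automatically created storage, and as matching everest.io/enterprise-project-id values on the PVC and PV for existing storage. The provisioner name, the exact placement of the field, and all names and IDs are assumptions for illustration; refer to Through the Console and the static PV documents for the authoritative configuration.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-disk-ep                                   # hypothetical custom StorageClass
provisioner: everest-csi-provisioner                  # assumed provisioner name of the Everest add-on
parameters:
  everest.io/enterprise-project-id: "<project-id>"    # enterprise project assigned to created EVS disks
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-storage-pvc                          # hypothetical name, bound to a pre-created PV
  annotations:
    everest.io/enterprise-project-id: "<project-id>"  # must match the value carried by the PV
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
  volumeName: existing-storage-pv                     # the PV describing the existing storage resource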

Helpful Links