
Storage Overview

Container Storage

CCE Autopilot container storage is implemented based on the Kubernetes Container Storage Interface (CSI). CCE Autopilot integrates multiple types of cloud storage to cover different application scenarios, and it is fully compatible with Kubernetes native storage, such as emptyDir, hostPath, Secret, and ConfigMap volumes.

Figure 1 Container storage types

CCE Autopilot allows workload pods to use multiple types of storage:

  • In terms of implementation, storage is provided through either the Container Storage Interface (CSI) or Kubernetes native storage.

    CSI: An out-of-tree volume plugin standard that defines the container storage API. It allows storage vendors to provide standard, custom storage plugins, mounted through PVCs and PVs, without adding their plugin source code to the Kubernetes repository for unified build, compilation, and release. CSI is the recommended mechanism in Kubernetes 1.13 and later versions.

    Kubernetes native storage: An in-tree volume plugin that is built, compiled, and released with the Kubernetes repository.

  • In terms of storage media, storage can be classified as cloud storage, local storage, and Kubernetes resource objects.

    Cloud storage
    Description: The storage media is provided by storage vendors. Volumes of this type are mounted using PVCs and PVs.
    Application scenario: Data that requires high availability or needs to be shared, for example, logs and media resources. Select a cloud storage type suited to your application; for a comparison, see Cloud Storage Comparison.

    Local storage
    Description: Only emptyDir is supported. The lifecycle of an emptyDir volume is the same as that of its pod, and memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost.
    Application scenario: Non-HA data that requires high I/O and low latency. For details, see Local Storage.

    Kubernetes resource objects
    Description: ConfigMaps and Secrets are resources created in the cluster. They are special storage types, provided through tmpfs (a RAM-based file system) by the Kubernetes API server.
    Application scenario: ConfigMaps inject configuration data into pods; Secrets pass sensitive information, such as passwords, to pods. A short injection sketch follows this list.
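To make the "Kubernetes resource objects" entry concrete, the following is a minimal sketch of consuming a ConfigMap as environment variables and a Secret as mounted files. All object names, keys, and values here are illustrative placeholders, not CCE-specific requirements; the manifests use only standard Kubernetes fields.

```yaml
# Sketch: inject a ConfigMap as environment variables and mount a Secret
# as files. Names, keys, and values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:
  DB_PASSWORD: "example-only"   # placeholder; never store real secrets in manifests
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox:stable     # placeholder image
      command: ["sh", "-c", "env | grep LOG_LEVEL && cat /etc/secret/DB_PASSWORD && sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config    # all keys become environment variables
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: app-secret  # each key appears as a file under /etc/secret
```

Secret volumes mounted this way are backed by tmpfs, consistent with the description above, so their contents never land on node disks.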

Cloud Storage Comparison

Definition

  • EVS: EVS offers scalable block storage for cloud servers. With high reliability, high performance, and a rich set of specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications.
  • SFS: Expandable to petabytes, SFS provides fully hosted shared file storage. It is highly available and stable, suited to data- and bandwidth-intensive applications in HPC, media processing, file sharing, content management, and web services.
  • SFS Turbo: Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage that is highly available and stable, supporting small files and applications requiring low latency and high IOPS. You can use SFS Turbo for high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications.
  • OBS: OBS provides massive, secure, and cost-effective storage for data of any type and size. It suits scenarios such as enterprise backup/archiving, video on demand (VoD), and video surveillance.

Data storage logic

  • EVS: Stores only binary data. To store files, format the disk with a file system first.
  • SFS: Stores files, organizing and presenting data in a hierarchy of files and folders.
  • SFS Turbo: Stores files, organizing and presenting data in a hierarchy of files and folders.
  • OBS: Stores data as objects with metadata and unique identifiers. You can upload files directly to OBS; the system can generate metadata for the files, or you can customize it.

Access method

  • EVS: EVS disks can be used and accessed from applications only after being attached to ECSs or BMSs and initialized.
  • SFS: SFS file systems can be mounted to ECSs or BMSs over network protocols. A network address must be specified or mapped to a local directory for access.
  • SFS Turbo: Supports the Network File System (NFS) protocol (NFSv3 only), so existing applications and tools integrate seamlessly.
  • OBS: Accessible through the Internet or Direct Connect. A bucket address must be specified for access, and the HTTP and HTTPS transfer protocols are used.

Static storage volumes

  • EVS: Supported. For details, see Using an Existing EVS Disk Through a Static PV.
  • SFS: Supported. For details, see Using an Existing File System Through a Static PV.
  • SFS Turbo: Supported. For details, see Using an Existing SFS Turbo File System Through a Static PV.
  • OBS: Supported. For details, see Using an Existing OBS Bucket or Parallel File System Through a Static PV.

Dynamic storage volumes

  • EVS: Supported. For details, see Using an EVS Disk Through a Dynamic PV. A dynamic provisioning sketch also follows this table.
  • SFS: Supported. For details, see Using an SFS File System Through a Dynamic PV.
  • SFS Turbo: Supported for SFS Turbo subdirectories but not for entire SFS Turbo file systems. For details, see (Recommended) Creating an SFS Turbo Subdirectory Using a Dynamic PV.
  • OBS: Supported. For details, see Using an OBS Bucket or Parallel File System Through a Dynamic PV.

Highlights

  • EVS: Non-shared storage. Each volume can be mounted to only one node.
  • SFS: Shared storage providing high performance and high throughput.
  • SFS Turbo: Shared storage providing high performance and high bandwidth.
  • OBS: Shared storage with a user-mode file system.

Application scenarios

  • EVS: HPC, enterprise core cluster applications, enterprise application systems, and development and testing. (HPC here means applications that require high-speed, high-IOPS storage, such as industrial design and energy exploration.)
  • SFS: HPC, media processing, content management, web services, and big data and analytics applications. (HPC here means applications that require high bandwidth and shared file storage, such as gene sequencing and image rendering.)
  • SFS Turbo: High-traffic websites, log storage, DevOps, and enterprise OA.
  • OBS: Big data analytics, static website hosting, video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud drives (web disks).

Capacity

  • EVS: TB level
  • SFS: SFS 1.0: PB level; General Purpose File System (formerly SFS 3.0 Capacity-Oriented): EB level
  • SFS Turbo: General purpose: TB level
  • OBS: EB level

Latency

  • EVS: 1–2 ms
  • SFS: SFS 1.0: 3–20 ms; General Purpose File System (formerly SFS 3.0 Capacity-Oriented): 10 ms
  • SFS Turbo: General purpose: 1–5 ms
  • OBS: 10 ms

Max. IOPS

  • EVS: 2,200–256,000, depending on the flavor
  • SFS: SFS 1.0: 2,000; General Purpose File System (formerly SFS 3.0 Capacity-Oriented): millions
  • SFS Turbo: General purpose: up to 100,000
  • OBS: Tens of millions

Bandwidth

  • EVS: MB/s level
  • SFS: SFS 1.0: GB/s level; General Purpose File System (formerly SFS 3.0 Capacity-Oriented): TB/s level
  • SFS Turbo: General purpose: up to GB/s level
  • OBS: TB/s level
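As referenced in the "Dynamic storage volumes" row, the following is a minimal sketch of dynamically provisioning an EVS disk through the csi-disk StorageClass (named in Support for Enterprise Projects below) and mounting it into a pod. The PVC name, namespace, size, and image are placeholders, and a real cluster may require additional EVS parameters such as the disk type; see Using an EVS Disk Through a Dynamic PV for the authoritative fields.

```yaml
# Sketch: dynamically provision an EVS disk via the csi-disk StorageClass
# and mount it into a pod. Names, size, and image are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: evs-data              # placeholder PVC name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce           # EVS is non-shared; one node at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk  # EVS StorageClass provided by CCE Autopilot
---
apiVersion: v1
kind: Pod
metadata:
  name: evs-demo
  namespace: default
spec:
  containers:
    - name: app
      image: nginx:stable     # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data    # the provisioned EVS disk appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: evs-data
```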

Local Storage

An emptyDir volume provides ephemeral storage for a pod, and its lifecycle is tied to that pod. Memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost. For details, see emptyDir.

Highlights: emptyDir volumes are local ephemeral volumes. The storage space comes from the local kubelet root directory or memory.

Application scenarios:

  • Scratch space, such as for a disk-based merge sort
  • Checkpointing a long computation so it can recover from crashes
  • Holding files that a content-manager container fetches while a web server container serves the data
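The following is a minimal sketch of an emptyDir volume used as scratch space. With medium: Memory, the volume is backed by tmpfs and its contents count against the pod's memory, and everything is lost when the pod is deleted. The pod name, image, command, and size limit are placeholders.

```yaml
# Sketch: an emptyDir scratch volume backed by memory (tmpfs).
# Data is deleted together with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: worker
      image: busybox:stable        # placeholder image
      command: ["sh", "-c", "echo scratch > /work/tmp.txt && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /work         # scratch directory for the container
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory             # optional: use RAM instead of node disk
        sizeLimit: 256Mi           # cap the tmpfs size
```

Omitting medium: Memory gives a disk-backed emptyDir drawn from the local kubelet root directory, as described above.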

Support for Enterprise Projects

  • Creating storage automatically:

    When creating an EVS or OBS PVC through a StorageClass in CCE Autopilot, you can specify the enterprise project that the created storage resources (EVS disks and OBS buckets) will belong to. This can be either the default enterprise project or the one the cluster belongs to.

    If no enterprise project is specified, the enterprise project specified in the StorageClass is used by default. For the csi-disk and csi-obs storage classes provided by CCE Autopilot, the created storage resources belong to the default enterprise project.

  • Using existing storage:

    When you create a PVC from an existing PV, ensure that the everest.io/enterprise-project-id values specified in the PVC and the PV are the same, because an enterprise project was specified when the storage resource was created. Otherwise, the PVC and PV cannot be bound. A sketch of the matching fields follows.
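The sketch below shows where the matching everest.io/enterprise-project-id values would sit: as a volume attribute on the PV and as an annotation on the PVC. The field placement and the disk.csi.everest.io driver name follow the everest CSI add-on conventions used by CCE and are assumptions here; the IDs, names, and sizes are placeholders. Verify the exact fields against the static-PV guides linked above.

```yaml
# Sketch: the enterprise project ID must match between PV and PVC,
# or they cannot be bound. Driver name and field placement are assumed
# everest add-on conventions; IDs and names are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-evs-ep
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: disk.csi.everest.io            # assumed everest EVS driver name
    volumeHandle: <existing-evs-disk-id>   # placeholder: ID of the existing disk
    fsType: ext4
    volumeAttributes:
      everest.io/enterprise-project-id: <enterprise-project-id>
  persistentVolumeReclaimPolicy: Retain
  storageClassName: csi-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-ep
  annotations:
    everest.io/enterprise-project-id: <enterprise-project-id>  # must match the PV
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
  volumeName: pv-evs-ep                    # bind explicitly to the PV above
```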