


On-disk files in a container are ephemeral: they are lost when the container crashes, and they are difficult to share between containers running together in a pod. The Kubernetes volume abstraction solves both of these problems. Volumes cannot be created independently; they are defined in the pod spec.

All containers in a pod can access its volumes, provided the volumes have been mounted into those containers. Volumes can be mounted to any directory in a container.

The following figure shows how a storage volume is used between containers in a pod.

A volume ceases to exist when the pod it is mounted to is deleted. However, files in the volume may outlive the volume, depending on the volume type.

Volume Types

Volumes can be classified into local volumes and cloud volumes.

  • Local volumes
    CCE supports the following five types of local volumes. For details about how to use them, see Using Local Disks as Storage Volumes. A minimal example of declaring one in a pod spec follows this list.
    • emptyDir: an empty volume used for temporary storage.
    • hostPath: mounts a directory on the host (node) into your container so that the container can read data from the host.
    • ConfigMap: references the data stored in a ConfigMap for use by containers.
    • Secret: references the data stored in a secret for use by containers.
    • Local PV: uses a local disk on a node to persistently store container data.
  • Cloud volumes

    CCE supports the following types of cloud volumes:

    • EVS
    • SFS Turbo
    • OBS
    • SFS
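
The following is a minimal sketch of declaring and mounting a volume in a pod spec, using emptyDir as an example. The pod name, image, and mount path are placeholders, not values from this document:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example            # Placeholder name
spec:
  containers:
  - name: app
    image: nginx:alpine             # Example image; any image works
    volumeMounts:
    - name: scratch                 # Must match the volume name below
      mountPath: /tmp/scratch       # A volume can be mounted to any directory
  volumes:
  - name: scratch
    emptyDir: {}                    # Temporary storage that is deleted with the pod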


You can develop plug-ins based on the Kubernetes Container Storage Interface (CSI) to support specific storage volumes.

CCE provides the everest storage add-on so that you can use cloud storage services such as EVS and OBS. You can install this add-on when creating a cluster.

PV and PVC

Kubernetes provides PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) to abstract the details of how storage is provided from how it is consumed. You can request storage of a specific size when needed, just as pods can request specific amounts of resources (CPU and memory).

  • PV: A PV is a persistent storage volume in a cluster. Like a node, a PV is a cluster-level resource.
  • PVC: A PVC describes a workload's request for storage resources. The request consumes existing PVs in the cluster; if no suitable PV is available, the underlying storage and a PV can be dynamically created. When creating a PVC, you describe the attributes of the requested persistent storage, such as the volume size and the read/write permissions.

A PVC is bound to a PV, and a pod references the PVC to use the storage resources. The following figure shows the relationship between PVs and PVCs.

Figure 1 PVC-to-PV binding

PVs describe storage resources in the cluster, and PVCs are requests for those resources. The following sections describe how to use kubectl to connect to storage resources.
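
As an illustration of static provisioning, the following sketch declares an existing EVS disk as a PV and claims it with a PVC. The names and the disk ID are placeholders; the driver name disk.csi.everest.io follows the CCE everest documentation, and real deployments typically need additional, region-specific parameters:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-evs-example               # Placeholder name
spec:
  accessModes:
  - ReadWriteOnce                    # EVS is non-shared: one node at a time
  capacity:
    storage: 10Gi                    # Size of the underlying disk
  csi:
    driver: disk.csi.everest.io      # everest CSI driver for EVS (per CCE docs)
    volumeHandle: <your-evs-disk-id> # ID of a pre-created EVS disk
    fsType: ext4
  storageClassName: csi-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-example              # Placeholder name
spec:
  accessModes:
  - ReadWriteOnce                    # Must match the PV
  resources:
    requests:
      storage: 10Gi                  # Requested storage size
  storageClassName: csi-disk
  volumeName: pv-evs-example         # Bind explicitly to the PV above

A pod then consumes the claim through a volume of type persistentVolumeClaim, in the same way the emptyDir volume is mounted in the earlier example.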

If you do not want to create storage resources or PVs manually, you can use StorageClasses.


A StorageClass describes the classes of storage available in the cluster. You specify a StorageClass when creating a PVC or PV. As of now, CCE provides storage classes such as csi-disk, csi-nas, and csi-obs by default. When defining a PVC, you can use storageClassName to automatically create a PV of the corresponding type along with the underlying storage resources.

Run the following command to query the storage classes that CCE supports. You can use the CSI plug-in provided by CCE to define a custom storage class, which functions similarly to the default storage classes in CCE; a sketch follows the command output.

# kubectl get sc
NAME                PROVISIONER                     AGE
csi-disk            everest-csi-provisioner         17d          # Storage class for EVS disks
csi-disk-topology   everest-csi-provisioner         17d          # Storage class for EVS disks with delayed binding
csi-nas             everest-csi-provisioner         17d          # Storage class for SFS file systems
csi-obs             everest-csi-provisioner         17d          # Storage class for OBS buckets
csi-sfsturbo        everest-csi-provisioner         17d          # Storage class for SFS Turbo file systems
csi-local-topology  everest-csi-provisioner         17d          # Local PV
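
A custom storage class points at the same everest provisioner as the built-in classes. The sketch below is illustrative only: the class name is a placeholder, and the parameter keys and values follow the patterns documented for everest (check the CCE documentation for the values valid in your region):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-disk-custom                                      # Placeholder name
provisioner: everest-csi-provisioner                         # Same provisioner as the built-in classes
parameters:
  csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io    # EVS driver (per CCE docs)
  csi.storage.k8s.io/fstype: ext4                            # File system type
  everest.io/disk-volume-type: SAS                           # Assumed example EVS disk type
reclaimPolicy: Delete                                        # Delete the disk when the PV is released
volumeBindingMode: Immediate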

After a StorageClass is configured, PVs can be automatically created and maintained. You only need to specify the StorageClass when creating a PVC, which greatly reduces the workload.
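
For example, the following PVC dynamically provisions a 10 GiB EVS disk through the built-in csi-disk storage class. This is a minimal sketch with a placeholder name; depending on the cluster version, CCE may require additional annotations, so treat it as an outline rather than a complete manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic-example      # Placeholder name
spec:
  accessModes:
  - ReadWriteOnce                # EVS volumes attach to a single node
  resources:
    requests:
      storage: 10Gi              # Size of the EVS disk to create
  storageClassName: csi-disk     # Built-in class backed by everest-csi-provisioner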

Currently, SFS file systems are sold out, so they cannot be automatically created using the csi-nas storage class.

Cloud Services for Container Storage

CCE allows you to mount local and cloud storage volumes listed in Volume Types to your pods. Their features are described below.

Figure 2 Volume types supported by CCE
Table 1 Detailed description of cloud storage services





Definition

  • EVS: EVS offers scalable block storage for cloud servers. With high reliability, high performance, and rich specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications.
  • SFS: Expandable to petabytes, SFS provides fully hosted shared file storage. It is highly available and stable, and suited to data- and bandwidth-intensive applications such as HPC, media processing, file sharing, content management, and web services.
  • OBS: OBS is a stable, secure, and easy-to-use object storage service that lets you inexpensively store data of any format and size. You can use it for enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios.
  • SFS Turbo: Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage. It is highly available and stable, and supports small files and applications requiring low latency and high IOPS. You can use SFS Turbo for high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications.

Data storage logic

  • EVS: Stores binary data and cannot directly store files. To store files, you need to format the file system first.
  • SFS: Stores files and sorts and displays data in the hierarchy of files and folders.
  • OBS: Stores objects. Files that are directly stored automatically generate system metadata, which can also be customized by users.
  • SFS Turbo: Stores files and sorts and displays data in the hierarchy of files and folders.

Access mode

  • EVS: Accessible only after being mounted to ECSs or BMSs and initialized.
  • SFS: Mounted to ECSs or BMSs using network protocols. A network address must be specified or mapped to a local directory for access.
  • OBS: Accessible through the Internet or Direct Connect (DC). You need to specify the bucket address and use transmission protocols such as HTTP and HTTPS.
  • SFS Turbo: Supports the Network File System (NFS) protocol (NFSv3 only). You can seamlessly integrate existing applications and tools with SFS Turbo.

Static provisioning

  • Supported by EVS, SFS, OBS, and SFS Turbo.

Dynamic provisioning

  • EVS, SFS, and OBS: Supported.
  • SFS Turbo: Not supported.

Features

  • EVS: Non-shared storage. Each volume can be mounted to only one node.
  • SFS: Shared storage featuring high performance and throughput.
  • OBS: Shared, user-mode file system.
  • SFS Turbo: Shared storage featuring high performance and bandwidth.

Usage scenarios

  • EVS: HPC, enterprise core cluster applications, enterprise application systems, and dev/test. (HPC applications here require high-speed and high-IOPS storage, such as industrial design and energy exploration.)
  • SFS: HPC, media processing, content management, web services, big data, and analysis applications. (HPC applications here require high bandwidth and shared file storage, such as gene sequencing and image rendering.)
  • OBS: Big data analysis, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks).
  • SFS Turbo: High-traffic websites, log storage, DevOps, and enterprise OA.

Capacity

  • SFS: SFS 1.0: PB

Latency

  • EVS: 1-2 ms
  • SFS: SFS 1.0: 3-20 ms
  • OBS: 10 ms
  • SFS Turbo: 1-2 ms

IOPS

  • EVS: 33,000 for a single disk
  • SFS: SFS 1.0: 2K
  • OBS: Tens of millions

Bandwidth

  • SFS: SFS 1.0: GB/s



Notes and Constraints

  • A single user can create a maximum of 100 OBS buckets on the console. If you have a large number of CCE workloads and want to mount an OBS bucket to each of them, you may quickly run out of buckets. In this scenario, you are advised to access OBS through the OBS API or SDK instead of mounting OBS buckets to workloads on the console.
  • For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS volumes mounted, the existing pods cannot be read or written when a new pod is scheduled to another node.

    For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS volume mounted, a new pod cannot be started because EVS disks cannot be attached.

  • When a subpath volume is unmounted in a cluster of v1.19 or earlier, all folders in the subpath are traversed. If there are a large number of folders, the traversal takes a long time, and so does the volume unmount. You are advised not to create too many folders in the subpath.
  • The maximum size of a single file in an OBS volume mounted to a CCE cluster is far smaller than the maximum size defined by obsfs.

Notice on Using Add-ons

  • To use the CSI plug-in (the everest add-on in CCE), your cluster must be using Kubernetes 1.15 or later. This add-on is installed by default when you create a cluster of v1.15 or later. The FlexVolume plug-in (the storage-driver add-on in CCE) is installed by default when you create a cluster of v1.13 or earlier.
  • If your cluster is upgraded from v1.13 to v1.15, storage-driver is replaced by everest (v1.1.6 or later) for container storage. The takeover does not affect the original storage functions.
  • In version 1.2.0 of the everest add-on, key authentication is optimized for OBS. After the everest add-on is upgraded from a version earlier than 1.2.0, restart all workloads that use OBS in the cluster, as shown below. Otherwise, those workloads may be unable to use OBS.
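
For example, if such a workload is a Deployment, it can be restarted with kubectl rollout restart (the namespace and workload name below are placeholders):

# kubectl -n <namespace> rollout restart deployment <obs-workload-name>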

Differences Between CSI and FlexVolume Plug-ins

Table 2 CSI and FlexVolume

CSI (CCE add-on: everest)

CSI was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads. Using CSI, third-party storage providers can deploy plug-ins that expose new storage systems in Kubernetes without having to touch the core Kubernetes code. In CCE, the everest add-on is installed by default in clusters of Kubernetes v1.15 and later to connect to storage services (EVS, OBS, SFS, and SFS Turbo).

The everest add-on consists of two parts:

  • everest-csi-controller for storage volume creation, deletion, capacity expansion, and cloud disk snapshots
  • everest-csi-driver for mounting, unmounting, and formatting storage volumes on nodes

For details, see everest.

CCE will mirror the Kubernetes community by providing continuous support for updated CSI capabilities.

FlexVolume (CCE add-on: storage-driver)

FlexVolume is an out-of-tree plug-in interface that has existed in Kubernetes since version 1.2 (before CSI). CCE provided FlexVolume volumes through the storage-driver add-on installed in clusters of Kubernetes v1.13 and earlier versions. This add-on connects clusters to storage services (EVS, OBS, SFS, and SFS Turbo).

For details, see storage-driver.

For created clusters of v1.13 or earlier, the installed FlexVolume plug-in (the CCE add-on storage-driver) can still be used. CCE has stopped providing update support for this add-on, and you are advised to upgrade these clusters.

  • A cluster can use only one type of storage plug-in.
  • The FlexVolume plug-in cannot be replaced by the CSI plug-in in clusters of v1.13 or earlier. You can only upgrade these clusters. For details, see Cluster Upgrade.

Checking Storage Add-ons

  1. Log in to the CCE console.
  2. In the navigation tree on the left, click Add-ons.
  3. Click the Add-on Instance tab.
  4. Select a cluster in the upper right corner. The default storage add-on installed during cluster creation is displayed.
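
Alternatively, you can check from the command line whether the add-on's components are running. As a sketch (assuming the everest add-on, whose components run in the kube-system namespace):

# kubectl get pods -n kube-system | grep everest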