
Overview

Container Storage

CCE container storage is implemented based on the Kubernetes Container Storage Interface (CSI). CCE integrates multiple types of cloud storage to cover different application scenarios, and it is fully compatible with Kubernetes-native storage volumes such as emptyDir, hostPath, secret, and configMap.
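
For instance, a native volume such as emptyDir can be declared directly in a pod spec. The following is a minimal sketch using the standard Kubernetes API; the pod, container, and volume names are illustrative placeholders, not CCE-specific values.

```yaml
# A pod using a Kubernetes-native emptyDir volume as scratch space.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch    # where the volume appears in the container
  volumes:
  - name: scratch
    emptyDir: {}                 # node-local storage, deleted with the pod
```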

Figure 1 Container storage type
CCE allows you to mount cloud storage volumes to your pods. Their features are described below.
Table 1 Cloud storage comparison

| Dimension | EVS | SFS | SFS Turbo | OBS |
| --- | --- | --- | --- | --- |
| Definition | EVS offers scalable block storage for cloud servers. With high reliability, high performance, and rich specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications. | Expandable to petabytes, SFS provides fully hosted shared file storage that is highly available and stable, suitable for data- and bandwidth-intensive applications in HPC, media processing, file sharing, content management, and web services. | Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage that is highly available and stable, suited to small files and applications requiring low latency and high IOPS. You can use SFS Turbo for high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications. | Object Storage Service (OBS) provides massive, secure, and cost-effective storage for data of any type and size. You can use it for enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios. |
| Data storage logic | Stores binary data; files cannot be stored directly. To store files, create a file system on the disk first. | Stores files, organized and presented in a hierarchy of files and folders. | Stores files, organized and presented in a hierarchy of files and folders. | Stores objects. System metadata is generated automatically for uploaded files and can also be customized by users. |
| Access mode | Accessible only after being mounted to an ECS or BMS and initialized. | Mounted to ECSs or BMSs over network protocols. A network address must be specified or mapped to a local directory for access. | Supports the Network File System (NFS) protocol (NFSv3 only), so you can seamlessly integrate existing applications and tools. | Accessible over the Internet or Direct Connect (DC). Specify the bucket address and use a transmission protocol such as HTTP or HTTPS. |
| Static provisioning | Supported. For details, see Using an Existing EVS Disk Through a Static PV. | Supported. For details, see Using an Existing SFS File System Through a Static PV. | Supported. For details, see Using an Existing SFS Turbo File System Through a Static PV. | Supported. For details, see Using an Existing OBS Bucket Through a Static PV. |
| Dynamic provisioning | Supported. For details, see Using an EVS Disk Through a Dynamic PV. | Supported. For details, see Using an SFS File System Through a Dynamic PV. | Not supported | Supported. For details, see Using an OBS Bucket Through a Dynamic PV. |
| Features | Non-shared storage. Each volume can be mounted to only one node. | Shared storage featuring high performance and throughput | Shared storage featuring high performance and bandwidth | Shared, user-mode file system |
| Usage | HPC, enterprise core cluster applications, enterprise application systems, and dev/test (HPC here means applications requiring high-speed, high-IOPS storage, such as industrial design and energy exploration.) | HPC, media processing, content management, web services, big data, and analytics applications (HPC here means applications requiring high bandwidth and shared file storage, such as gene sequencing and image rendering.) | High-traffic websites, log storage, DevOps, and enterprise OA | Big data analytics, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks) |
| Capacity | TB-level | SFS 1.0: PB-level | General-purpose: TB-level | EB-level |
| Latency | 1–2 ms | SFS 1.0: 3–20 ms | General-purpose: 1–5 ms | 10 ms |
| IOPS/TPS | 33,000 for a single disk | SFS 1.0: 2,000 | General-purpose: up to 100,000 | Tens of millions |
| Bandwidth | MB/s-level | SFS 1.0: GB/s-level | General-purpose: up to GB/s-level | TB/s-level |
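
As a concrete illustration of the dynamic provisioning rows above, the sketch below requests an EVS-backed volume through the csi-disk storage class provided by CCE and mounts it into a pod. This is a minimal sketch only: real clusters typically require additional CCE-specific annotations (for example, the EVS disk type and availability zone), so treat Using an EVS Disk Through a Dynamic PV as authoritative. All object names here are placeholders.

```yaml
# Minimal sketch: dynamically provision an EVS disk via the csi-disk
# StorageClass and mount it into a pod. Extra CCE annotations (disk type,
# AZ) are omitted here and may be required in practice.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: evs-pvc-demo                 # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce                    # EVS is non-shared: one node at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk         # storage class provided by CCE
---
apiVersion: v1
kind: Pod
metadata:
  name: evs-demo-pod                 # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: evs-pvc-demo        # binds to the PVC above
```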

Enterprise Project Support

To use this function, the everest add-on must be upgraded to v1.2.33 or later.

  • Automatically creating storage:

    CCE allows you to specify an enterprise project when creating EVS disk and OBS PVCs. The created storage resources (EVS disks and OBS buckets) belong to the specified enterprise project, which can be either the enterprise project to which the cluster belongs or the default enterprise project.

    If no enterprise project is specified, the enterprise project defined in the StorageClass is used by default when storage resources are created.
    • For a custom StorageClass, you can specify an enterprise project in the StorageClass. For details, see Specifying an Enterprise Project for Storage Classes. If no enterprise project is specified in the StorageClass, the default enterprise project is used.
    • For the csi-disk and csi-obs storage classes provided by CCE, the created storage resources belong to the default enterprise project.
  • Using existing storage:

    When you create a PVC to use an existing PV, ensure that everest.io/enterprise-project-id is set to the same value in both the PVC and the PV, because an enterprise project was already specified when the storage resource was created. Otherwise, the PVC and PV cannot be bound; see the sketch below.
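
The sketch below illustrates that binding rule for an existing EVS disk. It assumes, based on typical everest static-PV examples, that the ID is carried as a PVC annotation and as a CSI volume attribute on the PV, and that the EVS driver is disk.csi.everest.io; field placement can differ by storage type, and <evs-disk-id> and <project-id> are placeholders to fill in.

```yaml
# Minimal sketch: bind a PVC to an existing PV whose storage resource was
# created under a specific enterprise project. The key point from the text
# above: everest.io/enterprise-project-id must carry the same value on
# both objects, or binding fails. Field placement is an assumption based
# on CCE static-PV examples.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-evs-pv              # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  csi:
    driver: disk.csi.everest.io      # everest CSI driver for EVS
    volumeHandle: <evs-disk-id>      # ID of the existing EVS disk
    volumeAttributes:
      everest.io/enterprise-project-id: <project-id>   # must match the PVC
  persistentVolumeReclaimPolicy: Retain
  storageClassName: csi-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-evs-pvc             # hypothetical name
  annotations:
    everest.io/enterprise-project-id: <project-id>     # must match the PV
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
  volumeName: existing-evs-pv        # bind to the specific PV above
```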