PVs, PVCs, and Storage Classes
hostPath volumes are a form of persistent storage, as described earlier, but they are tightly coupled to a specific node. If a node hosting a hostPath volume fails and the pods on that node are rescheduled to a different node, those pods can no longer access the original data, which resides only on the failed node. This can result in data loss.
To ensure that pods retain access to their data after rescheduling events such as node failures or upgrades, Kubernetes relies on persistent network storage. Unlike local storage, network storage is not tied to any specific node and remains accessible even when pods are relocated, which ensures service continuity. Network storage comes in various forms, including block storage, file storage, and object storage, and cloud service providers typically offer several storage services of each type.
To hide these differences, Kubernetes provides PVs and PVCs. They allow developers to request the required storage capacity and access mode, much like requesting CPUs and memory, without worrying about the specific implementation. Kubernetes then provisions and mounts the appropriate underlying storage. This design decouples storage resources from applications: users only specify their needs, the platform handles the allocation, and deployments become far more flexible and portable.
- A PV represents a persistent storage volume. It typically points to a piece of network storage, such as a network file system (NFS) export directory.
- A PVC specifies the attributes of the persistent storage that a pod requires, such as storage volume size and read/write permissions.
To allow a pod to use a PV, the Kubernetes cluster administrator needs to configure the network storage and provide PV descriptors to Kubernetes. You then only need to create a PVC and reference it in the pod's volumes so that the pod can store data. The following figure shows the interaction between a PV and a PVC.

CSI
Kubernetes offers the Container Storage Interface (CSI), which can be used to develop custom CSI add-ons that support specific storage backends while remaining decoupled from the underlying storage. For example, the everest-csi-controller and everest-csi-driver components developed by CCE run in the kube-system namespace and serve as the storage controller and driver, respectively. With everest-csi-controller and everest-csi-driver, you can use cloud storage services such as EVS, SFS, and OBS.
$ kubectl get po --namespace=kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
everest-csi-controller-6d796fb9c5-v22df   2/2     Running   0          9m11s
everest-csi-driver-snzrr                  1/1     Running   0          12m
everest-csi-driver-ttj28                  1/1     Running   0          12m
everest-csi-driver-wtrk6                  1/1     Running   0          12m
PVs
The following shows how to define a PV. In this example, a file system is created in SFS, with the file system ID 68e4a4fd-d759-444b-8265-20dc66c8c502 and the mount point sfs-nas01.cn-north-4b.myhuaweicloud.com:/share-96314776. To use this file system in CCE, create a PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  accessModes:
  - ReadWriteMany                  # Read/write mode
  capacity:
    storage: 10Gi                  # PV capacity
  csi:
    driver: nas.csi.everest.io     # Driver to be used
    fsType: nfs                    # File system type
    volumeAttributes:
      everest.io/share-export-location: sfs-nas01.cn-north-4b.myhuaweicloud.com:/share-96314776  # Mount point
    volumeHandle: 68e4a4fd-d759-444b-8265-20dc66c8c502   # File system ID
Fields under csi in this example are specifically designed for CCE clusters.
Next, create the PV and view its details.
$ kubectl create -f pv.yaml
persistentvolume/pv-example created

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-example   10Gi       RWX            Retain           Available                                   4s
RECLAIM POLICY defines how a PV is handled after its bound PVC is released. Retain means the PV is kept in the system even after the bound PVC is deleted. The Available status indicates that the PV is not yet bound and is available for use.
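The reclaim policy is set through the persistentVolumeReclaimPolicy field in the PV spec. A minimal sketch based on the pv-example manifest above (only the relevant field is shown; the other fields stay as in the earlier example):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  persistentVolumeReclaimPolicy: Retain  # Keep the PV and its data after the bound PVC is deleted;
                                         # Delete would remove the PV automatically instead
  # ... accessModes, capacity, and csi fields as shown earlier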
PVCs
Each PVC binds to exactly one PV. The following is an example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi        # Storage capacity
  volumeName: pv-example   # PV name
Create the PVC and view its details.
$ kubectl create -f pvc.yaml
persistentvolumeclaim/pvc-example created

$ kubectl get pvc
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-example   Bound    pv-example   10Gi       RWX                           9s
The PVC is in the Bound state, and the value of VOLUME is pv-example, indicating that the PVC has a PV bound.
Then, check the PV status.
$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pv-example   10Gi       RWX            Retain           Bound    default/pvc-example                          50s
The status of the PV is also Bound. The value of CLAIM is default/pvc-example, indicating that the PV is bound to the pvc-example PVC in the default namespace.
PVs are cluster-level resources and do not belong to any namespace. PVCs, in contrast, are namespace-level resources. PVs can be bound to PVCs in any namespace. In this example, the namespace name default followed by the PVC name is displayed under CLAIM.

Storage Classes
PVs and PVCs hide the differences between types of physical storage, but creating a PV can still be complex, especially the configuration of the csi fields. In addition, PVs are generally managed by the cluster administrator, so asking the administrator to create a PV every time storage with different attributes is needed is inconvenient.
To solve this problem, Kubernetes supports dynamic PV provisioning, which automates the creation of PVs. The cluster administrator can deploy a PV provisioner and define storage classes. In this way, you can select a desired storage class when creating a PVC. The PVC then transfers the storage class to the PV provisioner, which automatically creates a PV. In CCE, storage classes such as csi-disk, csi-nas, and csi-obs are supported. By including the storageClassName field in a PVC, CCE ensures that PVs are dynamically provisioned, with underlying storage resources created automatically.
You can run the command below to list the storage classes that CCE supports. You can also use the CSI add-ons provided by CCE to define custom storage classes, which work in the same way as the default ones.
# kubectl get sc
NAME                PROVISIONER               AGE
csi-disk            everest-csi-provisioner   17d   # Storage class for EVS disks
csi-disk-topology   everest-csi-provisioner   17d   # Storage class for EVS disks with delayed association
csi-nas             everest-csi-provisioner   17d   # Storage class for SFS file systems
csi-obs             everest-csi-provisioner   17d   # Storage class for OBS buckets
csi-sfsturbo        everest-csi-provisioner   17d   # Storage class for SFS Turbo file systems
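A custom storage class is itself a StorageClass object. The following is a minimal sketch of what such a definition could look like when backed by the everest provisioner; the class name is hypothetical, and the parameters a class supports depend on the CSI add-on:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-nas-class              # Hypothetical name for a custom class
provisioner: everest-csi-provisioner
reclaimPolicy: Delete             # PVs provisioned from this class are deleted with their PVCs
volumeBindingMode: Immediate      # Provision the PV as soon as the PVC is created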
Specify a storage class for creating a PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfs-auto-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-nas   # StorageClass

PVCs cannot be directly created using the csi-sfsturbo storage class. To use SFS Turbo storage, create an SFS Turbo file system and then statically provision a PV and PVC. For details, see Using an Existing SFS Turbo File System Through a Static PV.
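Statically provisioning SFS Turbo storage follows the same pattern as the SFS PV shown earlier: create a PV that points at the existing file system, then a PVC bound to it. A rough sketch is shown below; the driver name is an assumption to be verified against the everest add-on documentation, and the mount point and file system ID are placeholders to be taken from the actual SFS Turbo file system:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sfsturbo-example
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 500Gi                      # Should match the SFS Turbo file system size
  csi:
    driver: sfsturbo.csi.everest.io     # Assumed driver name; verify in the CCE documentation
    fsType: nfs
    volumeAttributes:
      everest.io/share-export-location: <sfs-turbo-mount-point>   # Placeholder
    volumeHandle: <sfs-turbo-file-system-id>                      # Placeholder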
Create the PVC and view the PVC and PV details.
$ kubectl create -f pvc2.yaml
persistentvolumeclaim/pvc-sfs-auto-example created

$ kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-sfs-auto-example   Bound    pvc-1f1c1812-f85f-41a6-a3b4-785d21063ff3   10Gi       RWX            csi-nas        29s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-1f1c1812-f85f-41a6-a3b4-785d21063ff3   10Gi       RWO            Delete           Bound    default/pvc-sfs-auto-example   csi-nas        20s
The command output shows that after a storage class is specified, a PVC and PV have been created and bound.
Once a storage class is specified, PVs are created and maintained automatically. You only need to set storageClassName when creating a PVC, which greatly reduces the workload. Note that the available storage class names vary by vendor; SFS is used here only as an example.
Using a PVC in a Pod
With a PVC, you can easily use persistent storage in a pod. In the pod template, you simply reference the name of the PVC in the volumes field and mount it to the pod, as shown in the following example. You can also create a PVC for a StatefulSet. For details, see StatefulSets.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: container-0
        volumeMounts:
        - mountPath: /tmp          # Mount path
          name: pvc-sfs-example
      restartPolicy: Always
      volumes:
      - name: pvc-sfs-example
        persistentVolumeClaim:
          claimName: pvc-example   # PVC name
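For StatefulSets, PVCs are usually declared through volumeClaimTemplates so that each replica gets its own dynamically provisioned PVC. The following is a minimal sketch; the object names are illustrative, the csi-nas storage class is reused from the earlier example, and the headless Service referenced by serviceName is assumed to exist:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: nginx              # Headless Service name (assumed to exist)
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:alpine
        volumeMounts:
        - mountPath: /tmp
          name: data
  volumeClaimTemplates:           # One PVC per replica, e.g. data-nginx-statefulset-0
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: csi-nas
      resources:
        requests:
          storage: 10Gi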