Local PVs and EVs
Persistent and ephemeral volumes can be created only when the cluster version is v1.21.2-r0 or later. The everest add-on version must be 1.2.29 or later to support ephemeral volumes and 1.2.31 or later to support persistent volumes.
PVs and EVs
CCE allows you to set data disks on nodes to PersistentVolumes (PVs) and ephemeral volumes (EVs).
- PVs are combined into a storage pool (volume group) through LVM and then divided into logical volumes (LVs), which are mounted into containers. A PV that uses a persistent local disk as its medium is referred to as a local PV.
- An EV can be used as the medium of emptyDir. Like PVs, EVs are combined into a storage pool (volume group) through LVM and then divided into LVs, which are mounted into containers. Compared with the default medium of the native emptyDir, an EV delivers better performance.
PVs and EVs support the following write modes:
- Linear: A linear logical volume integrates one or more physical volumes. Data is written to the next physical volume when the previous one is used up.
- Striped: A striped logical volume stripes data into blocks of the same size and stores them in multiple physical volumes in sequence, allowing data to be concurrently read and written. Select this option only when multiple volumes exist.
Notes and Constraints
- Removing, deleting, resetting, or scaling in a node causes the PVC/PV data of any local PV associated with that node to be permanently lost, after which the PVC/PV can no longer be used. For details, see Removing a Node, Deleting a Node, Resetting a Node, and Scaling In a Node. When any of these operations occurs, pods that use the local PV are evicted from the node. New pods are then created but stay in the Pending state, because the PVCs they use carry a node label that conflicts with scheduling. After the node is reset, a pod may be scheduled to the reset node; in that case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC no longer exists.
- After a PV or an EV is created, do not manually delete the corresponding storage pool or detach the data disk. Otherwise, exceptions such as data loss may occur.
- To use ephemeral storage volumes, ensure that the /var/lib/kubelet/pods/ directory on the node is not mounted into any pod. Otherwise, pods that use such volumes may fail to be deleted.
Adding a PV or an EV
There are two ways to add a PV or an EV.
- When creating a node, you can add a data disk to the node as a PV or an EV, as shown in the following figure.

- If a PV or an EV is not added during node creation, or the capacity of the current volume is insufficient, you can add disks to the node on the ECS console and import the disks to the storage pool on the CCE console.
- Storage pools in striped mode do not support scale-out. Scaling out such a pool may generate fragmented space that cannot be used.
- Storage pools cannot be scaled in or deleted.
- If disks in a storage pool on a node are deleted, the storage pool will become abnormal.

You can select a write mode during the import.

Using a PV
A local PV can be dynamically provisioned through a PVC that uses the csi-local-topology StorageClass. The behavior of csi-local-topology differs greatly from that of other storage classes such as csi-disk:
- The node.kubernetes.io/local-storage-persistent label is automatically added to any node to which a local PV is added. If a pod uses a PVC of the csi-local-topology type, the scheduler schedules the pod only to nodes with the node.kubernetes.io/local-storage-persistent label, that is, nodes with a local PV installed.
- After a PVC is created, it stays in the Pending state and no PV is created immediately. Once a pod uses the PVC and the scheduler schedules the pod to a node, the everest add-on creates the logical volume required by the local PV and returns a PV. The PVC is then bound to the PV, and the pod starts after the volume is mounted.
- When creating a workload, you can choose to dynamically create a PVC. After the PVC is dynamically created, the scheduler schedules the pod to the node, and the everest add-on creates a logical volume and returns a PV. The PVC is bound to the PV. After the mounting is successful, the pod is started.
- When deleting a workload, you can choose not to delete the PVC in use. In this way, the PVC can be used again when the workload is created next time. Therefore, the pod will be scheduled to the node associated with the PVC.
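A pod consumes such a PVC through a standard persistentVolumeClaim volume. The following Deployment is a minimal sketch assuming a PVC named pvc-local-example (the name used in the PVC example in this section); the workload name, image, and mount path are illustrative, not from the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-local-pv            # Illustrative workload name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-local-pv
  template:
    metadata:
      labels:
        app: web-local-pv
    spec:
      containers:
        - name: container-1
          image: nginx:alpine
          volumeMounts:
            - name: local-data
              mountPath: /data  # Illustrative mount path
      volumes:
        - name: local-data
          persistentVolumeClaim:
            claimName: pvc-local-example  # PVC backed by csi-local-topology
```

Because a local PV is provisioned only after a consuming pod is scheduled, the PVC stays in the Pending state until this Deployment's pod is placed on a node that has a local PV storage pool.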
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-local-example
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce       # The value must be ReadWriteOnce.
  resources:
    requests:
      storage: 10Gi       # Size of the local PV.
  storageClassName: csi-local-topology    # The storage class type is csi-local-topology.
Using an EV
When creating a workload, set the medium of emptyDir to LocalVolume, indicating that an EV is used.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-1
          image: nginx:alpine
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
          volumeMounts:
            - name: vol-164284390917275733
              mountPath: /tmp
      imagePullSecrets:
        - name: default-secret
      volumes:
        - name: vol-164284390917275733
          emptyDir:
            medium: LocalVolume    # The EV is used.
            sizeLimit: 1Gi
Handling EV Exceptions
If a user manually detaches a disk from the ECS or manually runs the vgremove command on the node, the EV storage pool may become abnormal. To resolve this issue, make the node unschedulable by following the procedure described in Node Scheduling Settings and then reset the node.
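Making a node unschedulable corresponds to setting the unschedulable field on the Node object (the same effect as cordoning the node). The following sketch shows only the relevant field; the node name is a placeholder:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: <node-name>     # Placeholder: the node with the abnormal storage pool
spec:
  unschedulable: true   # Prevents new pods from being scheduled to this node
```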