Using a Local EV
Local Ephemeral Volumes (EVs) are stored in EV storage pools. Local EVs deliver better performance than the default storage medium of native emptyDir and support scale-out.
Prerequisites
- You have created a cluster and installed the CSI add-on (Everest) in the cluster.
- To perform the operations in this section using commands, ensure that kubectl has been configured. For details, see Connecting to a Cluster Using kubectl.
- To use a local EV, import a data disk of a node to the local EV storage pool. For details, see Importing an EV to a Storage Pool.
Notes and Constraints
- Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the Everest add-on version is 1.2.29 or later.
- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur.
- Do not mount the /var/lib/kubelet/pods/ directory of a node into a pod. Otherwise, pods that use such volumes may fail to be deleted.
Using the Console to Mount a Local EV
- Log in to the CCE console and click the cluster name to access the cluster console.
- Choose Workloads in the navigation pane. In the right pane, click the Deployments tab.
- Click Create Workload in the upper right corner. On the displayed page, click Data Storage in the Container Settings area and click Add Volume to select Local Ephemeral Volume (emptyDir).
- Mount and use storage volumes, as shown in Table 1. For details about other parameters, see Workloads.
Table 1 Parameters for mounting a local EV
- Capacity: Capacity of the requested storage volume.
- Mount Path: Container path to which the data volume is mounted, for example, /tmp. Do not mount the volume to a system directory such as / or /var/run, as this may cause container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, those files will be replaced, causing container startup or workload creation failures.
  NOTICE: If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged.
- Subpath: Subpath of the storage volume, used to mount a path in the storage volume to the container. In this way, different folders of the same storage volume can be used in a single pod. For example, tmp indicates that data in the container's mount path is stored in the tmp folder of the storage volume. If this parameter is left blank, the root path is used by default.
- Permission:
  - Read-only: You can only read the data in the mounted volume.
  - Read-write: You can modify the data in the mounted volume. Newly written data will not be migrated if the container is migrated, which may cause data loss.
- After the configuration, click Create Workload.
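The console settings in Table 1 map onto standard Kubernetes volumeMounts fields. As a sketch (the volume name vol-emptydir and the /tmp mount path are illustrative), the Subpath and Permission options correspond to subPath and readOnly:

```yaml
containers:
- name: container-1
  image: nginx:latest
  volumeMounts:
  - name: vol-emptydir   # Illustrative volume name; must match an entry in the volumes field
    mountPath: /tmp      # Mount Path set on the console
    subPath: tmp         # Subpath: container data is stored in the tmp folder of the volume
    readOnly: true       # Permission: Read-only; omit or set to false for Read-write
```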
Mounting a Local EV Through kubectl
- Use kubectl to access the cluster. For details, see Connecting to a Cluster Using kubectl.
- Create a file named nginx-emptydir.yaml and edit it.
vi nginx-emptydir.yaml
Content of the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-emptydir
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-emptydir
  template:
    metadata:
      labels:
        app: nginx-emptydir
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        volumeMounts:
        - name: vol-emptydir    # Volume name, which must be the same as the volume name in the volumes field
          mountPath: /tmp       # Location where the emptyDir volume is mounted
      imagePullSecrets:
      - name: default-secret
      volumes:
      - name: vol-emptydir      # Volume name, which can be customized
        emptyDir:
          medium: LocalVolume   # If the disk medium of emptyDir is set to LocalVolume, the local EV is used
          sizeLimit: 1Gi        # Volume capacity
- Create a workload.
kubectl apply -f nginx-emptydir.yaml
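Because an emptyDir volume is shared by all containers in a pod, the same local EV can be mounted into multiple containers at different paths. A minimal sketch (the container names writer and reader and the mount paths are illustrative):

```yaml
spec:
  containers:
  - name: writer              # Illustrative container name
    image: nginx:latest
    volumeMounts:
    - name: vol-emptydir
      mountPath: /data        # This container's view of the shared volume
  - name: reader              # Illustrative container name
    image: nginx:latest
    volumeMounts:
    - name: vol-emptydir
      mountPath: /shared      # Sees the same files under a different path
      readOnly: true
  volumes:
  - name: vol-emptydir
    emptyDir:
      medium: LocalVolume     # Use the local EV as the disk medium
      sizeLimit: 1Gi
```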
Handling Local EV Exceptions
If a disk is manually detached from the ECS or the vgremove command is manually run, the EV storage pool may malfunction. To resolve this issue, make the node unschedulable by following the procedure described in Configuring a Node Scheduling Policy in One-Click Mode and then reset the node.