Creating a Volume from an Existing Parallel File System
This section describes how to use an existing parallel file system to create a PV and PVC for data persistence and sharing in the workload.
Prerequisites
- You have set the access keys (Setting Access Keys (AK/SK) for Mounting a Parallel File System Volume).
- If you want to create the resources in this section by running commands, you need to use ccictl to connect to CCI 2.0. For details, see the ccictl Configuration Guide.
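The AK/SK secret itself must be created as described in the linked guide, which is authoritative. Purely as a hedged sketch of its general shape (the apiVersion, type, and data field names below are assumptions and may differ from what the guide requires), such a secret typically stores the base64-encoded AK and SK:
apiVersion: cci/v2                  # Assumption: same API version as the other manifests in this section.
kind: Secret
metadata:
  name: <your_secret_name>          # Secret name that the PV and PVC below reference.
  namespace: <your_namespace>       # Namespace of the secret.
type: Opaque                        # Assumption: the linked guide may require a specific type or label.
data:
  access.key: <base64-encoded_AK>   # Assumption: field names follow the everest CSI convention.
  secret.key: <base64-encoded_SK>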
Constraints
- When parallel file systems are used, the group and permission of the mount point cannot be modified.
- Each time a volume created from a parallel file system is mounted to a workload through a PVC, a resident process runs in the backend for that volume. When a workload uses too many parallel file system volumes or reads and writes a large amount of data on them, these resident processes consume a significant amount of memory. To ensure that the workload can run normally, the number of parallel file system volumes it uses is limited by its requested memory. For example, if the workload requests 4 GiB of memory, it can use no more than 4 parallel file system volumes (see the sketch after this list).
- If parallel file systems are used, read-only mounting is not supported. For details about how to configure permissions for parallel file systems, see Permissions Configuration.
- Multiple PVs can use the same parallel file system if the following requirements are met:
- Multiple PVCs or PVs that use the same parallel file system cannot be mounted to a single pod. Otherwise, the pod fails to start because the PVCs share the same volumeHandle value and cannot all be mounted.
- The persistentVolumeReclaimPolicy parameter in the PVs must be set to Retain. If any other value is used, deleting one PV may also delete the associated parallel file system, and the other PVs associated with that parallel file system will malfunction.
- If the same parallel file system is used by multiple PVs, you must maintain data consistency. Because the access mode is ReadWriteMany, enable isolation and protection at the application layer to prevent multiple clients from writing to the same file, which would otherwise cause data to be overwritten or lost.
- CCI 2.0 supports parallel file system volumes in CN-Hong Kong.
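As a rough illustration of the memory constraint above (a sketch only; the volume names and mount paths are hypothetical), a container that requests 4 GiB of memory could mount at most four parallel file system volumes:
containers:
  - name: container-1
    image: nginx:latest
    resources:
      requests:
        memory: 4Gi            # 4 GiB requested, so at most 4 parallel file system volumes.
    volumeMounts:              # Each volume below would be backed by a parallel file system PVC.
      - name: pfs-volume-1
        mountPath: /data1
      - name: pfs-volume-2
        mountPath: /data2
      - name: pfs-volume-3
        mountPath: /data3
      - name: pfs-volume-4
        mountPath: /data4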
Using ccictl
- Use ccictl to connect to CCI 2.0.
- Create a PV.
- Create the pv-obs.yaml file.
apiVersion: cci/v2
kind: PersistentVolume
metadata:
  name: pv-obs                              # PV name.
spec:
  accessModes:
    - ReadWriteMany                         # The access mode must be ReadWriteMany for parallel file system volumes.
  capacity:
    storage: 1Gi                            # Storage capacity. This parameter is only for verification. Its value cannot be empty or 0, and any value you set does not take effect for parallel file systems.
  csi:
    driver: obs.csi.everest.io              # Storage driver that the volume depends on.
    fsType: obsfs                           # Instance type.
    volumeHandle: <your_file_system_name>   # Name of the parallel file system.
    nodePublishSecretRef:                   # Secret for the parallel file system volume.
      name: <your_secret_name>              # Secret name.
      namespace: <your_namespace>           # Namespace of the secret.
  persistentVolumeReclaimPolicy: Retain     # Reclaim policy.
  storageClassName: csi-obs                 # Storage class name.
  mountOptions: []                          # Mount options.
Table 1 Key parameters

accessModes (Mandatory: Yes; Type: List)
Description: Storage access mode.
Constraint: The value must be ReadWriteMany for parallel file system volumes.

driver (Mandatory: Yes; Type: String)
Description: Storage driver that the volume depends on.
Constraint: The value must be obs.csi.everest.io for parallel file system volumes.

fsType (Mandatory: Yes; Type: String)
Description: Storage instance type.
Constraint: The value must be obsfs, which indicates parallel file systems.

volumeHandle (Mandatory: Yes; Type: String)
Description: Name of the parallel file system.
Constraint: The value must be the name of an existing parallel file system.

nodePublishSecretRef (Mandatory: Yes; Type: Object)
Description: Access keys (AK/SK) that can be used to mount the parallel file system volume. You can create a secret using the AK/SK and use this secret for the PV. For details, see Setting Access Keys (AK/SK) for Mounting a Parallel File System Volume.
The following is an example:
nodePublishSecretRef:
  name: secret-demo
  namespace: default

mountOptions (Mandatory: No; Type: List)
Description: Mount options. For details, see Setting Mount Options for a Parallel File System Volume.

persistentVolumeReclaimPolicy (Mandatory: Yes; Type: String)
Description: PV reclaim policy.
Constraint: Only the Retain policy is supported.
Retain: When a PVC is deleted, both the PV and the underlying storage are retained. You need to manually delete these resources. After the PVC is deleted, the PV is in the Released state and cannot be bound to a PVC again.

storage (Mandatory: Yes; Type: String)
Description: Storage capacity, in Gi.
Constraint: For parallel file systems, this parameter is only for verification (it cannot be empty or 0). Its value is fixed at 1, and any value you set does not take effect for parallel file systems.

storageClassName (Mandatory: Yes; Type: String)
Description: Storage class name of the parallel file system volume.
Constraint: The value is csi-obs for parallel file system volumes.
- Create a PV.
ccictl apply -f pv-obs.yaml
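You can optionally confirm that the PV has been created (this assumes ccictl supports the standard get verb, which is also used in the verification steps later in this section):
ccictl get pv pv-obs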
- Create a PVC.
- Create the pvc-obs.yaml file.
apiVersion: cci/v2
kind: PersistentVolumeClaim
metadata:
  name: pvc-obs
  namespace: test-obs-v1
  annotations:
    csi.storage.k8s.io/fstype: obsfs
    csi.storage.k8s.io/node-publish-secret-name: <your_secret_name>        # Secret name.
    csi.storage.k8s.io/node-publish-secret-namespace: <your_namespace>     # Namespace of the secret.
spec:
  accessModes:
    - ReadWriteMany              # The access mode of parallel file system volumes must be ReadWriteMany.
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-obs      # Storage class name, which must be the same as that of the PV.
  volumeName: pv-obs             # Name of the PV.
Table 2 Key parameters

fsType (Mandatory: Yes; Type: String)
Description: Storage instance type.
Constraint: The value must be obsfs, which indicates parallel file systems.

csi.storage.k8s.io/node-publish-secret-name (Mandatory: Yes; Type: String)
Description: Name of the secret specified for the PV.

csi.storage.k8s.io/node-publish-secret-namespace (Mandatory: Yes; Type: String)
Description: Namespace of the secret specified for the PV.

storage (Mandatory: Yes; Type: String)
Description: PVC capacity, in Gi.
Constraints:
- For parallel file systems, this parameter is only for verification (it cannot be empty or 0). Its value is fixed at 1, and any value you set does not take effect for parallel file systems.
- The value must be the same as the capacity set for the PV in Table 1.

storageClassName (Mandatory: Yes; Type: String)
Description: Storage class name.
Constraint: The value must be the same as the storage class of the PV in Table 1. The storage class name of parallel file system volumes is csi-obs.

volumeName (Mandatory: Yes; Type: String)
Description: PV name.
Constraint: The value must be the same as the PV name in Table 1.
- Create a PVC.
ccictl apply -f pvc-obs.yaml
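You can optionally confirm that the PVC has been created and bound to the PV (again assuming the standard get verb used elsewhere in this section):
ccictl get pvc pvc-obs -n test-obs-v1
In a typical Kubernetes-style binding flow, the PVC should reach the Bound state before it is mounted to a workload.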
- Create an application.
- Create a file named web-demo.yaml. In this example, the parallel file system volume is mounted to the /data path.
apiVersion: cci/v2
kind: Deployment
metadata:
  name: web-demo
  namespace: test-obs-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: pvc-obs-volume        # Volume name, which must be the same as the volume name in the volumes field.
              mountPath: /data            # Location where the storage volume is mounted.
      imagePullSecrets:
        - name: imagepull-secret
      volumes:
        - name: pvc-obs-volume            # Volume name, which can be changed as needed.
          persistentVolumeClaim:
            claimName: pvc-obs            # PVC name.
- Create a workload that the parallel file system volume is mounted to.
ccictl apply -f web-demo.yaml
After the workload is created, you can try to verify data persistence and sharing. For details, see Verifying Data Persistence and Sharing.
Verifying Data Persistence and Sharing
- View the deployed application and files.
- Run the following command to view the created pods:
ccictl get pod -n test-obs-v1 | grep web-demo
The expected output is as follows:
web-demo-7864446874-6d4lp   1/1     Running   0          52s
web-demo-7864446874-xx6qh   1/1     Running   0          52s
- Run the following commands in sequence to check the files in the /data path of the pods:
ccictl exec web-demo-7864446874-6d4lp -n test-obs-v1 -- ls /data
ccictl exec web-demo-7864446874-xx6qh -n test-obs-v1 -- ls /data
If neither command returns any output, there are no files in the /data path yet.
- Create a file named static in the /data path.
ccictl exec web-demo-7864446874-6d4lp -n test-obs-v1 -- touch /data/static
- View the files in the /data path.
ccictl exec web-demo-7864446874-6d4lp -n test-obs-v1 -- ls /data
The expected output is as follows:
static
- Verify data persistence.
- Delete the pod named web-demo-7864446874-6d4lp.
ccictl delete pod web-demo-7864446874-6d4lp -n test-obs-v1
The expected output is as follows:
pod "web-demo-7864446874-6d4lp" deleted
After the deletion, the Deployment controller automatically creates a replica.
- View the created pod.
ccictl get pod -n test-obs-v1 | grep web-demo
In the command output below, web-demo-7864446874-84slz is the newly created pod.
web-demo-7864446874-84slz   1/1     Running   0          110s
web-demo-7864446874-xx6qh   1/1     Running   0          8m47s
- Check whether the static file still exists in the /data path of the new pod:
ccictl exec web-demo-7864446874-84slz -n test-obs-v1 -- ls /data
The expected output is as follows:
static
The static file is retained, indicating that the data can be stored persistently.
- Verify data sharing.
- View the created pods.
ccictl get pod -n test-obs-v1 | grep web-demo
The expected output is as follows:
web-demo-7864446874-84slz   1/1     Running   0          6m3s
web-demo-7864446874-xx6qh   1/1     Running   0          13m
- Create a file named share in the /data path of either pod. In this example, select the pod named web-demo-7864446874-84slz.
ccictl exec web-demo-7864446874-84slz -n test-obs-v1 -- touch /data/share
Check the files in the /data path of the pod.
ccictl exec web-demo-7864446874-84slz -n test-obs-v1 -- ls /data
The expected output is as follows:
share static
- Check whether the share file also exists in the /data path of the other pod (web-demo-7864446874-xx6qh) to verify data sharing.
ccictl exec web-demo-7864446874-xx6qh -n test-obs-v1 -- ls /data
The expected output is as follows:
share static
If a file created in the /data path of one pod also appears in the /data path of the other pod, the two pods share the same volume.
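If you want to remove the example resources after the verification, a possible cleanup sequence is shown below (a sketch that assumes ccictl supports the standard delete -f usage; because the PV uses the Retain policy, the parallel file system and its data are not deleted):
ccictl delete -f web-demo.yaml
ccictl delete -f pvc-obs.yaml
ccictl delete -f pv-obs.yaml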