Creating a Volume from an Existing General-Purpose File System
General Purpose File System provides high-performance network-attached storage (NAS) file systems that can be shared. It is a good choice for large-capacity expansion and cost-sensitive services. This section describes how to use an existing general-purpose file system to create a PV and PVC for data persistence and sharing in the workload.
Prerequisites
- You have used ccictl to access CCI 2.0. For details, see ccictl Configuration Guide.
- You have created a general-purpose file system that is in the same VPC as the container.
- You have authorized the VPCs to access the general-purpose file system. For details, see Configuring Multi-VPC Access.
- Before using a general-purpose file system, you have created a VPC endpoint in the VPC of the container so that the container can access the file system. For details, see Configuring a VPC Endpoint.
Using an Existing General-Purpose File System Through ccictl
- Use ccictl to access the cluster.
- Create a PV.
- Create the pv-sfs.yaml file.
The following is an example:
apiVersion: cci/v2
kind: PersistentVolume
metadata:
  name: pv-sfs                               # PV name
spec:
  accessModes:
  - ReadWriteMany                            # Access mode. The value must be ReadWriteMany for general-purpose file system volumes.
  capacity:
    storage: 1Gi                             # Storage capacity. This parameter is only for verification. It must not be empty or 0, and the specified size will not take effect.
  csi:
    driver: nas.csi.everest.io               # Storage driver used for the mounting
    fsType: nfs
    volumeHandle: <your_volume_name>         # Name of the general-purpose file system
    volumeAttributes:
      everest.io/share-export-location: <your_location>   # Shared path of the general-purpose file system
      everest.io/sfs-version: sfs3.0         # sfs3.0 indicates that General Purpose File System is used.
  persistentVolumeReclaimPolicy: Retain      # Reclaim policy
  storageClassName: csi-sfs                  # Storage class name. csi-sfs indicates General Purpose File System.
  mountOptions: []                           # Mount options
Table 1 Key parameters

volumeHandle
  Mandatory: Yes
  Type: String
  Description: Name of the general-purpose file system.
  Constraint: The value must be the name of an existing general-purpose file system.

everest.io/share-export-location
  Mandatory: Yes
  Type: String
  Description: Shared path of the general-purpose file system.
  Constraint: The shared path must be in the following format:
  {your_sfs30_name}.sfs3.{region}.myhuaweicloud.com:/{your_sfs30_name}

mountOptions
  Mandatory: No
  Type: List
  Description: Mount options of the general-purpose file system volume. If this parameter is not specified, the following configuration is used by default. For details, see Configuring Mount Options for a File System Volume.
  mountOptions:
  - vers=3
  - timeo=600
  - nolock
  - hard

persistentVolumeReclaimPolicy
  Mandatory: Yes
  Type: String
  Description: PV reclaim policy.
  Constraint: Only the Retain policy is supported. Retain: When a PVC is deleted, both the PV and the underlying storage are retained. You need to manually delete these resources. After the PVC is deleted, the PV is in the Released state and cannot be bound to a PVC again.

storage
  Mandatory: Yes
  Type: String
  Description: Storage capacity, in Gi.
  Constraint: For general-purpose file systems, this parameter is used only for verification (it cannot be empty or 0). The value is fixed at 1, and any other size you specify does not take effect.

storageClassName
  Mandatory: Yes
  Type: String
  Description: Storage class name.
  Constraint: The value is fixed at csi-sfs, indicating General Purpose File System.
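For reference, a filled-in csi block might look like the following. The file system name my-sfs and the region cn-north-4 are hypothetical values used only to illustrate the share-export-location format above; replace them with your own file system name and region.

  csi:
    driver: nas.csi.everest.io
    fsType: nfs
    volumeHandle: my-sfs                     # Hypothetical file system name
    volumeAttributes:
      everest.io/share-export-location: my-sfs.sfs3.cn-north-4.myhuaweicloud.com:/my-sfs   # {your_sfs30_name}.sfs3.{region}.myhuaweicloud.com:/{your_sfs30_name}
      everest.io/sfs-version: sfs3.0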
- Create a PV.
ccictl apply -f pv-sfs.yaml
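Optionally, confirm that the PV has been created. This check assumes that ccictl supports the kubectl-style get command, as it does for pods later in this section:

ccictl get pv pv-sfs

If the command is supported, the output should show the PV with the csi-sfs storage class and the Retain reclaim policy.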
- Create a PVC.
- Create the pvc-sfs.yaml file.
apiVersion: cci/v2
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfs
  namespace: test-sfs-v1
spec:
  accessModes:
  - ReadWriteMany                            # Access mode. The value must be ReadWriteMany for general-purpose file system volumes.
  resources:
    requests:
      storage: 1Gi                           # Capacity of the general-purpose file system
  storageClassName: csi-sfs                  # Storage class name, which must be the same as that of the PV
  volumeName: pv-sfs                         # PV name
Table 2 Key parameters

accessModes
  Mandatory: Yes
  Type: List
  Description: Storage access mode.
  Constraint: The value must be ReadWriteMany for general-purpose file system volumes.

storage
  Mandatory: Yes
  Type: String
  Description: PVC capacity, in Gi.
  Constraint: The value must be the same as the storage size of the existing PV.

storageClassName
  Mandatory: Yes
  Type: String
  Description: Storage class name.
  Constraint: The value must be the same as the storage class name of the PV in 2.a. It indicates that General Purpose File System is used.

volumeName
  Mandatory: Yes
  Type: String
  Description: PV name.
  Constraint: The value must be the same as the PV name in 2.a.
- Create a PVC.
ccictl apply -f pvc-sfs.yaml
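Optionally, confirm that the PVC has been bound to the PV. As above, this assumes ccictl supports the kubectl-style get command:

ccictl get pvc pvc-sfs -n test-sfs-v1

If binding succeeded, the PVC status should be Bound and the referenced volume should be pv-sfs.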
- Create an application.
- Create a file named web-demo.yaml. In this example, the file system volume is mounted to the /data path.
apiVersion: cci/v2
kind: Deployment
metadata:
  name: web-demo
  namespace: test-sfs-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: pvc-sfs-volume               # Volume name, which must be the same as the volume name in the volumes field
          mountPath: /data                   # Location where the storage volume is mounted
      imagePullSecrets:
      - name: imagepull-secret
      volumes:
      - name: pvc-sfs-volume                 # Volume name, which can be changed as needed
        persistentVolumeClaim:
          claimName: pvc-sfs                 # PVC name
- Create a workload to which the file system volume is mounted.
ccictl apply -f web-demo.yaml
After the workload is created, data in the container mount directory will be persistently stored. Verify data persistence by referring to Verifying Data Persistence and Sharing.
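Before going through the full verification below, you can optionally confirm that the file system is mounted in a running pod. The pod name below is a placeholder to be replaced with one of your own pods, and the check assumes that the df command is available in the container image (it is in the standard nginx image):

ccictl exec <pod_name> -n test-sfs-v1 -- df -h /data

The output should show the share-export-location path of the general-purpose file system mounted on /data.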
Verifying Data Persistence and Sharing
- View the deployed application and files.
- View the created pods.
ccictl get pod -n test-sfs-v1 | grep web-demo
The expected output is as follows:
web-demo-68f8fdfd98-6v96b   1/1     Running   0          66s
web-demo-68f8fdfd98-g5jmc   1/1     Running   0          66s
- Run the following commands in sequence to check the files in the /data path of the pods:
ccictl exec web-demo-68f8fdfd98-6v96b -n test-sfs-v1 -- ls /data
ccictl exec web-demo-68f8fdfd98-g5jmc -n test-sfs-v1 -- ls /data
If neither command returns any output, there are no files in the /data path of either pod.
- Create a file named static in the /data path.
ccictl exec web-demo-68f8fdfd98-6v96b -n test-sfs-v1 -- touch /data/static
- View the file in the /data path.
ccictl exec web-demo-68f8fdfd98-6v96b -n test-sfs-v1 -- ls /data
The expected output is as follows:
static
- Verify data persistence.
- Delete the pod named web-demo-68f8fdfd98-6v96b.
ccictl delete pod web-demo-68f8fdfd98-6v96b -n test-sfs-v1
The expected output is as follows:
pod "web-demo-68f8fdfd98-6v96b" deleted
After the deletion, the Deployment controller automatically creates a replica.
- View the created pods.
ccictl get pod -n test-sfs-v1 | grep web-demo
The expected output is as follows, in which web-demo-68f8fdfd98-z2khr is the newly created pod:
web-demo-68f8fdfd98-g5jmc   1/1     Running   0          5m19s
web-demo-68f8fdfd98-z2khr   1/1     Running   0          111s
- Check whether the static file still exists in the /data path of the new pod:
ccictl exec web-demo-68f8fdfd98-z2khr -n test-sfs-v1 -- ls /data
The expected output is as follows:
static
The static file is retained, indicating that the data can be stored persistently.
- Verify data sharing.
- View the created pods.
ccictl get pod -n test-sfs-v1 | grep web-demo
The expected output is as follows:
web-demo-68f8fdfd98-g5jmc   1/1     Running   0          8m21s
web-demo-68f8fdfd98-z2khr   1/1     Running   0          4m53s
- Create a file named share in the /data path of either pod. In this example, select the pod named web-demo-68f8fdfd98-z2khr.
ccictl exec web-demo-68f8fdfd98-z2khr -n test-sfs-v1 -- touch /data/share
Check the files in the /data path of the pod.
ccictl exec web-demo-68f8fdfd98-z2khr -n test-sfs-v1 -- ls /data
The expected output is as follows:
share static
- Check whether the share file exists in the /data path of the other pod (web-demo-68f8fdfd98-g5jmc) to verify data sharing.
ccictl exec web-demo-68f8fdfd98-g5jmc -n test-sfs-v1 -- ls /data
The expected output is as follows:
share static
If a file created in the /data path of one pod also appears in the /data path of the other pod, the two pods share the same volume.
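If you no longer need the resources created in this example, you can delete them with the kubectl-style delete commands (assumed to be supported by ccictl, as shown for pods above). Because the PV uses the Retain policy, the underlying general-purpose file system is not deleted and must be removed separately if required.

ccictl delete deployment web-demo -n test-sfs-v1
ccictl delete pvc pvc-sfs -n test-sfs-v1
ccictl delete pv pv-sfs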