Using SFS Turbo
SFS Turbo is a shared file system with high availability and durability. It is suitable for applications that involve massive numbers of small files and require low latency and high IOPS. This section describes how to use an existing SFS Turbo file system to create PVs and PVCs for data persistence and sharing in workloads.
Prerequisites
- You have created a cluster and installed the CCE Container Storage (Everest) add-on in the cluster.
- You have created an available SFS Turbo file system, and the SFS Turbo file system and the cluster are in the same VPC.
Constraints
- Multiple PVs can use the same SFS Turbo file system with the following restrictions:
- Do not mount multiple PVCs or PVs that use the same underlying SFS Turbo volume to a single pod. Doing so causes pod startup failures, because not all of the PVCs can be mounted when they share the same volumeHandle value.
- Set the persistentVolumeReclaimPolicy parameter of these PVs to Retain. With any other value, deleting one PV may also delete the underlying SFS Turbo file system, and the other PVs associated with that file system will malfunction. (A sketch of changing this policy on an existing PV follows this list.)
- When the same file system is used by multiple workloads, enable isolation and protection for ReadWriteMany access at the application layer to prevent data from being overwritten or lost.
- Yearly/monthly SFS Turbo resources are not reclaimed when the cluster or PVC is deleted. Reclaim these resources on the SFS Turbo console.
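If a PV that references the shared file system was created with a different reclaim policy, it can be switched to Retain with kubectl patch. This is a minimal sketch; pv-sfsturbo is the example PV name used later in this section.
kubectl patch pv pv-sfsturbo -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'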
Using an Existing SFS Turbo File System Through kubectl
- Use kubectl to access the cluster. For details, see Accessing a Cluster Using kubectl.
- Create a PV.
- Create the pv-sfsturbo.yaml file.
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: everest-csi-provisioner
  name: pv-sfsturbo    # PV name
spec:
  accessModes:
  - ReadWriteMany      # Access mode. The value must be ReadWriteMany for SFS Turbo.
  capacity:
    storage: 500Gi     # SFS Turbo volume capacity
  csi:
    driver: sfsturbo.csi.everest.io
    fsType: nfs
    volumeHandle: <your_volume_id>    # SFS Turbo volume ID
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
      everest.io/share-export-location: {your_location}    # Shared path on the SFS Turbo volume
  persistentVolumeReclaimPolicy: Retain
  storageClassName: csi-sfsturbo
- Run the following command to create a PV:
kubectl apply -f pv-sfsturbo.yaml
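(Optional) Check that the PV has been created. This is a quick verification sketch, assuming the PV name pv-sfsturbo from the example above; the PV status is Available until a PVC binds to it.
kubectl get pv pv-sfsturbo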
- Create a PVC.
- Create the pvc-sfsturbo.yaml file.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfsturbo
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner
spec:
  accessModes:
  - ReadWriteMany      # The value must be ReadWriteMany for SFS Turbo.
  resources:
    requests:
      storage: 500Gi   # SFS Turbo volume capacity
  storageClassName: csi-sfsturbo    # StorageClass name of the SFS Turbo file system, which must be the same as that of the PV
  volumeName: pv-sfsturbo           # PV name
- Run the following command to create a PVC:
kubectl apply -f pvc-sfsturbo.yaml
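(Optional) Check that the PVC has been bound to the PV. This is a quick verification sketch, assuming the names from the examples above; the STATUS column should show Bound.
kubectl get pvc pvc-sfsturbo -n default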
- Create a workload.
- Create a file named web-demo.yaml. In this example, the SFS Turbo volume is mounted to the /data path.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        volumeMounts:
        - name: pvc-sfsturbo-volume    # Volume name. This name must be the same as that in the volumes field.
          mountPath: /data             # Path that the storage volume is mounted to
      imagePullSecrets:
      - name: default-secret
      volumes:
      - name: pvc-sfsturbo-volume      # Volume name. You can change it as needed.
        persistentVolumeClaim:
          claimName: pvc-sfsturbo      # Name of the created PVC
- Run the following command to create a workload with the SFS Turbo volume mounted:
kubectl apply -f web-demo.yaml
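(Optional) Verify that the volume is shared across the replicas by writing a file from one pod and reading it from the other. This is a sketch; replace the pod name placeholders with the names returned by the first command.
kubectl get pod -n default -l app=web-demo
kubectl exec <pod_name_1> -n default -- sh -c "echo hello > /data/test.txt"
kubectl exec <pod_name_2> -n default -- cat /data/test.txt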