
Using an SFS File System Through a Dynamic PV

This section describes how to use storage classes to dynamically create PVs and PVCs and implement data persistence and sharing in workloads.
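Dynamic provisioning requires that the SFS storage class (csi-nas) exist in the cluster. As a quick preliminary check, assuming kubectl access to the cluster is already configured, you can list the available storage classes:

  kubectl get storageclass

csi-nas should appear in the output if the cluster supports dynamic provisioning of SFS volumes.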

(kubectl) Automatically Creating an SFS File System

  1. Use kubectl to connect to the cluster.
  2. Use StorageClass to dynamically create a PVC and PV.

    1. Create the pvc-sfs-auto.yaml file.
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-sfs-auto
        namespace: default
        annotations: 
          everest.io/crypt-key-id: <your_key_id>      # (Optional) ID of the key for encrypting file systems
          everest.io/crypt-alias: sfs/default         # (Optional) Key name. Mandatory for encrypting volumes
          everest.io/crypt-domain-id: <your_domain_id>   # (Optional) ID of the tenant to which an encrypted volume belongs. Mandatory for encrypting volumes
      spec:
        accessModes:
          - ReadWriteMany             # The value must be ReadWriteMany for SFS.
        resources:
          requests:
            storage: 1Gi             # SFS volume capacity.
        storageClassName: csi-nas    # The storage class type is SFS.
      Table 1 Key parameters

      Parameter: storage
      Mandatory: Yes
      Description: Requested capacity in the PVC, in Gi. For SFS, this field is used only for verification (it cannot be empty or 0). Its value is fixed at 1, and any value you set does not take effect for SFS file systems.

      Parameter: everest.io/crypt-key-id
      Mandatory: No
      Description: Mandatory when an SFS file system is encrypted. Enter the ID of the encryption key selected during SFS file system creation. You can use a custom key or the default key named sfs/default. To obtain a key ID, log in to the DEW console, locate the key used for encryption, and copy the key ID.

      Parameter: everest.io/crypt-alias
      Mandatory: No
      Description: Key name, which is mandatory when you create an encrypted volume. To obtain a key name, log in to the DEW console, locate the key used for encryption, and copy the key name.

      Parameter: everest.io/crypt-domain-id
      Mandatory: No
      Description: ID of the tenant to which the encrypted volume belongs, which is mandatory when you create an encrypted volume. To obtain a tenant ID, hover the cursor over the username in the upper right corner of the ECS console, choose My Credentials, and copy the account ID.

    2. Run the following command to create a PVC:
      kubectl apply -f pvc-sfs-auto.yaml
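      After the PVC is created, you can confirm that it has been bound and that a PV was dynamically provisioned for it. A minimal check (the PV name is generated automatically and will differ in your cluster):
      kubectl get pvc pvc-sfs-auto
      kubectl get pv
      If provisioning succeeded, the PVC status is Bound and a PV with the csi-nas storage class appears in the list.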

  3. Create an application.

    1. Create a file named web-demo.yaml. In this example, the SFS volume is mounted to the /data path.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web-demo
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: web-demo
        template:
          metadata:
            labels:
              app: web-demo
          spec:
            containers:
            - name: container-1
              image: nginx:latest
              volumeMounts:
              - name: pvc-sfs-volume    # Volume name, which must be the same as the volume name in the volumes field.
                mountPath: /data  # Location where the storage volume is mounted.
            imagePullSecrets:
              - name: default-secret
            volumes:
              - name: pvc-sfs-volume    # Volume name, which can be customized.
                persistentVolumeClaim:
                  claimName: pvc-sfs-auto    # Name of the created PVC.
    2. Run the following command to create an application to which the SFS volume is mounted:
      kubectl apply -f web-demo.yaml

      After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
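      Before running the verification steps, you can also check that both replicas are running and that the SFS volume is mounted at /data. A quick check (the pod name below is a placeholder; the mount check assumes the container image includes the df command):
      kubectl get pod -l app=web-demo
      kubectl exec <pod-name> -- df -h /data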

Verifying Data Persistence and Sharing

  1. View the deployed applications and files.

    1. Run the following command to view the created pod:
      kubectl get pod | grep web-demo
      Expected output:
      web-demo-846b489584-mjhm9   1/1     Running   0             46s
      web-demo-846b489584-wvv5s   1/1     Running   0             46s
    2. Run the following commands in sequence to view the files in the /data path of the pods:
      kubectl exec web-demo-846b489584-mjhm9 -- ls /data
      kubectl exec web-demo-846b489584-wvv5s -- ls /data

      If neither command returns any output, there are no files in the /data path of either pod.

  2. Run the following command to create a file named static in the /data path:

    kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static

  3. Run the following command to view the files in the /data path:

    kubectl exec web-demo-846b489584-mjhm9 -- ls /data

    Expected output:

    static

  4. Verify data persistence.

    1. Run the following command to delete the pod named web-demo-846b489584-mjhm9:
      kubectl delete pod web-demo-846b489584-mjhm9

      Expected output:

      pod "web-demo-846b489584-mjhm9" deleted

      After the deletion, the Deployment controller automatically creates a replica.

    2. Run the following command to view the created pod:
      kubectl get pod | grep web-demo
      The expected output is as follows, in which web-demo-846b489584-d4d4j is the newly created pod:
      web-demo-846b489584-d4d4j   1/1     Running   0             110s
      web-demo-846b489584-wvv5s   1/1     Running   0             7m50s
    3. Run the following command to check whether the static file created earlier still exists in the /data path of the new pod:
      kubectl exec web-demo-846b489584-d4d4j -- ls /data

      Expected output:

      static

      If the static file still exists, the data is persistently stored.

  5. Verify data sharing.

    1. Run the following command to view the created pod:
      kubectl get pod | grep web-demo
      Expected output:
      web-demo-846b489584-d4d4j   1/1     Running   0             7m
      web-demo-846b489584-wvv5s   1/1     Running   0             13m
    2. Run the following command to create a file named share in the /data path of either pod. In this example, the pod web-demo-846b489584-d4d4j is used.
      kubectl exec web-demo-846b489584-d4d4j -- touch /data/share
      Check the files in the /data path of the pod.
      kubectl exec web-demo-846b489584-d4d4j -- ls /data

      Expected output:

      share
      static
    3. Check whether the share file also exists in the /data path of the other pod (web-demo-846b489584-wvv5s) to verify data sharing.
      kubectl exec web-demo-846b489584-wvv5s -- ls /data

      Expected output:

      share
      static

      If a file created in the /data path of one pod is also visible in the /data path of the other pod, the two pods share the same volume.
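      You can also confirm from the PVC that both pods use the same dynamically created PV. A minimal check (the PV name is generated automatically, so substitute the value returned by the first command):
      kubectl get pvc pvc-sfs-auto -o jsonpath='{.spec.volumeName}'
      kubectl describe pv <pv-name>
      The describe output shows the SFS volume and the claim (default/pvc-sfs-auto) through which both pods mount it.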

Related Operations

You can also perform the operations listed in Table 2.
Table 2 Related operations

Operation: Viewing events
Description: You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV.
Procedure:

  1. Choose Storage from the navigation pane, and click the PersistentVolumeClaims (PVCs) or PersistentVolumes (PVs) tab.
  2. Click View Events in the Operation column of the target PVC or PV to view events generated within one hour (event data is retained for one hour).

Operation: Viewing a YAML file
Description: You can view, copy, and download the YAML files of a PVC or PV.
Procedure:

  1. Choose Storage from the navigation pane, and click the PersistentVolumeClaims (PVCs) or PersistentVolumes (PVs) tab.
  2. Click View YAML in the Operation column of the target PVC or PV to view or download the YAML file.
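
If you prefer working with kubectl instead of the console, roughly equivalent checks are available on the command line (pvc-sfs-auto is the PVC used in this example; replace <pv-name> with the name of the dynamically created PV):

  kubectl describe pvc pvc-sfs-auto       # recent events for the PVC are listed at the end of the output
  kubectl get pvc pvc-sfs-auto -o yaml    # view the PVC YAML
  kubectl get pv <pv-name> -o yaml        # view the YAML of the dynamically created PV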