Using an SFS File System Through a Dynamic PV

Updated on 2025-02-27 GMT+08:00

You can mount the PVs created from general-purpose file systems (formerly SFS 3.0) to pods in CCE Autopilot clusters for file storage. This section describes how to use storage classes to dynamically create PVs and PVCs for data persistence and sharing in workloads.

Prerequisites

Constraints

  • SFS volumes are not supported in all regions. Check the console, or see Function Overview, for the regions where SFS volumes are available.
  • If a general-purpose file system (formerly SFS 3.0) is used, the owner group and permissions of the mount point cannot be modified.
  • If a general-purpose file system (formerly SFS 3.0) is used, there may be a delay when PVCs or PVs are created or deleted. Billing is based on the time when the file system is actually created or deleted on the SFS console.
  • If the reclaim policy of a volume created from a general-purpose file system (formerly SFS 3.0) is set to Delete, the PV and PVC can be deleted only after all files in the file system have been manually deleted.

Using the Console

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. Dynamically create a PVC and PV.

    1. In the navigation pane on the left, choose Storage. Then click the PVCs tab. In the upper right corner, click Create PVC. In the displayed dialog box, configure the parameters.

      • PVC Type: In this example, select SFS.
      • PVC Name: Enter the PVC name, which must be unique within its namespace.
      • Creation Method: If no underlying storage is available, select Dynamically provision to create a PVC, a PV, and the underlying storage on the console in cascading mode. If underlying storage is available, select Use existing or Create new; for details about static provisioning, see Using an Existing File System Through a Static PV. In this example, select Dynamically provision.
      • Storage Classes: The storage class of SFS volumes is csi-sfs.
      • Access Mode: SFS volumes support only ReadWriteMany, which means a volume can be mounted to multiple pods in read/write mode. For details, see Volume Access Modes.

    2. Click Create to create a PVC and a PV.

      In the navigation pane on the left, choose Storage. View the created PVC and PV on the PVCs and PVs tabs, respectively.

  3. Create a workload.

    1. In the navigation pane on the left, choose Workloads. Then click the Deployments tab.
    2. In the upper right corner, click Create Workload. On the displayed page, click Data Storage in the Container Settings area and click Add Volume to select PVC.
      Table 1 describes the parameters for mounting the volume. For details about other parameters, see Creating a Workload.
      Table 1 Parameters for mounting a storage volume

      • PVC: Select an existing SFS volume.
      • Mount Path: Enter a mount path, for example, /tmp. This is the container path that the volume will be mounted to. Do not mount the volume to a system directory such as / or /var/run, as doing so may cause container errors. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, those files will be replaced, causing container startup or workload creation to fail.
        NOTICE: If a volume is mounted to a high-risk directory, start the container with an account that has the minimum permissions required; otherwise, high-risk files on the host may be damaged.
      • Subpath: Enter a subpath, for example, tmp, so that data written to the container mount path is stored in the tmp directory of the volume. A subpath allows a single volume to be shared for multiple uses in a pod. If this parameter is left blank, the root path is used by default.
      • Permission: Read-only means you can only read the data in the mounted volume. Read-write means you can modify the data in the mounted volume; newly written data is not migrated if the container is migrated, which may cause data loss.

      In this example, the volume is mounted to the /data path of the container. The container data generated in this path is stored in the SFS file system.

    3. Configure other parameters and click Create Workload.

      After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.

Using kubectl

  1. Use kubectl to connect to the cluster.
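    For example, you can run the following standard kubectl command to confirm that kubectl can reach the cluster before proceeding (an optional check, not part of the original procedure):
    kubectl cluster-info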
  2. Use StorageClass to dynamically create a PVC and PV.

    1. Create the pvc-sfs-auto.yaml file.
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-sfs-auto
        namespace: default
        annotations:
          everest.io/crypt-key-id: <your_key_id>        # (Optional) ID of the key for encrypting file systems. Mandatory for encrypting volumes.
          everest.io/crypt-alias: sfs/default           # (Optional) Key name. Mandatory for encrypting volumes.
          everest.io/crypt-domain-id: <your_domain_id>  # (Optional) ID of the tenant to which an encrypted volume belongs. Mandatory for encrypting volumes.
      spec:
        accessModes:
          - ReadWriteMany             # The value must be ReadWriteMany for SFS.
        resources:
          requests:
            storage: 1Gi             # SFS volume capacity.
        storageClassName: csi-sfs    # The StorageClass of the SFS file system
      Table 2 Key parameters

      • storage (mandatory): Requested capacity of the PVC, in Gi. For SFS, this field is used only for verification (it cannot be empty or 0). Its value is fixed at 1, and any other value you set does not take effect for SFS file systems.
      • everest.io/crypt-key-id (optional): Mandatory when the file system is encrypted. Enter the ID of the encryption key selected during file system creation. You can use a custom key or the default key named sfs/default. To obtain a key ID, log in to the DEW console, locate the key used for encryption, and copy its ID.
      • everest.io/crypt-alias (optional): Key name, which is mandatory when you create an encrypted volume. To obtain a key name, log in to the DEW console, locate the key used for encryption, and copy its name.
      • everest.io/crypt-domain-id (optional): ID of the tenant that the encrypted volume belongs to, which is mandatory when you create an encrypted volume. To obtain a tenant ID, hover the cursor over the username in the upper right corner of the console, choose My Credentials, and copy the account ID.
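      Before creating the PVC, you can optionally confirm that the csi-sfs storage class exists in the cluster (a standard kubectl check, not part of the original procedure):
      kubectl get storageclass csi-sfs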

    2. Run the following command to create a PVC:
      kubectl apply -f pvc-sfs-auto.yaml
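      You can then verify that the PVC has been bound and a PV has been dynamically provisioned (an optional check using standard kubectl; the PVC name matches this example):
      kubectl get pvc pvc-sfs-auto
      kubectl get pv
      If dynamic provisioning succeeded, the STATUS column of the PVC shows Bound.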

  3. Create a workload.

    1. Create a file named web-demo.yaml. In this example, the SFS volume is mounted to the /data path.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web-demo
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: web-demo
        template:
          metadata:
            labels:
              app: web-demo
          spec:
            containers:
            - name: container-1
              image: nginx:latest
              volumeMounts:
              - name: pvc-sfs-volume    # Volume name, which must be the same as the volume name in the volumes field.
                mountPath: /data  # Location where the storage volume is mounted.
            imagePullSecrets:
              - name: default-secret
            volumes:
              - name: pvc-sfs-volume    # Volume name, which can be changed as needed.
                persistentVolumeClaim:
                  claimName: pvc-sfs-auto    # PVC name.
    2. Run the following command to create a workload that the SFS volume is mounted to:
      kubectl apply -f web-demo.yaml

      After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
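      You can optionally wait for the Deployment to finish rolling out before verifying the storage (a standard kubectl command, not part of the original procedure):
      kubectl rollout status deployment/web-demo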

Verifying Data Persistence and Sharing

  1. View the deployed application and files.

    1. Run the following command to view the created pod:
      kubectl get pod | grep web-demo
      Expected output:
      web-demo-846b489584-mjhm9   1/1     Running   0             46s
      web-demo-846b489584-wvv5s   1/1     Running   0             46s
    2. Run the following commands in sequence to view the files in the /data path of the pods:
      kubectl exec web-demo-846b489584-mjhm9 -- ls /data
      kubectl exec web-demo-846b489584-wvv5s -- ls /data

      If no result is returned for both pods, no file exists in the /data path.

  2. Run the following command to create a file named static in the /data path:

    kubectl exec web-demo-846b489584-mjhm9 --  touch /data/static

  3. Run the following command to check the files in the /data path:

    kubectl exec web-demo-846b489584-mjhm9 -- ls /data

    Expected output:

    static

  4. Verify data persistence.

    1. Run the following command to delete the pod named web-demo-846b489584-mjhm9:
      kubectl delete pod web-demo-846b489584-mjhm9

      Expected output:

      pod "web-demo-846b489584-mjhm9" deleted

      After the deletion, the Deployment controller automatically creates a replica.

    2. Run the following command to view the created pod:
      kubectl get pod | grep web-demo
      The expected output is as follows, in which web-demo-846b489584-d4d4j is the newly created pod:
      web-demo-846b489584-d4d4j   1/1     Running   0             110s
      web-demo-846b489584-wvv5s   1/1     Running   0             7m50s
    3. Run the following command to check whether the files in the /data path of the new pod have been modified:
      kubectl exec web-demo-846b489584-d4d4j -- ls /data

      Expected output:

      static

      The static file is retained, indicating that the data in the file system can be stored persistently.

  5. Verify data sharing.

    1. Run the following command to view the created pod:
      kubectl get pod | grep web-demo
      Expected output:
      web-demo-846b489584-d4d4j   1/1     Running   0             7m
      web-demo-846b489584-wvv5s   1/1     Running   0             13m
    2. Run the following command to create a file named share in the /data path of either pod. In this example, the pod named web-demo-846b489584-d4d4j is used.
      kubectl exec web-demo-846b489584-d4d4j -- touch /data/share
      Check the files in the /data path of the pod.
      kubectl exec web-demo-846b489584-d4d4j -- ls /data

      Expected output:

      share
      static
    3. To verify data sharing, check whether the share file also exists in the /data path of the other pod (web-demo-846b489584-wvv5s).
      kubectl exec web-demo-846b489584-wvv5s -- ls /data

      Expected output:

      share
      static

      If a file created in the /data path of one pod also appears in the /data path of the other pod, the two pods share the same volume.

Related Operations

You can also perform the operations described in Table 3.
Table 3 Related operations

  • Viewing events: You can view the event name, event type, number of occurrences, Kubernetes event, first occurrence time, and last occurrence time of a PVC or PV.
    1. In the navigation pane on the left, choose Storage. Then click the PVCs or PVs tab.
    2. Locate the target PVC or PV and click View Events in the Operation column to view the events generated within the last hour (events are retained for one hour).
  • Viewing a YAML file: You can view, copy, and download the YAML file of a PVC or PV.
    1. In the navigation pane on the left, choose Storage. Then click the PVCs or PVs tab.
    2. Locate the target PVC or PV and click View YAML in the Operation column to view or download its YAML file.
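If you prefer kubectl, similar information is available from standard commands (shown here for convenience; replace <pvc_name> with the name of your PVC):

  kubectl describe pvc <pvc_name>      # Shows PVC details, including recent events
  kubectl get pvc <pvc_name> -o yaml   # Prints the YAML of the PVC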
