
Migrating Containerized Application Data from SFS 1.0 to SFS 3.0 or SFS Turbo

Updated on 2024-11-12 GMT+08:00

Scalable File Service (SFS) provides the following types of file systems: SFS 1.0 (Capacity-Oriented), SFS 3.0 (General Purpose), and SFS Turbo. For details, see File System Types.

In earlier versions, CCE supported mounting SFS 1.0 file systems to workloads. You are advised to migrate such workloads to SFS 3.0 or SFS Turbo.

Select a proper storage mounting mode based on your workload type. Dynamic mounting and static mounting differ in how storage volumes are mounted to workloads.

  • Dynamic mounting applies only to StatefulSets. It is implemented using the volumeClaimTemplates field and depends on dynamic creation of PVs through a StorageClass. A StatefulSet uses the volumeClaimTemplates field to associate each pod with a PVC, and the PVC is bound to the corresponding PV. Therefore, after a pod is rescheduled, the original data can still be mounted based on the PVC name.
  • Static mounting applies to all types of workloads. It is implemented using the volumes field.
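The dynamic mounting mechanism described above can be sketched as a minimal StatefulSet manifest. The workload name, image, and storage size below are hypothetical; csi-nas is the StorageClass CCE uses for SFS 1.0:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-app               # hypothetical workload name
spec:
  serviceName: example-app
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:alpine     # placeholder image
          volumeMounts:
            - name: data          # refers to the claim template below
              mountPath: /data
  volumeClaimTemplates:           # one PVC per pod: data-example-app-0, data-example-app-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: csi-nas # StorageClass for SFS 1.0 in CCE
        resources:
          requests:
            storage: 10Gi
```

Because each PVC name is derived from the template name and the pod ordinal, a rescheduled pod re-attaches to the same PV through its PVC.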
Notes and Constraints

NOTICE:

The procedure for migrating containerized applications from SFS 1.0 to SFS 3.0 or SFS Turbo is the same. The only difference is that SFS Turbo does not support dynamic PV creation, which affects StatefulSets that rely on it.

Migrating Data from Statically Mounted Storage

Static mounting applies to all types of workloads and is implemented using the volumes field. The procedure for migrating statically mounted storage data from SFS 1.0 to SFS 3.0 or SFS Turbo is the same. This section uses SFS Turbo as an example.

  1. Access the cluster console and choose Storage in the navigation pane. In the right pane, click the PVs tab. Then, click Create PersistentVolume in the upper right corner.

    Configure the following key parameters:
    • Volume Type: Select SFS Turbo.
    • SFS Turbo: Select the target SFS Turbo volume for data migration.
    • PV Name: Enter a custom name.
    • Access Mode: Select ReadWriteMany.
    • Reclaim Policy: Select Retain.
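As a reference, the console settings in this step roughly correspond to a PV manifest like the following sketch. The CSI field names assume CCE's everest CSI driver; compare them against a PV generated by the console before reusing them, and replace the placeholders with your SFS Turbo file system details:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sfsturbo-migrated            # custom PV name
spec:
  accessModes:
    - ReadWriteMany                     # Access Mode selected in the console
  capacity:
    storage: 500Gi                      # match the SFS Turbo file system capacity
  persistentVolumeReclaimPolicy: Retain # Reclaim Policy selected in the console
  csi:
    driver: everest-csi-provisioner     # assumption: CCE everest CSI driver
    fsType: nfs
    volumeHandle: <sfs-turbo-id>        # ID of the target SFS Turbo file system
    volumeAttributes:
      everest.io/share-export-location: <shared-path>  # SFS Turbo export location
```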

  2. After the PV is created, switch to the PVCs tab and click Create PVC in the upper right corner.

    Configure the following parameters:

    • PVC Type: Select SFS Turbo.
    • PVC Name: Enter a custom name, which must be different from the original SFS 1.0 PVC name.
    • Creation Method: Select Use existing.
    • PV: Select the volume created in the previous step.
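With the Use existing creation method, the PVC is statically bound to the PV through the volumeName field. The following sketch illustrates the idea; the names are hypothetical, and the storageClassName (omitted here) must match the one set on the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfsturbo-migrated       # must differ from the original SFS 1.0 PVC name
  namespace: default                # namespace of the workload
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi                # match the PV capacity
  volumeName: pv-sfsturbo-migrated  # binds this claim to the PV created in step 1
```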

    After the PVC is created, it is displayed in the PVC list and bound to the PV.

  3. Choose Workloads in the navigation pane. On the displayed page, locate the target workload and reduce the number of pods to 0.

  4. Click Upgrade in the Operation column of the workload. In Container Settings, switch to the Data Storage tab page and choose PVC from the drop-down list. In the displayed PVC area, select the PVC created in step 2 to replace the PVC used by the workload.

    NOTICE:

    After the migration, ensure that the mount path and subpath in the container are the same as those when SFS 1.0 is mounted.

  5. After the migration is complete, scale the number of pods back up to the original number.

    Check that the new volume works properly, and then delete the SFS 1.0 storage volume on the CCE console.

Migrating Data from Dynamically Mounted Storage Used by a StatefulSet

This section describes how to migrate data from dynamically mounted storage used by a StatefulSet from SFS 1.0 to SFS 3.0 or SFS Turbo.

NOTICE:
  • The automatic scale-out capability of dynamic mounting is supported only by StatefulSets.
  • SFS Turbo does not support dynamic provisioning. Therefore, after the data of a StatefulSet is migrated from SFS 1.0 to SFS Turbo, the StatefulSet no longer supports the automatic scale-out capability of dynamic mounting.
  1. Choose Workloads in the navigation pane of the cluster console. Switch to the StatefulSets tab page, record the number of pods of the target workload, and reduce the number of pods to 0.

    NOTE:

    Perform steps 2 to 6 for the PVC used by each pod.

  2. Check the PVC mounting mode of the StatefulSet.

    kubectl get statefulset {statefulset-name} -n {namespace} -o jsonpath='{.spec.volumeClaimTemplates}'
    • If the claim templates are displayed, dynamic mounting is used. In this case, perform steps 3 to 7.
    • If there is no output, dynamic mounting is not used. In this case, skip these steps and refer to Migrating Data from Statically Mounted Storage.

  3. Run the following command to change the persistentVolumeReclaimPolicy value of the PV corresponding to the PVC used by the pod from Delete to Retain:

    kubectl edit pv {pv-name}

    Check the result.

    kubectl get pv {pv-name} -o yaml | grep persistentVolumeReclaimPolicy

    Example:

     # kubectl get pv pvc-29467e4a-0120-4698-a147-5b75f0ae9a43 -o yaml |grep persistentVolumeReclaimPolicy 
       persistentVolumeReclaimPolicy: Retain 

  4. On the Storage page, click the PVs tab. Then, record the name of the PVC bound to the SFS 1.0 PV and delete that PVC. The PV then changes to the Released state.

  5. Click Create PersistentVolume in the upper right corner and configure the following parameters:

    • Volume Type: Select SFS Turbo.
    • SFS Turbo: Select the target SFS Turbo volume for data migration.
    • PV Name: Enter a custom name.
    • Access Mode: Select ReadWriteMany.
    • Reclaim Policy: Select Retain.

  6. Choose Storage in the navigation pane. In the right pane, click the PVCs tab. Click Create PVC in the upper right corner, create a PVC with the same name as the one recorded in step 4, and bind it to the SFS Turbo volume created in the previous step.

  7. After the PVCs corresponding to all pods are migrated, expand the number of pods to the original number.

    After confirmation, go to the SFS console to delete the corresponding SFS 1.0 volume and delete the PV corresponding to SFS 1.0 on the CCE console.

NOTICE:
  • The automatic scale-out capability of dynamic mounting is supported only by StatefulSets.
  • After a StatefulSet is migrated from SFS 1.0 to SFS 3.0, it still supports automatic scale-out.

To enable StatefulSets to support dynamic scale-out after the migration, change the StorageClass used by volumeClaimTemplates from csi-nas (SFS 1.0) to csi-sfs (SFS 3.0). However, the volumeClaimTemplates field of a StatefulSet that uses dynamic mounting cannot be modified in place. Therefore, delete the StatefulSet and then rebuild it. During the process, ensure that the configurations, including the number of pods, are the same as those before the migration.
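For example, in the backed-up StatefulSet YAML, only the StorageClass in the claim template needs to change (the template name and size below are hypothetical; keep the original values so that the rebuilt pods reuse the existing PVC names):

```yaml
volumeClaimTemplates:
  - metadata:
      name: data                  # keep the original template name so PVC names stay the same
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: csi-sfs   # changed from csi-nas (SFS 1.0) to csi-sfs (SFS 3.0)
      resources:
        requests:
          storage: 10Gi           # keep the originally requested size
```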

  1. Choose Workloads in the navigation pane. On the displayed page, locate the target workload, record the number of pods of the workload, and reduce the number of pods to 0.

    NOTE:

    Perform steps 2 to 6 for the PVC used by each pod.

  2. Check the PVC mounting mode of the StatefulSet.

    kubectl get statefulset {statefulset-name} -n {namespace} -o jsonpath='{.spec.volumeClaimTemplates}'
    • If the claim templates are displayed, dynamic mounting is used. In this case, perform steps 3 to 7.
    • If there is no output, dynamic mounting is not used. In this case, skip these steps and refer to Migrating Data from Statically Mounted Storage.

  3. Run the following command to change the persistentVolumeReclaimPolicy value of the SFS 1.0 PV from Delete to Retain:

    kubectl edit pv {pv-name}

    Check the result.

    kubectl get pv {pv-name} -o yaml | grep persistentVolumeReclaimPolicy

    Example:

     # kubectl get pv pvc-29467e4a-0120-4698-a147-5b75f0ae9a43 -o yaml |grep persistentVolumeReclaimPolicy 
       persistentVolumeReclaimPolicy: Retain 

  4. On the Storage page, click the PVs tab. Then, record the name of the PVC bound to the SFS 1.0 PV and delete that PVC. The PV then changes to the Released state.

  5. Click Create PersistentVolume in the upper right corner and configure the following parameters:

    • Volume Type: Select SFS.
    • SFS: Select the SFS 3.0 storage volume after data migration.
    • PV Name: Enter a custom name.
    • Access Mode: Select ReadWriteMany.
    • Reclaim Policy: Set this parameter as required.
      • Delete: The PV will be removed from Kubernetes, and the associated storage assets will also be removed from the external infrastructure.
      • Retain: When the PVC is deleted, the PV is retained and the data volume is marked as Released.

  6. Choose Storage in the navigation pane. In the right pane, click the PVCs tab. Click Create PVC in the upper right corner, create a PVC with the same name, and bind the PVC to the SFS 3.0 volume created in the previous step.

    • PVC Type: Select SFS.
    • PVC Name: Use the same name as the PVC recorded in step 4.
    • Creation Method: Select Use existing.
    • PV: Select the volume created in the previous step.

  7. Go to the Workloads page to view the original stateful application. Choose More > Edit YAML, and click Download or copy all content of the YAML file to back up the file locally.
  8. Delete the original stateful application and modify the copied YAML configuration as follows:

    • Change the value of storageClassName from csi-nas to csi-sfs.
    • Delete the resourceVersion field and its value because this field cannot be specified during creation.
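In the metadata section of the copied YAML, this means removing the server-populated fields before re-creating the workload, for example:

```yaml
metadata:
  name: example-app     # hypothetical workload name; keep the original value
  namespace: default
  # Server-populated fields such as resourceVersion (and, typically,
  # uid and creationTimestamp) must be removed before re-creation:
  # resourceVersion: "8912345"
```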

  9. Click Create from YAML in the upper right corner, click Import or paste the modified YAML file content, and click OK.
  10. After the workload is created, scale the number of pods out to the original number.

    After confirmation, go to the SFS console to delete the corresponding SFS 1.0 volume and delete the PV corresponding to SFS 1.0 on the CCE console.

