Using the Internet to Migrate Data
Context
You can migrate data from a local NAS to SFS Turbo using the Internet.
In this solution, a Linux server is created both on the cloud and on-premises, and inbound and outbound traffic is allowed on port 22 of both servers. The on-premises server is used to access the local NAS, and the ECS is used to access SFS Turbo.
You can also refer to this solution to migrate data from an on-cloud NAS to SFS Turbo.
Limitations and Constraints
- Data cannot be migrated from the local NAS to SFS Capacity-Oriented using the Internet.
- Only Linux ECSs can be used to migrate data.
- The UIDs and GIDs of files will not be preserved after data migration.
- The file access modes (permissions) will not be preserved after data migration.
- Inbound and outbound traffic must be allowed on port 22.
- Incremental migration is supported, so that only changed data is migrated.
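Because UIDs, GIDs, and access modes are not preserved (see the constraints above), one workaround is to record ownership before migration and reapply it afterwards. The helper names below are ours, not part of the official procedure; a minimal sketch assuming the source tree is reachable as a local path and that GNU find is available:

```shell
# record_owners: emit "uid gid path" for every entry under the given root.
# (Helper names are illustrative; adapt the paths to your environment.)
record_owners() {
    ( cd "$1" && find . -printf '%U %G %p\n' ) > "$2"
}

# restore_owners: reapply the recorded ownership under a destination root.
# Running chown for arbitrary users requires root on the destination.
restore_owners() {
    while read -r uid gid path; do
        chown "$uid:$gid" "$2/$path"
    done < "$1"
}

# Example: record_owners /mnt/src /tmp/owners.txt   (before migration)
#          restore_owners /tmp/owners.txt /mnt/dst  (after migration)
```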
Prerequisites
- A Linux server has been created on the cloud and on-premises respectively.
- EIPs have been configured for the servers to ensure that the two servers can communicate with each other.
- You have created an SFS Turbo file system and have obtained the mount point of the file system.
- You have obtained the mount point of the local NAS.
Procedure
- Log in to the ECS console.
- Log in to the created on-premises server client1 and run the following command to access the local NAS:
mount -t nfs -o vers=3,timeo=600,noresvport,nolock Mount point of the local NAS /mnt/src
- Log in to the created Linux ECS client2 and run the following command to access the SFS Turbo file system:
mount -t nfs -o vers=3,timeo=600,noresvport,nolock Mount point of the SFS Turbo file system /mnt/dst
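The mount commands above assume the local mount directories already exist, and it is worth confirming that each mount actually took effect before copying data. A small sketch; the is_mounted helper name is ours and simply checks /proc/mounts:

```shell
# Create the local mount directories used by the mount commands above.
mkdir -p /mnt/src /mnt/dst

# is_mounted: succeed if the given path is an active mount point,
# by matching the second field (mount point) in /proc/mounts.
is_mounted() {
    awk -v p="$1" '$2 == p { found = 1 } END { exit !found }' /proc/mounts
}

# Example, after running the mount commands:
# is_mounted /mnt/src && echo "local NAS mounted" || echo "mount missing"
```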
- Run the following commands on client1 to install the rclone tool:
wget https://downloads.rclone.org/v1.53.4/rclone-v1.53.4-linux-amd64.zip --no-check-certificate
unzip rclone-v1.53.4-linux-amd64.zip
chmod 0755 ./rclone-*/rclone
cp ./rclone-*/rclone /usr/bin/
rm -rf ./rclone-*
- Run the following commands on client1 to configure the environment:
rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote_name (New name)
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
24 / SSH/SFTP Connection
   \ "sftp"
Storage> 24 (Select the SSH/SFTP number)
SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> ip address (IP address of client2)
SSH username, leave blank for current username, root
Enter a string value. Press Enter for the default ("").
user> user name (Username of client2)
SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").
port> 22
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password: (Password for logging in to client2)
Confirm the password:
password: (Confirm the password for logging in to client2)
Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Enter a string value. Press Enter for the default ("").
key_file> (Press Enter)
The passphrase to decrypt the PEM-encoded private key file. Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
When set forces the usage of the ssh-agent. When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").
key_use_agent> (Press Enter)
Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Use default Cipher list.
   \ "false"
 2 / Enables the use of the aes128-cbc cipher.
   \ "true"
use_insecure_cipher> (Press Enter)
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_hashcheck> (Press Enter)
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[remote_name]
type = sftp
host = ip address (IP address of client2)
user = user name (Username of client2)
port = 22
pass = *** ENCRYPTED ***
key_file_pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote_name          sftp

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
- Run the following command to view the rclone.conf file in /root/.config/rclone/rclone.conf:
cat /root/.config/rclone/rclone.conf
[remote_name]
type = sftp
host = ip address (IP address of client2)
user = user name (Username of client2)
port = 22
pass = ***
key_file_pass = ***
- Run the following command on client1 to synchronize data:
rclone copy /mnt/src remote_name:/mnt/dst -P --transfers 32 --checkers 64
NOTE:
- Replace remote_name in the command with the remote name in the environment.
- Set transfers and checkers based on the system specifications. The parameters are described as follows:
- transfers: number of files that can be transferred concurrently
- checkers: number of local files that can be scanned concurrently
- -P: displays the real-time data copy progress
After the data synchronization is complete, check the SFS Turbo file system to verify that the data has been migrated.
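Beyond spot-checking the destination, you can compare the two file trees directly. A minimal verification sketch, assuming both trees are reachable as local paths (for example /mnt/src on client1 and /mnt/dst on client2); the verify_trees helper name is ours:

```shell
# verify_trees: compare two directory trees by per-file MD5 checksums.
# Paths are hashed relative to each tree root so the lists are comparable.
verify_trees() {
    ( cd "$1" && find . -type f -exec md5sum {} + | sort -k 2 ) > /tmp/src.md5
    ( cd "$2" && find . -type f -exec md5sum {} + | sort -k 2 ) > /tmp/dst.md5
    if diff -q /tmp/src.md5 /tmp/dst.md5 > /dev/null; then
        echo "trees match"
    else
        echo "trees differ"
        diff /tmp/src.md5 /tmp/dst.md5 | head
    fi
}

# Example: verify_trees /mnt/src /mnt/dst
```

Checksumming every file can be slow on large trees; for a quicker sanity check, compare `find <dir> -type f | wc -l` counts first.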