Migrating Local Data to SFS Turbo over the Internet
Context
You can migrate data from a local NAS to SFS Turbo over the Internet.
In this solution, a Linux server is created on-premises and another (an ECS) is created on the cloud. The on-premises server is used to access the local NAS, and the ECS is used to access SFS Turbo. Inbound and outbound traffic must be allowed on port 22 of both servers.
You can also refer to this solution to migrate data from an on-cloud NAS to SFS Turbo.
Notes and Constraints
- Only Linux ECSs can be used to migrate data.
- The UIDs and GIDs of your files will no longer be consistent after data migration.
- The file access modes will no longer be consistent after data migration.
- Inbound and outbound traffic must be allowed on port 22.
- Incremental migration is supported, so you can migrate only the changed data.
Prerequisites
- A Linux server has been created on the cloud and on-premises respectively.
- An EIP has been bound to the ECS to ensure that the two servers can communicate with each other.
- You have created an SFS Turbo file system and obtained its shared path.
- You have obtained the shared path of the local NAS.
Procedure
- Log in to the ECS console.
- Log in to the on-premises server client1 and run the following command to mount the local NAS:
mount -t nfs -o vers=3,timeo=600,noresvport,nolock,tcp <Shared path of the local NAS> /mnt/src
- Log in to the Linux ECS client2 and run the following command to mount the SFS Turbo file system:
mount -t nfs -o vers=3,timeo=600,noresvport,nolock,tcp <Shared path of the SFS Turbo file system> /mnt/dst
- Install rclone on client1.
wget https://downloads.rclone.org/v1.53.4/rclone-v1.53.4-linux-amd64.zip --no-check-certificate
unzip rclone-v1.53.4-linux-amd64.zip
chmod 0755 ./rclone-*/rclone
cp ./rclone-*/rclone /usr/bin/
rm -rf ./rclone-*
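A quick sanity check after the copy step (an optional sketch, not part of the official procedure) confirms that the binary landed on the PATH:

```shell
# Print the installed rclone version line, or a notice if the
# install did not complete.
if command -v rclone >/dev/null 2>&1; then
    rclone version | head -n 1
else
    echo "rclone not found on PATH"
fi
```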
- Configure the environment on client1.
rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote name (New name)
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
24 / SSH/SFTP Connection
   \ "sftp"
Storage> 24 (Select the SSH/SFTP number)
SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> ip address (IP address of client2)
SSH username, leave blank for current username, root
Enter a string value. Press Enter for the default ("").
user> user name (Username of client2)
SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").
port> 22
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password: (Password for logging in to client2)
Confirm the password:
password: (Confirm the password for logging in to client2)
Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Enter a string value. Press Enter for the default ("").
key_file> (Press Enter)
The passphrase to decrypt the PEM-encoded private key file. Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
When set forces the usage of the ssh-agent. When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").
key_use_agent> (Press Enter)
Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Use default Cipher list.
   \ "false"
 2 / Enables the use of the aes128-cbc cipher.
   \ "true"
use_insecure_cipher> (Press Enter)
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_hashcheck>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[remote_name]
type = sftp
host = (client2 ip)
user = (client2 user name)
port = 22
pass = *** ENCRYPTED ***
key_file_pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name          Type
====          ====
remote_name   sftp

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
- View the rclone.conf file in /root/.config/rclone/rclone.conf.
cat /root/.config/rclone/rclone.conf
[remote_name]
type = sftp
host = (client2 ip)
user = (client2 user name)
port = 22
pass = ***
key_file_pass = ***
- Run the following command on client1 to synchronize data:
rclone copy /mnt/src <remote_name>:/mnt/dst -P --transfers 32 --checkers 64
NOTE:
- Replace <remote_name> in the command with the remote name configured in your environment.
- The parameters are described as follows. Set --transfers and --checkers based on the system specifications.
- --transfers: number of files that can be transferred concurrently
- --checkers: number of local files that can be scanned concurrently
- -P: displays the data copy progress
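The note above leaves the exact --transfers and --checkers values to the operator. One possible heuristic (an assumption, not an official sizing rule) is to scale concurrency with the number of vCPUs, capped at the values used in the example command:

```shell
# Hypothetical sizing heuristic: 4 transfers per vCPU, capped at 32,
# with twice as many checkers as transfers.
CPUS=$(getconf _NPROCESSORS_ONLN)
TRANSFERS=$(( CPUS * 4 > 32 ? 32 : CPUS * 4 ))
CHECKERS=$(( TRANSFERS * 2 ))
echo "suggested flags: --transfers $TRANSFERS --checkers $CHECKERS"
```

Larger values increase throughput for many small files but also raise CPU and memory pressure on both clients, so start conservatively and adjust.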
After data synchronization is complete, go to the SFS Turbo file system to check whether data is migrated.
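One way to check that the data arrived intact (a sketch, assuming both file systems are still mounted at /mnt/src on client1 and /mnt/dst on client2) is to list relative paths and file sizes on each side and compare the listings:

```shell
# Build "relative-path size" listings for source and destination,
# then diff them; any output means the trees differ.
( cd /mnt/src && find . -type f -printf '%P %s\n' | sort ) > /tmp/src.list
( cd /mnt/dst && find . -type f -printf '%P %s\n' | sort ) > /tmp/dst.list
diff /tmp/src.list /tmp/dst.list && echo "source and destination match"
```

Alternatively, running `rclone check /mnt/src <remote_name>:/mnt/dst` from client1 compares the two sides directly.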