Updated on 2025-07-28 GMT+08:00

Cross-Server Data Migration to SFS Turbo (rclone)

Solution Overview

You can use rclone to migrate data from a local NAS to SFS Turbo over the Internet or private network.

In this solution, to migrate data from the local NAS to the cloud, one Linux server is used on-premises and one Linux ECS is used on the cloud, with inbound and outbound traffic allowed on port 22 of both servers. The on-premises server is used to access the local NAS, and the ECS is used to access SFS Turbo.

You can also refer to this solution to migrate data from an on-cloud NAS to SFS Turbo over the Internet or private network.

Constraints

  • Only Linux ECSs can be used to migrate data.
  • File UIDs and GIDs are not preserved after the migration.
  • File access permissions (modes) are not preserved after the migration.
  • Inbound and outbound traffic must be allowed on port 22.
  • Incremental migration is supported, so you can migrate only the changed data.
  • If data is written to the file system after the rclone command has been run, data inconsistency may occur.

Prerequisites

  • A Linux server has been created on the cloud and on-premises respectively.
  • An elastic IP address (EIP) has been bound to the ECS to ensure that the two servers can communicate with each other.
  • You have created an SFS Turbo file system and obtained its shared path.
  • You have obtained the shared path of the local NAS.
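Before starting the migration, it can be worth confirming that client1 can reach client2 on port 22. A minimal sketch, shown as a dry run that only assembles the probe command (the EIP below is a placeholder):

```shell
# Placeholder EIP; replace with the EIP actually bound to client2 (the ECS).
CLIENT2_EIP="203.0.113.10"

# ssh with a short timeout is a simple reachability probe for port 22.
CHECK_CMD="ssh -o ConnectTimeout=5 -p 22 root@${CLIENT2_EIP} exit"
echo "${CHECK_CMD}"
```

Run the echoed command on client1; if it prompts for a password (or logs in), port 22 is reachable.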

Resource Planning

Table 1 describes the resource planning in this solution.

Table 1 Resource planning

Resource: ECS

Example Configuration:
  • Specifications: 8 vCPUs | 16 GB | c7.2xlarge.2
  • OS: Linux
  • Region: CN-Hong Kong
  • VPC: VPC1
  • Enabled port: 22
  • EIP: xxx.xxx.xxx.xxx

Description: Ensure that the /mnt/dst directory has been created.

Procedure

  1. Log in to the ECS console.
  2. Log in to the on-premises server client1 and run the following command to mount the local NAS:

    Replace <shared-path-of-the-local-NAS> with the actual NAS address, for example, 192.168.0.0:/. Replace /mnt/src with the actual source path.
    mount -t nfs -o vers=3,timeo=600,noresvport,nolock,tcp <shared-path-of-the-local-NAS> /mnt/src

  3. Log in to the Linux ECS client2 and run the following command to mount the SFS Turbo file system:

    Replace <shared-path-of-the-SFS-Turbo-file-system> with the actual SFS Turbo file system address, for example, 192.168.0.0:/. Replace /mnt/dst with the actual destination path.
    mount -t nfs -o vers=3,timeo=600,noresvport,nolock,tcp <shared-path-of-the-SFS-Turbo-file-system> /mnt/dst
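The two mount steps above can be captured in one parameterized sketch. The share addresses below are the example values from this guide and must be replaced with your real paths; the commands are assembled as strings (a dry run), to be executed on client1 and client2 respectively:

```shell
# Example values from this guide; substitute your real share paths.
NAS_SHARE="192.168.0.0:/"      # shared path of the local NAS (mounted on client1)
TURBO_SHARE="192.168.0.0:/"    # shared path of the SFS Turbo file system (mounted on client2)
NFS_OPTS="vers=3,timeo=600,noresvport,nolock,tcp"

# Assemble the mount commands used in steps 2 and 3.
SRC_MOUNT="mount -t nfs -o ${NFS_OPTS} ${NAS_SHARE} /mnt/src"
DST_MOUNT="mount -t nfs -o ${NFS_OPTS} ${TURBO_SHARE} /mnt/dst"
echo "${SRC_MOUNT}"
echo "${DST_MOUNT}"
```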

  4. Install rclone on client1.

    wget https://downloads.rclone.org/v1.53.4/rclone-v1.53.4-linux-amd64.zip --no-check-certificate
    unzip rclone-v1.53.4-linux-amd64.zip
    chmod 755 ./rclone-*/rclone
    cp ./rclone-*/rclone /usr/bin/
    rm -rf ./rclone-*
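The install commands above pin rclone v1.53.4 for x86_64 servers. Making the version and architecture explicit shows how to adapt the download URL, which follows the downloads.rclone.org naming pattern (for example, use linux-arm64 on ARM-based servers):

```shell
RCLONE_VERSION="v1.53.4"
RCLONE_ARCH="linux-amd64"   # e.g. linux-arm64 on ARM-based servers

# Download URL following the downloads.rclone.org naming pattern.
RCLONE_URL="https://downloads.rclone.org/${RCLONE_VERSION}/rclone-${RCLONE_VERSION}-${RCLONE_ARCH}.zip"
echo "${RCLONE_URL}"
```

After unpacking and copying the binary to /usr/bin, running `rclone version` confirms the installation.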

  5. Configure the environment on client1.

    rclone config
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote name (New name)
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    24 / SSH/SFTP Connection
       \ "sftp"
    Storage> 24 (Select the SSH/SFTP number)
    SSH host to connect to
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Connect to example.com
       \ "example.com"
    host> ip address (IP address of client2)
    SSH username, leave blank for current username, root
    Enter a string value. Press Enter for the default ("").
    user> user name (Username of client2)
    SSH port, leave blank to use default (22)
    Enter a string value. Press Enter for the default ("").
    port> 22
    SSH password, leave blank to use ssh-agent.
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank
    y/g/n> y
    Enter the password:
    password: (Password for logging in to client2)
    Confirm the password:
    password: (Confirm the password)
    Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
    Enter a string value. Press Enter for the default ("").
    key_file> (Press Enter)
    The passphrase to decrypt the PEM-encoded private key file.
     
    Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
    in the new OpenSSH format can't be used.
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank
    y/g/n> n
    When set forces the usage of the ssh-agent.
    When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
    requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
    when the ssh-agent contains many keys.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    key_use_agent> (Press Enter)
    Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    Choose a number from below, or type in your own value
     1 / Use default Cipher list.
       \ "false"
     2 / Enables the use of the aes128-cbc cipher.
       \ "true"
    use_insecure_cipher> (Press Enter)
    Disable the execution of SSH commands to determine if remote file hashing is available.
    Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    disable_hashcheck> (Press Enter)
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    -------------------
    [remote_name]
    type = sftp
    host = (client2 IP address)
    user = (client2 username)
    port = 22
    pass = *** ENCRYPTED ***
    key_file_pass = *** ENCRYPTED ***
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:
     
    Name                 Type
    ====                 ====
    remote_name          sftp 
     
    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q

    For the IP address of client2, enter the public IP address (EIP) bound to it.
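Instead of walking through the interactive prompts, the same remote can be created non-interactively with rclone's `rclone config create` command, which takes the remote name, the backend type, and key/value pairs. The host, user, and remote name below are placeholders; the password can be set afterwards with `rclone config password` so it is stored obscured rather than passed on the command line. Shown as a dry run that only assembles the command:

```shell
# Placeholders; replace with client2's real EIP and login user.
REMOTE_NAME="remote_name"
CLIENT2_EIP="203.0.113.10"
CLIENT2_USER="root"

# Non-interactive equivalent of the prompts above (assembled, not executed).
CFG_CMD="rclone config create ${REMOTE_NAME} sftp host ${CLIENT2_EIP} user ${CLIENT2_USER} port 22"
echo "${CFG_CMD}"
```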

  6. View the rclone.conf file in /root/.config/rclone/rclone.conf.

    cat /root/.config/rclone/rclone.conf
    [remote_name]
    type = sftp
    host = (client2 IP address)
    user = (client2 username)
    port = 22
    pass = ***
    key_file_pass = ***

  7. Run the following command on client1 to synchronize data:

    rclone copy /mnt/src <remote_name>:/mnt/dst -P --transfers 32 --checkers 64
    • Replace <remote_name> with the name of the remote configured earlier.
    • The parameters are described as follows. Set transfers and checkers based on the server specifications.
      • transfers: number of file transfers to run in parallel
      • checkers: number of checkers (file comparisons) to run in parallel
      • -P: shows real-time transfer progress

    After data synchronization is complete, go to the SFS Turbo file system to check whether data is migrated.
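For long-running migrations it can help to keep a persistent log of what rclone transferred. A sketch that extends the copy command above with rclone's standard logging flags (the log path is an arbitrary choice), again assembled as a dry run:

```shell
REMOTE_NAME="remote_name"   # name of the remote configured earlier

# rclone copy with progress plus a persistent log (--log-file / --log-level).
COPY_CMD="rclone copy /mnt/src ${REMOTE_NAME}:/mnt/dst -P --transfers 32 --checkers 64 --log-file /var/log/rclone-migrate.log --log-level INFO"
echo "${COPY_CMD}"
```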

Verification

  1. Log in to the on-premises server client1, where the source directory /mnt/src is mounted.
  2. Check the file synchronization results against the destination. The following command compares only file quantity and size. If they match the source, the data is considered migrated successfully. Replace <remote_name> with the name of the remote configured earlier.

    rclone check /mnt/src <remote_name>:/mnt/dst --size-only --checkers 64

  3. (Optional) To further verify data consistency, compare the file hash values. If all hash values match, the data has been migrated successfully.

    rclone check /mnt/src <remote_name>:/mnt/dst --checksum --checkers 64
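`rclone check` exits with status 0 when source and destination match and non-zero otherwise, so the checks above can gate follow-up steps in a script. A sketch of that pattern, with `true` standing in for the real check command so the snippet runs anywhere:

```shell
# Stand-in for: rclone check /mnt/src <remote_name>:/mnt/dst --size-only --checkers 64
run_check() { true; }

# Branch on the exit status of the check.
if run_check; then
  RESULT="match"
else
  RESULT="mismatch"
fi
echo "${RESULT}"
```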