
Cross-server Migration (rclone)

Updated on 2023-06-21 GMT+08:00

Solution Overview

You can use rclone to migrate data from a local NAS to SFS Turbo over the Internet or private network.

In this solution, a Linux server is created on-premises and a Linux ECS is created on the cloud. Inbound and outbound traffic must be allowed on port 22 of both servers. The on-premises server is used to access the local NAS, and the ECS is used to access SFS Turbo.

You can also refer to this solution to migrate data from an on-cloud NAS to SFS Turbo over the Internet or private network.

Limitations and Constraints

  • Data cannot be migrated from the local NAS to SFS Capacity-Oriented over the Internet.
  • Only Linux ECSs can be used to migrate data.
  • The UIDs and GIDs of your files will not be preserved after the migration.
  • File access modes (permissions) will not be preserved after the migration.
  • Inbound and outbound traffic must be allowed on port 22.
  • Incremental migration is supported, so only changed data is migrated on subsequent runs.
  • If data is written to the file system after you have run the rclone command to migrate data, data inconsistency may occur.

Prerequisites

  • A Linux server has been created on the cloud and another on-premises.
  • Elastic IP addresses (EIPs) have been configured for the servers so that the two servers can communicate with each other. (A connectivity check is sketched after this list.)
  • You have created an SFS Turbo file system and have obtained the mount point of the file system.
  • You have obtained the mount point of the local NAS.
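
A quick way to verify connectivity is to test SSH access to client2 on port 22 from client1 (a sketch; xxx.xxx.xxx.xxx stands for the EIP of client2 and root is an example username):

    ssh -p 22 root@xxx.xxx.xxx.xxx    # should prompt for a password or key passphrase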

Resource Planning

Table 1 describes the resource planning in this solution.

Table 1 Resource planning

Resource: ECS

Example configuration:
  • Specifications: 8 vCPUs | 16 GB | c7.2xlarge.2
  • OS: Linux
  • Region: CN-Hong Kong
  • VPC: VPC1
  • Enabled port: 22
  • EIP: xxx.xxx.xxx.xxx

Description: Ensure that the /mnt/dst directory has been created on the ECS.

Procedure

  1. Log in to the ECS console.
  2. Log in to the created on-premises server client1 and run the following command to access the local NAS:

    mount -t nfs -o vers=3,timeo=600,noresvport,nolock <Mount point of the local NAS> /mnt/src
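
    If /mnt/src does not exist yet, create it before mounting. The following optional pre-check (standard Linux commands) confirms that the mount succeeded:

    mkdir -p /mnt/src        # create the mount point (run before the mount command above)
    mount | grep /mnt/src    # confirm the NFS mount is active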

  3. Log in to the created Linux ECS client2 and run the following command to access the SFS Turbo file system:

    mount -t nfs -o vers=3,timeo=600,noresvport,nolock <Mount point of the SFS Turbo file system> /mnt/dst
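
    Similarly, create /mnt/dst first if needed and optionally confirm that the SFS Turbo file system is writable (the test file name below is arbitrary):

    mkdir -p /mnt/dst                                        # create the mount point (run before the mount command above)
    touch /mnt/dst/.write_test && rm /mnt/dst/.write_test    # verify write access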

  4. Run the following commands on client1 to install the rclone tool:

    wget https://downloads.rclone.org/v1.53.4/rclone-v1.53.4-linux-amd64.zip --no-check-certificate
    unzip rclone-v1.53.4-linux-amd64.zip
    chmod 0755 ./rclone-*/rclone
    cp ./rclone-*/rclone /usr/bin/
    rm -rf ./rclone-*
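
    You can confirm that the binary is installed and on the PATH:

    rclone version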

  5. Run the following commands on client1 to configure the environment:

    rclone config
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote_name (Enter a name for the new remote)
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    24 / SSH/SFTP Connection
       \ "sftp"
    Storage> 24 (Select the SSH/SFTP number)
    SSH host to connect to
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Connect to example.com
       \ "example.com"
    host> ip address (IP address of client2)
    SSH username, leave blank for current username, root
    Enter a string value. Press Enter for the default ("").
    user> user name (Username of client2)
    SSH port, leave blank to use default (22)
    Enter a string value. Press Enter for the default ("").
    port> 22
    SSH password, leave blank to use ssh-agent.
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank
    y/g/n> y
    Enter the password:
    password: (Password for logging in to client2)
    Confirm the password:
    password: (Confirm the password)
    Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
    Enter a string value. Press Enter for the default ("").
    key_file> (Press Enter)
    The passphrase to decrypt the PEM-encoded private key file.
     
    Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
    in the new OpenSSH format can't be used.
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank
    y/g/n> n
    When set forces the usage of the ssh-agent.
    When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
    requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
    when the ssh-agent contains many keys.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    key_use_agent> (Press Enter)
    Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    Choose a number from below, or type in your own value
     1 / Use default Cipher list.
       \ "false"
     2 / Enables the use of the aes128-cbc cipher.
       \ "true"
    use_insecure_cipher> (Press Enter)
    Disable the execution of SSH commands to determine if remote file hashing is available.
    Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    disable_hashcheck> 
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    -------------------
    [remote_name]
    type = sftp
    host = (IP address of client2)
    user = (username of client2)
    port = 22
    pass = *** ENCRYPTED ***
    key_file_pass = *** ENCRYPTED ***
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:
     
    Name                 Type
    ====                 ====
    remote_name          sftp 
     
    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q
    NOTE:

    The IP address of client2 must be reachable from client1. When migrating over the Internet, use the public IP address (EIP) of client2.
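
    If you prefer to skip the interactive walkthrough, the remote can also be created non-interactively. The following is a minimal sketch that assumes key-based SSH authentication is already set up between client1 and client2 (the remote name, IP address, username, and key path are placeholders):

    rclone config create remote_name sftp \
        host xxx.xxx.xxx.xxx \
        user root \
        port 22 \
        key_file /root/.ssh/id_rsa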

  6. Run the following command to view the generated configuration in /root/.config/rclone/rclone.conf:

    cat /root/.config/rclone/rclone.conf
    [remote_name]
    type = sftp
    host = (IP address of client2)
    user = (username of client2)
    port = 22
    pass = ***
    key_file_pass = ***
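
    Because this file stores the (obscured) login password, consider restricting its permissions:

    chmod 600 /root/.config/rclone/rclone.conf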

  7. Run the following command on client1 to synchronize data:

    rclone copy /mnt/src remote_name:/mnt/dst -P --transfers 32 --checkers 64
    NOTE:
    • Replace remote_name in the command with the name of the remote configured in your environment.
    • Set --transfers and --checkers based on the system specifications. The options are described as follows:
      • --transfers: number of files that can be transferred concurrently
      • --checkers: number of local files that can be scanned concurrently
      • -P: displays real-time transfer progress

    After the synchronization is complete, go to the SFS Turbo file system and check whether the data has been migrated.
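
    Because rclone copy skips files that are already identical on the destination, re-running the same command performs an incremental migration. If you also want destination files deleted when they no longer exist on the source, rclone sync does that (use it with care, as it is destructive):

    rclone copy /mnt/src remote_name:/mnt/dst -P --transfers 32 --checkers 64   # incremental re-run
    # rclone sync /mnt/src remote_name:/mnt/dst -P                              # also deletes extra destination files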

Verification

  1. Log in to the created Linux ECS.
  2. Run the following commands on the destination server to verify file synchronization:

    cd /mnt/dst
    ls | wc -l

  3. If the number of entries is the same as that in /mnt/src on the source server, the data has been migrated successfully. For a stronger check, see the sketch below.
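
For a stronger verification than counting directory entries, you can run rclone's own consistency check from client1; it compares file sizes and, where available, checksums between the source and the destination:

    rclone check /mnt/src remote_name:/mnt/dst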
