
Backup Import from Another Cloud Using redis-shake

redis-shake is an open-source tool for migrating data between Redis instances, either online or offline (by importing backup files). If the source and target Redis cannot be connected, or the source Redis is deployed in another cloud that does not support online migration, you can migrate data by importing backup files instead.

The following describes how to use redis-shake for backup migration to a DCS Redis Cluster instance.

Prerequisites

  • A target DCS Redis Cluster instance has been created. Its memory cannot be smaller than that of the source Redis Cluster.
  • An ECS has been created for running redis-shake. The ECS must use the same VPC, subnet, and security group as the Redis instance.

Procedure

  1. Access the target Redis instance using redis-cli and obtain the IP addresses and ports of all master nodes of the target instance.
    redis-cli -h {target_redis_address} -p {target_redis_port} -a {target_redis_password} cluster nodes
    • {target_redis_address}: connection address of the target DCS Redis instance.
    • {target_redis_port}: port of the target DCS Redis instance.
    • {target_redis_password}: password for connecting to the target DCS Redis instance.

    In the command output, obtain the IP addresses and ports of all nodes whose flags contain master.
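
    The output below is only a sketch of what cluster nodes may return; the node IDs, IP addresses, and slot ranges are hypothetical examples. For each line flagged master, record the IP address and port that appear before the @ character.

      <node-id-1> 192.168.0.5:6379@16379 master - 0 1700000000000 1 connected 0-5460
      <node-id-2> 192.168.0.6:6379@16379 master - 0 1700000000000 2 connected 5461-10922
      <node-id-3> 192.168.0.7:6379@16379 master - 0 1700000000000 3 connected 10923-16383
      <node-id-4> 192.168.0.8:6379@16379 slave <node-id-1> 0 1700000000000 1 connected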

  2. Install redis-shake on the prepared Huawei Cloud ECS.
    1. Log in to the Huawei Cloud ECS.
    2. Download redis-shake on the Huawei Cloud ECS. Version 2.0.3 is used as an example. You can use other redis-shake versions as required.
      wget https://github.com/tair-opensource/RedisShake/releases/download/release-v2.0.3-20200724/redis-shake-v2.0.3.tar.gz
    3. Decompress the redis-shake file.
      tar -xvf redis-shake-v2.0.3.tar.gz

    If the source Redis is deployed in a data center intranet, install redis-shake on a server in that intranet, export the data there, and then upload the data to the cloud server as instructed in the following steps.
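
    The exported RDB files can be copied to the ECS with any file transfer tool. The following is a minimal sketch using scp; the file name, user, path, and ECS address are hypothetical examples.

      # Copy a local RDB backup to the ECS that runs redis-shake (example address and path).
      scp ./dump.rdb root@192.168.0.100:/root/redis-shake-v2.0.3/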

  3. Export the RDB backup file on the console of the source Redis. If the RDB file cannot be exported, contact the customer service of the source cloud.
  4. Import the RDB file.
    1. Upload the RDB file (or files) to the cloud server. The cloud server must be able to connect to the target DCS instance.
    2. Edit the redis-shake configuration file redis-shake.conf.
      vim redis-shake.conf
      Add the following information about all the masters of the target:
      target.type = cluster
      # If there is no password, skip the following parameter.
      target.password_raw = {target_redis_password}
      # IP addresses and port numbers of all masters of the target instance, which are separated by semicolons (;).
      target.address = {master1_ip}:{master1_port};{master2_ip}:{master2_port};...;{masterN_ip}:{masterN_port}
      # List the RDB files to be imported, separated by semicolons (;).
      rdb.input = {local_dump.0};{local_dump.1};{local_dump.2};{local_dump.3}

      Press Esc to exit editing mode, enter :wq!, and press Enter to save the configuration and exit the editor.
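
      The following is a filled-in sketch of these parameters, assuming a target cluster with three masters and three exported RDB files; the addresses, password, and file names are hypothetical examples.

      target.type = cluster
      target.password_raw = ExamplePassword
      # Masters obtained in step 1, separated by semicolons (;).
      target.address = 192.168.0.5:6379;192.168.0.6:6379;192.168.0.7:6379
      # RDB files to be imported, separated by semicolons (;).
      rdb.input = ./dump.0.rdb;./dump.1.rdb;./dump.2.rdb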

    3. Run the following command to import the RDB file to the target instance:
      ./redis-shake -type restore -conf redis-shake.conf

      If the following information is displayed in the execution log, the backup file is imported successfully:

      Enabled http stats, set status (incr), and wait forever.
  5. Verify the migration.

    After the import is complete, access the target DCS Redis Cluster instance using redis-cli and run the info command. Check the number of keys in the Keyspace section to confirm that the data has been fully imported, as shown in the example below.
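
    The following is a minimal sketch of the check; the connection parameters are the same placeholders as in step 1. On a Redis Cluster, the info command returns statistics only for the node you are connected to, so you may need to repeat the check on each master.

      redis-cli -h {target_redis_address} -p {target_redis_port} -a {target_redis_password} info keyspace
      # Example output (key count is hypothetical):
      # # Keyspace
      # db0:keys=12345,expires=0,avg_ttl=0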

    If the data has not been fully imported, run the flushall or flushdb command to clear the data in the target instance, and then import the data again.