Self-Hosted Redis Migration with redis-port (RDB)
Introduction
redis-port is an open-source batch data transmission tool used for data synchronization between Redis nodes. redis-port provides the following functions:
- dump
Generates a cache snapshot and exports the cached data as an RDB file.
- decode
Parses an exported RDB file into a human-readable format.
- restore
Imports the data in an RDB file into a Redis instance.
- sync
Synchronizes the data in a Redis instance to another instance.
In this section, redis-port V2.0-beta (Linux) is used to describe how to migrate Redis data from HUAWEI CLOUD ECS to DCS.
Step 1: Installing redis-port
Install redis-port on both the ECS used for data export and the ECS used for data import. Download and decompress the tool package; redis-port can be used directly without compilation.
wget https://github.com/CodisLabs/redis-port/releases/download/v2.0-beta/redis-port-v2.0-beta-go1.10.1-linux.tar.gz
tar -xvf redis-port-v2.0-beta-go1.10.1-linux.tar.gz
Step 2: Exporting Data
redis-dump -n 3 -m {password}@{source-redis-host}:{port} -o {outputfile.rdb}
Note: -n specifies the number of CPUs used to process the export task concurrently.
- The command format may vary depending on the version of installed redis-port. For details, see the related help document.
- During data export and import, redis-port does not support certain special characters in passwords, such as the at sign (@) and number sign (#). If any connection information (such as passwords, instance addresses, and port numbers) fails to be parsed, remove the special characters from passwords temporarily.
- When exporting Redis Cluster data, individually export the data of each node in the cluster, and then import the data node by node. For details, see What Are the Constraints and Precautions for Migrating Redis Data to a Cluster Instance?
- redis-port-v2.0-beta-go1.9.5 and earlier versions do not support migration of sorted sets. Sorted sets will be converted to sets after being imported using redis-port of such versions.
- The SYNC command is not supported by DCS Redis 4.0 or 5.0 instances and cannot be used to export RDB files from them. To back up master/standby instance data, use the backup and restoration function provided on the DCS console.
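For a Redis Cluster source, the per-node export described above can be sketched as a small shell loop. The node addresses, password, and output file names below are placeholders for illustration; the loop only prints the commands so you can review them before running the actual export:

```shell
#!/bin/sh
# Hypothetical list of master nodes in the source Redis Cluster;
# replace with the actual IP:port of each node.
NODES="192.168.0.1:6379 192.168.0.2:6379 192.168.0.3:6379"
PASSWORD="yourpassword"   # assumed; drop the "${PASSWORD}@" prefix if no password is set

CMDS=""
for node in $NODES; do
    # One RDB file per node, named after the node address.
    out="dump-$(echo "$node" | tr ':' '-').rdb"
    cmd="./redis-dump -n 3 -m ${PASSWORD}@${node} -o ${out}"
    CMDS="${CMDS}${cmd}
"
done
# Dry run: print the commands instead of executing them.
printf '%s' "$CMDS"
```

Each node's data ends up in its own RDB file, which is then imported node by node in Step 4.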
Parameter description:
root@redis-nodelete:~/port/redis-port-v2.0-beta-go1.10.1-linux# ./redis-dump --help
Usage:
redis-dump [--ncpu=N] (--master=MASTER|MASTER) [--output=OUTPUT] [--aof=FILE]
redis-dump --version
Options:
-n N, --ncpu=N Set runtime.GOMAXPROCS to N.
-m MASTER, --master=MASTER The master redis instance ([auth@]host:port).
-o OUTPUT, --output=OUTPUT Set output file. [default: /dev/stdout].
-a FILE, --aof=FILE Also dump the replication backlog.
Examples:
$ redis-dump 127.0.0.1:6379 -o dump.rdb
$ redis-dump 127.0.0.1:6379 -o dump.rdb -a
$ redis-dump -m passwd@192.168.0.1:6380 -o dump.rdb -a dump.aof
Example:
root@redis-nodelete:~/port/redis-port-v2.0-beta-go1.10.1-linux# ./redis-dump -n 3 -m Heru+123@192.168.0.196:6379 -o save196.rdb
2018/03/26 09:10:28 dump.go:68: [INFO] dump: master = "Heru+123@192.168.0.196:6379", output = "save196.rdb", aoflog = ""
2018/03/26 09:10:29 dump.go:111: [INFO] dump: runid = "a62dda896a855aef4a5429fd36fc4268882bc715", offset = 204541
2018/03/26 09:10:29 dump.go:112: [INFO] dump: rdb file = 46721058 (44.56mb)
2018/03/26 09:10:29 dump.go:151: [INFO] dump: (w,a) = (rdb,aof)
2018/03/26 09:10:29 dump.go:181: [INFO] dump: rdb = 46721058 - [100.00%] (w,a)=(46721058,0) ~ (44.56mb,0)
2018/03/26 09:10:29 dump.go:185: [INFO] dump: done
root@redis-nodelete:~/port/redis-port-v2.0-alpha-go1.9.2-linux#
Step 3: Transmitting Data to HUAWEI CLOUD ECS
- To reduce transmission time, compress the RDB file before transmission.
- Upload the compressed file to HUAWEI CLOUD ECS using an appropriate mode (for example, SFTP mode).
Ensure that the ECS has sufficient disk space for data file decompression, and can communicate with the DCS instance. Generally, the ECS and DCS instance are configured to belong to the same VPC and subnet, and the configured security group rules do not restrict access ports. For details on how to configure a security group, see How Do I Configure a Security Group?
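The compress-and-transfer step might look like the following. The file name comes from the export example above, and the SFTP host and user are placeholders; a placeholder file is created here so the compression commands can be tried locally:

```shell
#!/bin/sh
# Placeholder standing in for the RDB file produced in Step 2.
: > save196.rdb

# Compress the RDB file before transfer to save time and bandwidth.
tar -czf save196.tar.gz save196.rdb

# Upload to the HUAWEI CLOUD ECS, e.g. over SFTP (user and host are placeholders):
#   sftp user@<ecs-ip>
#   sftp> put save196.tar.gz

# On the ECS, decompress before importing:
#   tar -xzf save196.tar.gz
```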
Step 4: Importing Data
redis-restore -n {N} -i {outputfile.rdb} -t {password}@{dcs_instance_address}:{port} [--unixtime-in-milliseconds="yyyy-MM-dd hh:mm:ss"] [--db={DB_number}]
By specifying the --db parameter, you can import only the cached data of the specified DB in the file. -n specifies the number of CPUs used to process the import task concurrently.
Parameter description:
root@redis-nodelete:~/port/redis-port-v2.0-beta-go1.10.1-linux# ./redis-restore --help
Usage:
redis-restore [--ncpu=N] [--input=INPUT|INPUT] --target=TARGET [--aof=FILE] [--db=DB] [--unixtime-in-milliseconds=EXPR]
redis-restore --version
Options:
-n N, --ncpu=N Set runtime.GOMAXPROCS to N.
-i INPUT, --input=INPUT Set input rdb encoded file.
-t TARGET, --target=TARGET The target redis instance ([auth@]host:port).
-a FILE, --aof=FILE Also restore the replication backlog.
--db=DB Accept db = DB, default is *.
--unixtime-in-milliseconds=EXPR Update expire time when restoring objects from RDB.
Examples:
$ redis-restore dump.rdb -t 127.0.0.1:6379
$ redis-restore -i dump.rdb -t 127.0.0.1:6379 --aof dump.aof --db=1
$ redis-restore -t 127.0.0.1:6379 --aof dump.aof
$ redis-restore -t 127.0.0.1:6379 --db=0
$ redis-restore -i dump.rdb -t 127.0.0.1:6379 --unixtime-in-milliseconds="@209059200000" // ttlms += (now - '1976-08-17')
$ redis-restore -i dump.rdb -t 127.0.0.1:6379 --unixtime-in-milliseconds="+1000" // ttlms += 1s
$ redis-restore -i dump.rdb -t 127.0.0.1:6379 --unixtime-in-milliseconds="-1000" // ttlms -= 1s
$ redis-restore -i dump.rdb -t 127.0.0.1:6379 --unixtime-in-milliseconds="1976-08-17 00:00:00" // ttlms += (now - '1976-08-17')
Example:
root@redis-nodelete:~/port/redis-port-v2.0-beta-go1.10.1-linux# ./redis-restore -i save196.rdb -t Heru+123@192.168.0.171:6379
2018/03/26 09:15:33 restore.go:70: [INFO] restore: input = "save196.rdb", aoflog = "" target = "Heru+123@192.168.0.171:6379"
2018/03/26 09:15:33 restore.go:126: [INFO] restore: (r,f,s/a,f,s) = (rdb,rdb.forward,rdb.skip/aof,rdb.forward,rdb.skip)
2018/03/26 09:15:34 restore.go:155: [INFO] restore: size = 46721058 - [ 49.94%, 0.00%] (r,f,s/a,f,s)=(23330816,0,599496/0,0,0) ~ (22.25mb,-,-/0,-,-)
2018/03/26 09:15:35 restore.go:155: [INFO] restore: size = 46721058 - [ 99.31%, 0.00%] (r,f,s/a,f,s)=(46399488,12558,1179884/0,0,0) ~ (44.25mb,-,-/0,-,-)
2018/03/26 09:15:35 restore.go:155: [INFO] restore: size = 46721058 - [100.00%, 0.00%] (r,f,s/a,f,s)=(46721058,20000,1179884/0,0,0) ~ (44.56mb,-,-/0,-,-)
2018/03/26 09:15:35 restore.go:159: [INFO] restore: done
root@redis-nodelete:~/port/redis-port-v2.0-alpha-go1.9.2-linux#
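When a Redis Cluster was exported node by node, the import can mirror that export with a loop over the per-node RDB files. The target address, password, and file names below are placeholders (adjust the target per file if the destination is also a cluster); the loop only prints the commands for review:

```shell
#!/bin/sh
# Placeholders: replace with your target instance address, password, and
# the RDB files produced by the per-node export.
TARGET="192.168.0.171:6379"
PASSWORD="yourpassword"
FILES="dump-192.168.0.1-6379.rdb dump-192.168.0.2-6379.rdb"

CMDS=""
for f in $FILES; do
    cmd="./redis-restore -n 3 -i ${f} -t ${PASSWORD}@${TARGET}"
    CMDS="${CMDS}${cmd}
"
done
# Dry run: print the commands instead of executing them.
printf '%s' "$CMDS"
```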
Step 5: Verifying Migration
After the data is imported successfully, access the DCS instance and run the INFO command to check whether the data has been imported as required.
If the data import fails, analyze the cause, modify the data import statement, run the FLUSHALL or FLUSHDB command to clear the cached data in the instance, and then import the data again.
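One quick check is to sum the per-DB keys fields in the Keyspace section of the INFO output and compare the totals on the source and target. The snippet below runs on a captured sample of that output (the sample values are illustrative); in practice, feed it the output of redis-cli -h <address> -a <password> INFO Keyspace:

```shell
#!/bin/sh
# Sample "INFO Keyspace" output; in practice, capture this from redis-cli.
INFO_OUTPUT="# Keyspace
db0:keys=20000,expires=0,avg_ttl=0
db3:keys=150,expires=10,avg_ttl=0"

# Extract each "keys=N" count and sum them.
TOTAL=$(printf '%s\n' "$INFO_OUTPUT" \
    | sed -n 's/^db[0-9]*:keys=\([0-9]*\).*/\1/p' \
    | awk '{s+=$1} END {print s}')
echo "total keys: $TOTAL"
```

Running the same check against both instances and comparing the two totals gives a fast sanity check that all keys arrived.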