Proxy Cluster Redis

DCS for Redis provides two types of cluster instances: Proxy Cluster and Redis Cluster. Proxy Cluster is compatible with Redis 3.0, 4.0, and 5.0, and uses Linux Virtual Server (LVS) and proxies to achieve high availability. Redis Cluster is the native distributed implementation of Redis and is compatible with Redis 4.0 and 5.0.

Read/write splitting is supported by Redis Clusters (Redis 4.0 and 5.0) but not by Proxy Clusters (Redis 3.0, 4.0, and 5.0). For more information, see the documentation on DCS support for read/write splitting.

This section describes Proxy Cluster DCS Redis 3.0, 4.0, and 5.0 instances.

  • DCS for Redis 3.0 is no longer provided. You can use DCS for Redis 4.0 or 5.0 instead.
  • You cannot upgrade the Redis version for an instance. For example, a Proxy Cluster DCS Redis 3.0 instance cannot be upgraded to a Proxy Cluster DCS Redis 4.0 or 5.0 instance. If your service requires the features of higher Redis versions, create a DCS Redis instance of a higher version and then migrate data from the old instance to the new one.
  • A Proxy Cluster instance can be connected in the same way as a single-node or master/standby instance, without any special settings on the client. Use the IP address or domain name of the instance; you do not need to know or use the proxy or shard addresses. A connection sketch follows this list.
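
For example, with the open-source Jedis client for Java, connecting to a Proxy Cluster instance looks exactly like connecting to a single-node instance. This is a minimal sketch; the address, port, and password are placeholders:

    import redis.clients.jedis.Jedis;

    public class ProxyClusterDemo {
        public static void main(String[] args) {
            // Connect to the instance address (IP or domain name), exactly as for
            // a single-node or master/standby instance. Values are placeholders.
            try (Jedis jedis = new Jedis("192.168.0.100", 6379)) {
                jedis.auth("yourPassword"); // omit if password-free access is enabled
                jedis.set("welcome", "Hello, DCS for Redis!");
                System.out.println(jedis.get("welcome"));
            }
        }
    }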

Proxy Cluster DCS Redis 3.0 Instances

Proxy Cluster DCS Redis 3.0 instances are based on x86, are compatible with open-source Codis, and come in specifications ranging from 64 GB to 1024 GB, meeting the requirements of millions of concurrent connections and massive data caching. Distributed data storage and access are implemented by DCS, requiring no extra development or maintenance work.

Each Proxy Cluster instance consists of load balancers, proxies, cluster managers, and shards.

Table 1 Specifications of Proxy Cluster DCS Redis 3.0 instances

Total Memory    Proxies    Shards
64 GB           3          8
128 GB          6          16
256 GB          8          32
512 GB          16         64
1024 GB         32         128

Figure 1 Architecture of a Proxy Cluster DCS Redis 3.0 instance

Architecture description:

  • VPC

    The VPC in which all nodes of the instance run.

    If public access is not enabled for the instance, ensure that the client and the instance are in the same VPC and configure security group rules for the VPC.

    If public access is enabled for the instance, the client can be deployed outside of the VPC to access the instance through the EIP bound to the instance.

    For more information, see Public Access to a DCS Redis 3.0 Instance and How Do I Configure a Security Group?

  • Application

    The client used to access the instance.

    DCS Redis instances can be accessed using open-source clients. For examples of accessing DCS instances with different programming languages, see Accessing a DCS Redis Instance.

  • LB-M/LB-S

    The load balancers, which are deployed in master/standby HA mode. The connection addresses (IP address:port and domain name:port) of the cluster DCS Redis instance are the addresses of the load balancers.

  • Proxy

    The proxy server used to achieve high availability and process high-concurrency client requests.

    You can connect to a Proxy Cluster instance at the IP addresses of its proxies.

  • Redis shard

    A shard of the cluster.

    Each shard consists of a pair of master/standby nodes. If the master node becomes faulty, the standby node automatically takes over cluster services. Client connections may be briefly interrupted during the switchover; see the retry sketch after this list.

    If both the master and standby nodes of a shard are faulty, the cluster can still provide services, but the data on the faulty shard is inaccessible.

  • Cluster manager

    The cluster configuration managers, which store configurations and partitioning policies of the cluster. You cannot modify the information about the configuration managers.
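
The following sketch shows one way a client can tolerate the brief connection drop during a master/standby switchover: retry the operation with backoff when the connection fails. It assumes the Jedis client; the address, password, key, and retry parameters are illustrative:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.exceptions.JedisConnectionException;

    public class FailoverAwareWrite {
        public static void main(String[] args) throws InterruptedException {
            int maxRetries = 3;
            for (int attempt = 1; attempt <= maxRetries; attempt++) {
                try (Jedis jedis = new Jedis("192.168.0.100", 6379)) { // placeholder address
                    jedis.auth("yourPassword");                        // placeholder password
                    jedis.set("order:1001", "pending");
                    break; // success
                } catch (JedisConnectionException e) {
                    // A switchover briefly drops connections; back off and retry.
                    if (attempt == maxRetries) {
                        throw e;
                    }
                    Thread.sleep(1000L * attempt);
                }
            }
        }
    }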

Proxy Cluster DCS Redis 4.0 and 5.0 Instances

Proxy Cluster DCS Redis 4.0 and 5.0 instances are provided only in some regions.

Proxy Cluster DCS Redis 4.0 and 5.0 instances are built on open-source Redis 4.0 and 5.0 and are compatible with open-source Codis. They provide multiple large-capacity specifications ranging from 4 GB to 1024 GB and support the x86 and Arm CPU architectures.

Proxy Cluster instances do not support shard and replica customization. By default, each shard has two replicas. Table 2 lists the shard configuration for different instance specifications.

Memory per shard = instance specification/number of shards. For example, if a 48 GB instance has 6 shards, each shard is 48 GB/6 = 8 GB.
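
The same arithmetic can be checked programmatically; this small sketch reproduces two rows of Table 2:

    public class ShardMemory {
        public static void main(String[] args) {
            // Memory per shard = total instance memory / number of shards.
            System.out.printf("4 GB, 3 shards: %.2f GB per shard%n", 4.0 / 3);   // 1.33
            System.out.printf("48 GB, 6 shards: %.2f GB per shard%n", 48.0 / 6); // 8.00
        }
    }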

Table 2 Specifications of Proxy Cluster DCS Redis 4.0 and 5.0 instances

Total Memory    Shards    Memory per Shard (GB)
4 GB            3         1.33
8 GB            3         2.67
16 GB           3         5.33
24 GB           3         8
32 GB           3         10.67
48 GB           6         8
64 GB           8         8
96 GB           12        8
128 GB          16        8
192 GB          24        8
256 GB          32        8
384 GB          48        8
512 GB          64        8
768 GB          96        8
1024 GB         128       8

Figure 2 Architecture of a Proxy Cluster DCS Redis 4.0 or 5.0 instance

Architecture description:

  • VPC

    The VPC in which all nodes of the instance run.

    The client and the cluster instance must be in the same VPC, and the instance whitelist must allow access from the client IP address.

  • Application

    The client used to access the instance.

    DCS Redis instances can be accessed using open-source clients. For examples of accessing DCS instances with different programming languages, see Accessing a DCS Redis Instance.

  • VPC endpoint service

    You can configure your DCS Redis instance as a VPC endpoint service and access the instance at the VPC endpoint service address.

    The IP address or domain name of the Proxy Cluster DCS Redis instance is the address of the VPC endpoint service.

  • ELB

    The load balancers are deployed in cluster HA mode and support multi-AZ deployment.

  • Proxy

    The proxy server used to achieve high availability and process high-concurrency client requests.

    You cannot connect to a Proxy Cluster DCS Redis 4.0 or 5.0 instance at the IP addresses of its proxies; connect through the instance address instead, as in the pooled-connection sketch after this list.

  • Redis Cluster

    A shard of the cluster.

    Each shard consists of a pair of master/replica nodes. If the master node becomes faulty, the replica node automatically takes over cluster services.

    If both the master and replica nodes of a shard are faulty, the cluster can still provide services, but the data on the faulty shard is inaccessible.
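
As with Redis 3.0 Proxy Cluster instances, clients connect only through the instance address (here, the VPC endpoint service address). The following pooled-connection sketch assumes the Jedis client; the address, port, password, and pool sizes are placeholders:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    public class PooledProxyClusterAccess {
        public static void main(String[] args) {
            JedisPoolConfig config = new JedisPoolConfig();
            config.setMaxTotal(64); // illustrative cap on concurrent connections
            config.setMaxIdle(16);

            // Connect through the instance (VPC endpoint service) address only;
            // individual proxy IPs are not reachable on Redis 4.0/5.0 Proxy Clusters.
            try (JedisPool pool = new JedisPool(config, "redis-demo.example.com", 6379,
                    2000, "yourPassword")) { // placeholder address and password
                try (Jedis jedis = pool.getResource()) {
                    jedis.incr("request:counter");
                }
            }
        }
    }

A pool of connections against the single instance address lets the proxies spread high-concurrency request load across the shards without any cluster-aware logic in the client.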