
Proxy Features

Proxies can route requests, balance workloads, and implement failover, so clients do not need to implement the complex logic of Redis Cluster; you can use a proxy cluster instance as if it were a single-node Redis instance. Understanding how proxies route requests and process specific commands helps you build a more efficient service architecture.

Introduction to Proxy

Table 1 Functions

Capability: Compatibility with standalone Redis nodes, master/replica Redis instances, Redis Sentinel, and Redis Cluster

Description: A proxy cluster instance is compatible with standalone Redis nodes, master/replica Redis instances, Redis Sentinel, and Redis Cluster.

If standard (primary/standby) instances can no longer keep up with service growth, you can migrate data from them to proxy cluster instances. Before migrating, check how multi-key commands are used and whether keys can be split, and determine whether the service needs to be refactored. This can greatly reduce costs.

Capability: Route forwarding

Description: Proxies establish persistent connections with shards, receive requests from clients, forward the requests to the corresponding shards, and then return the results to the clients.

  • For a command with a single key, proxies route the command to the shard that owns the key's slot, based on the cluster routing information.
  • For a command with multiple keys, proxies split the command into multiple commands and send them to the corresponding shards.

If a shard is faulty, proxies automatically update the routing information after HA is triggered on the data nodes. During the failover, proxies retry commands sent to the faulty shard for up to 5 seconds. If the retries fail, a small number of errors may be reported, so the client must implement its own retry mechanism.

Capability: Multiple databases

Description: Open-source Redis Cluster does not support multiple databases. Proxy cluster GeminiDB Redis instances support 1,000 databases by default and allow switching between them with the SELECT command.
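
For illustration only, here is a minimal sketch using a Python client (redis-py) to work with several logical databases through the proxy. The address, port, password, and database numbers are placeholder assumptions, not values from this document.

# Minimal sketch (redis-py): using logical databases through a proxy cluster instance.
# The host, port, and password below are placeholders for your instance settings.
import redis

# Each client is bound to one logical database via the db parameter;
# redis-py issues SELECT on its connections automatically.
db0 = redis.Redis(host="<proxy-or-lb-address>", port=6379,
                  password="<password>", db=0, decode_responses=True)
db7 = redis.Redis(host="<proxy-or-lb-address>", port=6379,
                  password="<password>", db=7, decode_responses=True)

db0.set("greeting", "hello from db 0")
db7.set("greeting", "hello from db 7")
print(db0.get("greeting"))  # hello from db 0
print(db7.get("greeting"))  # hello from db 7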

Connecting to Proxy Cluster Instances

You can access all data in a cluster through any proxy node. However, if you connect to a single proxy node directly, traffic may be unevenly distributed, and a fault on that proxy node may affect service continuity. To ensure high availability, each primary/standby and proxy cluster GeminiDB instance is allocated an exclusive load balancer address, and connecting through the load balancer is recommended (see the connection sketch after the following list).

  • Load balancers provide a unified access entry to the cluster and monitor the health of each proxy node. If a proxy node is faulty, the load balancer automatically removes it from the service pool to ensure high availability.
  • Each proxy node behind a load balancer has the same weight. By default, traffic is distributed based on weighted least connections. For details about how to configure traffic distribution policies, see Traffic Distribution Policies.
  • You can change load balancer specifications. A single load balancer supports a maximum of 10 Gbit/s bandwidth. Maximum bandwidth of a single proxy cluster instance = min(Load balancer assured bandwidth, Node bandwidth × Node count).
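
As a minimal connection sketch (redis-py), the client below connects through the load balancer address and retries transient errors on the client side, in line with the retry recommendation in Table 1. The address, port, password, timeouts, and retry parameters are placeholder assumptions.

# Minimal sketch (redis-py): connect through the load balancer address and retry
# transient errors, since proxies may briefly return errors during shard HA.
# All connection settings below are placeholders.
import time
import redis

client = redis.Redis(
    host="<load-balancer-address>",  # exclusive load balancer address of the instance
    port=6379,
    password="<password>",
    socket_connect_timeout=2,
    socket_timeout=2,
    decode_responses=True,
)

def call_with_retry(func, *args, retries=3, backoff=0.5, **kwargs):
    # Retry a Redis call a few times on connection or timeout errors.
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))

call_with_retry(client.set, "order:1001", "pending")
print(call_with_retry(client.get, "order:1001"))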

Proxy Command Splitting Rules

Compared with native cluster instances, proxy cluster instances can split some multi-key commands, send the keys to the corresponding backend nodes, aggregate the results at the proxy layer, and return them to the client. This simplifies the logic of multi-key operations. The following commands can be split for proxy cluster GeminiDB Redis instances (see the sketch after the list):

  • Key management: DEL, EXISTS, UNLINK, and TOUCH
  • String: MGET and MSET
  • Set: SDIFF, SDIFFSTORE, SINTER, SINTERSTORE, SINTERCARD, SUNION, and SUNIONSTORE
  • Sorted set: ZINTER, ZINTERSTORE, ZINTERCARD, ZUNION, ZUNIONSTORE, ZDIFF, ZDIFFSTORE, and ZRANGESTORE
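
As a sketch (hypothetical key names, redis-py client), the multi-key calls below can be sent to a proxy cluster instance even when the keys hash to different shards; the proxy splits each command and aggregates the replies.

# Sketch: multi-key commands that the proxy splits across shards and aggregates.
# Key names are illustrative and may hash to different slots/shards.
import redis

r = redis.Redis(host="<load-balancer-address>", port=6379,
                password="<password>", decode_responses=True)

# MSET/MGET: the proxy fans the keys out to their shards and merges the replies.
r.mset({"user:1:name": "alice", "user:2:name": "bob", "user:3:name": "carol"})
print(r.mget("user:1:name", "user:2:name", "user:3:name"))

# DEL, EXISTS, UNLINK, and TOUCH with several keys are split the same way.
print(r.exists("user:1:name", "user:2:name", "user:3:name"))  # 3
r.delete("user:1:name", "user:2:name")

# Set algebra such as SINTERSTORE also works across shards.
r.sadd("online:today", "u1", "u2", "u3")
r.sadd("online:yesterday", "u2", "u3", "u4")
r.sinterstore("online:both-days", ["online:today", "online:yesterday"])
print(r.smembers("online:both-days"))  # {'u2', 'u3'}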

Commands in a transaction can also be split. If a transaction contains multi-key commands that cannot be split, hashtags must be added to the keys involved in those commands so that they are placed in the same slot.

Hashtags are recommended in proxy cluster instances to ensure the atomicity and performance of multi-key operations.
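
A minimal sketch of the hashtag approach, assuming a redis-py client and hypothetical key names: keys that share the same {hashtag} are placed in the same slot, so a transaction over them runs on a single shard.

# Sketch: hashtags keep related keys in the same slot, so a transaction that
# touches several of them executes on one shard.
# Key names and the {user:42} hashtag are illustrative.
import redis

r = redis.Redis(host="<load-balancer-address>", port=6379,
                password="<password>", decode_responses=True)

# All three keys share the {user:42} hashtag and therefore the same slot.
with r.pipeline(transaction=True) as pipe:  # wrapped in MULTI/EXEC on execute()
    pipe.hset("{user:42}:profile", mapping={"name": "alice"})
    pipe.rpush("{user:42}:events", "login")
    pipe.incr("{user:42}:login_count")
    pipe.execute()

print(r.hgetall("{user:42}:profile"))  # {'name': 'alice'}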

Proxy Cluster Connections

In normal cases, proxies establish shared persistent connections with shards to process requests efficiently. However, when requests contain the following commands, proxies create extra connections to the shards, and these connections cannot be reused through the shard connection pools. Control the use of these commands to prevent proxy-to-shard connection failures caused by too many connections on a shard (a usage sketch follows the command lists).

Blocking commands: BRPOP, BRPOPLPUSH, BLPOP, BZPOPMAX, BZPOPMIN, BLMPOP, BZMPOP, and BLMOVE

Subscription commands: SUBSCRIBE, SSUBSCRIBE, and PSUBSCRIBE
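
As a usage sketch (redis-py, hypothetical queue and channel names), the pattern below keeps the number of such dedicated connections small: a few consumers use BLPOP with a timeout, and one shared pub/sub connection handles several patterns instead of opening one connection per subscription.

# Sketch: keep the number of blocking/subscription clients small, because each
# of them holds a dedicated proxy-to-shard connection that cannot be pooled.
# Queue and channel names are illustrative.
import redis

r = redis.Redis(host="<load-balancer-address>", port=6379,
                password="<password>", decode_responses=True)

# A blocking consumer; run only a small, fixed number of these in parallel.
def consume_once(queue="jobs", timeout=5):
    item = r.blpop(queue, timeout=timeout)  # returns None on timeout
    if item:
        _, payload = item
        print("got job:", payload)

# One shared pub/sub connection covering several patterns.
pubsub = r.pubsub()
pubsub.psubscribe("notifications.*", "alerts.*")
message = pubsub.get_message(timeout=1.0)  # poll; None if nothing arrived
if message and message["type"] == "pmessage":
    print(message["channel"], message["data"])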