Optimizing Cache Performance
This section provides suggestions and cases for optimizing cache performance.
Suggestions on Using Redis
Following these suggestions helps reduce Redis instability and exceptions and keeps the system stable.
Service Usage
| Principle | Description | Level | Remarks |
|---|---|---|---|
| Nearest deployment for low latency | If the deployment sites are far from each other (across regions) or the latency is high (for example, the server connects to the instance over a public network), read/write performance is greatly affected. | Required | If your service is latency-sensitive, do not create cross-AZ DCS Redis instances. |
| Cold/hot data separation | Store frequently accessed data (hot data) in Redis and less frequently accessed data (cold data) in databases such as MySQL or Elasticsearch. | Suggested | Infrequently accessed data kept in memory occupies Redis space without accelerating access. |
| Service data differentiation | Store unrelated service data in different Redis instances. | Required | This prevents services from affecting each other, prevents single instances from becoming too large, and enables quick service recovery in case of faults. |
|  | Do not use the SELECT command for multi-DB on a single instance. | Required | Multi-DB on a single Redis instance provides weak isolation and is no longer actively developed by open-source Redis. Do not depend on this feature. |
| Proper memory eviction policy | Configure an eviction policy so that Redis can still function when memory is unexpectedly used up. | Required | The default policy is volatile-lru. Select a policy as required (see the configuration sketch after this table). |
| Redis use as cache | Do not rely too much on Redis transactions. | Suggested | After a transaction is executed, it cannot be rolled back. |
|  | If data is abnormal, the cache can be cleared to restore data. | Required | Redis has no mechanism or protocol that guarantees strong data consistency, so services must not over-rely on the accuracy of Redis data. |
|  | When using Redis as cache, set expiration on all keys. Do not use Redis as a database. | Required | Set expiration as required; a longer expiration is not necessarily better. |
| Cache breakdown prevention | Use Redis together with a local cache. Store frequently accessed data in the local cache and regularly refresh it asynchronously. | Suggested | See the cache-aside sketch after this table. |
| Cache penetration prevention | Requests on non-critical paths are passed through to the database. Limit the rate of access to the database. | Suggested | - |
|  | If the requested data is not found in Redis, access read-only DB instances instead of the primary database. You can use domain names to connect to read-only DB instances. | Suggested | The idea is to keep such requests off the primary database. Use domain names to connect to multiple read-only DB instances, and add such instances for emergency handling if a fault occurs. |
| No use as a message queue | In pub/sub scenarios, do not use Redis as a message queue. | Required | - |
| Proper specification selection | If service growth causes increases in Redis requests, use Proxy Cluster or Redis Cluster instances. | Required | Scaling up single-node and master/standby instances only expands memory and bandwidth; it does not enhance compute capability. |
|  | In production, do not use single-node instances. Use master/standby or cluster instances. | Required | - |
|  | Do not use large specifications for master/standby instances. | Suggested | Redis forks a process when rewriting the AOF or running the BGSAVE command. If the memory is too large, responses will be slow. |
| Degradation or disaster recovery measures | When a cache miss occurs, obtain the data from the database. Alternatively, when a fault occurs, allow another Redis instance to take over services automatically. | Suggested | - |
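The following sketch illustrates the cache-aside pattern suggested above: a small in-process local cache absorbs hot-key reads (cache breakdown prevention), Redis serves as the shared cache with a jittered expiration, and the database is the fallback on a miss, with an empty marker cached for nonexistent data (basic cache penetration prevention). It uses the redis-py client; names such as `load_from_db()`, `LOCAL_TTL`, and the host are illustrative assumptions, not part of DCS.

```python
# Minimal cache-aside sketch: local cache -> Redis -> database fallback.
import json
import random
import time

import redis

r = redis.Redis(host="redis.example.com", port=6379)

LOCAL_TTL = 5        # seconds a value stays in the local cache (assumption)
REDIS_TTL = 600      # base Redis expiration in seconds (assumption)
_local_cache = {}    # {key: (expires_at, value)}


def load_from_db(key):
    """Placeholder for the real database query (hypothetical helper)."""
    return {"key": key, "value": "from-db"}


def get_with_cache(key):
    # 1. Local cache: absorbs hot-key traffic and protects Redis.
    entry = _local_cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]

    # 2. Redis: shared cache layer.
    raw = r.get(key)
    if raw is not None:
        value = json.loads(raw)
    else:
        # 3. Database fallback on a cache miss. An empty marker is cached too,
        #    so repeated lookups of nonexistent data do not keep hitting the DB.
        value = load_from_db(key) or {}
        ttl = REDIS_TTL + random.randint(0, 60)  # jitter to spread expiry
        r.set(key, json.dumps(value), ex=ttl)

    _local_cache[key] = (time.time() + LOCAL_TTL, value)
    return value
```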
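A minimal sketch for the eviction-policy suggestion: checking the current policy and, where the instance allows it, changing it with CONFIG SET. On managed DCS instances the CONFIG command may be restricted, in which case the parameter is changed in the console instead; the host name is an assumption.

```python
# Check and (where permitted) adjust the memory eviction policy.
import redis

r = redis.Redis(host="redis.example.com", port=6379)

# e.g. {'maxmemory-policy': 'volatile-lru'}
print(r.config_get("maxmemory-policy"))

# Evict any key by LRU once maxmemory is reached (choose per your data model).
r.config_set("maxmemory-policy", "allkeys-lru")
```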
Data Design
| Category | Principle | Description | Level | Remarks |
|---|---|---|---|---|
| Keys | Keep the format consistent. | Use the service name or database name as the prefix, separated by colons (:). Ensure that key names have clear meanings. | Suggested | For example: service name:sub-service name:ID. |
|  | Minimize the key length. | Minimize the key length without compromising clarity of meaning. Abbreviate common words, for example, user to u and messages to msg. | Suggested | Use up to 128 bytes. The shorter, the better. |
|  | Do not use special characters except braces ({}). | Do not use special characters such as spaces, line breaks, single or double quotation marks, or other escape characters. | Suggested | Redis uses braces ({}) to signify hash tags. For cluster instances, braces in key names must be used correctly to avoid unbalanced shards. |
| Values | Use appropriate value sizes. | Keep the value of a key within 10 KB. | Suggested | Large values may cause unbalanced shards, hot keys, traffic or CPU usage surges, and scaling or migration failures. These problems can be avoided by proper design. |
|  | Use an appropriate number of elements in each key. | Do not include too many elements in a Hash, Set, or List. It is recommended that each key contain up to 5,000 elements. | Suggested | The time complexity of some commands, such as HGETALL, is directly related to the number of elements in a key. If commands with O(N) or higher time complexity are frequently executed on keys with many elements, there may be slow requests, unbalanced shards, or hot keys. |
|  | Use appropriate data types. | Choose data types that save memory and bandwidth. | Suggested | For example, to store multiple attributes of a user, you can use multiple String keys, such as set u:1:name "X" and set u:1:age 20. To save memory, you can instead use the HMSET command to set multiple fields of a single Hash key (see the sketch after this table). |
|  | Set appropriate timeouts. | Do not set a large number of keys to expire at the same time. | Suggested | When setting key expiration, add or subtract a random offset from the base expiry time to prevent a large number of keys from expiring at the same moment; otherwise, CPU usage will spike at that time (see the jitter sketch after this table). |
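As an illustration of the key-naming and data-type suggestions above, the following redis-py sketch stores a user's attributes either as separate String keys or as a single Hash. The `user:profile` prefix and the attribute values are illustrative assumptions.

```python
# Storing user attributes: multiple String keys vs. a single Hash.
import redis

r = redis.Redis(host="redis.example.com", port=6379)

# Option 1: one String key per attribute (key format: service:sub-service:ID).
r.set("user:profile:1:name", "X")
r.set("user:profile:1:age", "20")

# Option 2: one Hash per user, which usually saves memory and round trips.
r.hset("user:profile:1", mapping={"name": "X", "age": "20"})
print(r.hgetall("user:profile:1"))
```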
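A short sketch of the expiration suggestion: adding a random offset to a base TTL so that a batch of keys does not expire at the same moment. `BASE_TTL`, the jitter range, and the key names are illustrative assumptions.

```python
# Spread key expirations with a random offset to avoid a synchronized expiry.
import random

import redis

r = redis.Redis(host="redis.example.com", port=6379)

BASE_TTL = 3600  # one hour (assumption)

for i in range(1000):
    ttl = BASE_TTL + random.randint(-300, 300)   # +/- 5 minutes of jitter
    r.set(f"session:token:{i}", "value", ex=ttl)
```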
Command Use
| Principle | Description | Level | Remarks |
|---|---|---|---|
| Exercise caution when using commands with time complexity of O(N). | Pay attention to the value of N for commands whose time complexity is O(N). If N is too large, Redis will be blocked and CPU usage will be high. | Required | For example, the HGETALL, LRANGE, SMEMBERS, ZRANGE, and SINTER commands consume a large amount of CPU when there are many elements. Alternatively, use the SCAN family of commands, such as HSCAN, SSCAN, and ZSCAN (see the sketch after this table). |
| Do not use high-risk commands. | Do not use high-risk commands such as FLUSHALL, KEYS, and HGETALL, or rename them. | Required | - |
| Exercise caution when using the SELECT command. | Redis does not strongly support multi-DB. Because Redis is single-threaded, databases on the same instance interfere with each other. Use multiple Redis instances instead of multiple databases on one instance. | Suggested | - |
| Use batch operations to improve efficiency. | For batch operations, use the MGET or MSET command or pipelining to reduce round trips, but do not include too many elements in one batch. | Suggested | MGET and MSET are single, atomic commands that apply only to Strings, whereas pipelining batches arbitrary commands in one round trip and is not atomic (see the pipelining sketch after this table). |
| Do not use time-consuming code in Lua scripts. | The timeout of Lua scripts is 5s, so avoid long scripts. | Required | Examples of long scripts: scripts containing time-consuming sleep statements or long loops. |
| Do not use random functions in Lua scripts. | When invoking a Lua script, do not use random functions to specify keys. Otherwise, the execution results will differ between the master and replica nodes, causing data inconsistency. | Required | - |
| Follow the rules for using Lua on cluster instances. | When running Lua scripts on cluster instances, comply with the cluster constraints. | Required | For example, all keys accessed by a script must hash to the same slot; use hash tags if necessary. |
| Optimize multi-key commands such as MGET and HMGET with parallel processing and non-blocking I/O. | Some clients do not optimize these commands: the keys are processed one by one before the values are returned in a batch. This is slow and can be optimized through pipelining. | Suggested | For example, running the MGET command on a cluster using Lettuce is dozens of times faster than using Jedis, because Lettuce uses pipelining and non-blocking I/O while Jedis does not provide this optimization itself. To achieve the same with Jedis, you need to implement slot grouping and pipelining yourself. |
| Do not use the DEL command to directly delete big keys. | Deleting big keys, especially big Sets, with DEL blocks other requests. | Required | In Redis 4.0 and later, use the non-blocking UNLINK command to delete big keys safely. In versions earlier than Redis 4.0, shrink the key in batches first, for example, by combining HSCAN with HDEL, and then delete it (see the deletion sketch after this table). |
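The following sketch shows the SCAN-family alternative mentioned above: iterating keys and a large Hash incrementally instead of calling KEYS or HGETALL. The key pattern `order:*` and the Hash name are illustrative assumptions.

```python
# Iterate keys and large Hashes incrementally with the SCAN family of commands.
import redis

r = redis.Redis(host="redis.example.com", port=6379)

# Incrementally walk keys matching a pattern (instead of KEYS order:*).
for key in r.scan_iter(match="order:*", count=100):
    print(key)

# Incrementally walk a large Hash (instead of HGETALL).
for field, value in r.hscan_iter("order:index", count=100):
    print(field, value)
```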
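A sketch of the batch-operation suggestion, contrasting MSET/MGET with a non-transactional pipeline in redis-py. The key names and batch size are illustrative assumptions.

```python
# Batch writes and reads: MSET/MGET vs. a non-transactional pipeline.
import redis

r = redis.Redis(host="redis.example.com", port=6379)

# MSET/MGET: single atomic commands, Strings only.
r.mset({"cfg:a": "1", "cfg:b": "2", "cfg:c": "3"})
print(r.mget("cfg:a", "cfg:b", "cfg:c"))

# Pipeline: batches arbitrary commands in one round trip; not atomic
# unless transaction=True. Keep each batch reasonably small.
pipe = r.pipeline(transaction=False)
for i in range(100):
    pipe.hset(f"cart:{i}", mapping={"sku": str(i), "qty": "1"})
results = pipe.execute()
```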
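A sketch of deleting a big Hash safely: UNLINK on Redis 4.0 and later, and batched HSCAN/HDEL on older versions. The helper name `delete_big_hash()` and the key name `big:hash` are illustrative assumptions.

```python
# Delete a big key without blocking Redis.
import redis

r = redis.Redis(host="redis.example.com", port=6379)


def delete_big_hash(name, batch=500):
    major = int(r.info("server")["redis_version"].split(".")[0])
    if major >= 4:
        r.unlink(name)               # non-blocking, memory freed asynchronously
        return
    # Pre-4.0: remove fields in small batches, then drop the (now small) key.
    fields = []
    for field, _ in r.hscan_iter(name, count=batch):
        fields.append(field)
        if len(fields) >= batch:
            r.hdel(name, *fields)
            fields = []
    if fields:
        r.hdel(name, *fields)
    r.delete(name)


delete_big_hash("big:hash")
```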