Updated on 2024-06-19 GMT+08:00

Are Services Interrupted During Specification Modification?

Modify instance specifications during off-peak hours.

If a modification fails during peak hours (for example, when memory or CPU usage exceeds 90% or write traffic surges), try again during off-peak hours.

The following tables describe the impact of specification modification.

Change of the Instance Type

Table 1 Instance type change options supported by different DCS instances

Version: Redis 4.0/5.0

Supported type changes:
  • From master/standby or read/write splitting to Proxy Cluster
  • From Proxy Cluster to master/standby or read/write splitting

Precautions:
  1. Before changing the instance type to Proxy Cluster, evaluate the impact on services. For details, see What Are the Constraints on Implementing Multi-DB on a Proxy Cluster Instance? and Command Restrictions.
  2. Memory usage must be less than 70% of the maximum memory of the new flavor.
  3. Some keys may be evicted if the current memory usage exceeds 90% of the total.
  4. After the change, create alarm rules again for the instance.
  5. For instances that are currently master/standby, ensure that their read-only IP address or domain name is not used by your application.
  6. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after the change.
  7. Modify instance specifications during off-peak hours. An instance is temporarily interrupted and remains read-only for about 1 minute during the specification change.

Any instance type changes not listed in the preceding table are not supported. To modify specifications while changing the instance type, see IP Switching.
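The 70% memory rule in precaution 2 above can be checked before requesting a change. The sketch below is illustrative and not part of any DCS API: it assumes you have already read the instance's current usage (for example, the used_memory field of Redis INFO memory) and know the new flavor's maximum memory in bytes; the function name is ours.

```python
def fits_new_flavor(used_memory_bytes: int, new_flavor_max_bytes: int,
                    threshold: float = 0.70) -> bool:
    """Return True if current usage leaves enough headroom in the new flavor.

    DCS requires memory usage below 70% of the new flavor's maximum
    memory before an instance type change, hence the default threshold.
    """
    return used_memory_bytes < threshold * new_flavor_max_bytes

# Example: 5 GiB in use, changing to an 8 GiB flavor (5/8 = 62.5% < 70%).
gib = 1024 ** 3
print(fits_new_flavor(5 * gib, 8 * gib))  # prints True
```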

For details about the commands supported by different types of instances, see Command Compatibility.
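Several notes above warn that an application which cannot reconnect on its own may need a restart after a change. A retry wrapper in the application avoids that. This is a generic, stdlib-only sketch: connect stands in for whatever call your Redis client uses to open a connection, and the flaky example merely simulates a brief interruption.

```python
import time

def connect_with_retry(connect, attempts: int = 5, base_delay: float = 0.5):
    """Call connect() until it succeeds, backing off exponentially.

    `connect` is any zero-argument callable that returns a connection
    or raises ConnectionError; after `attempts` failures the last
    error is re-raised so the caller can decide what to do next.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Simulated flaky endpoint: fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("connection refused")
    return "connected"

print(connect_with_retry(flaky, base_delay=0.01))  # prints connected
```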

Scaling

  • Scaling options
    Table 2 Scaling options supported by different instances

    Redis 4.0 and Redis 5.0:
      • Single-node: scaling up/down
      • Master/standby: scaling up/down and replica quantity change
      • Redis Cluster: scaling up/down, out/in, and replica quantity change
      • Proxy Cluster: scaling up/down and out/in
      • Read/write splitting: scaling up/down and replica quantity change

    Redis 6.0:
      • Single-node: scaling up/down
      • Master/standby: scaling up/down
      • Redis Cluster, Proxy Cluster, and read/write splitting: not supported

  • Impact of scaling
    Table 3 Impact of scaling

    Instance type: single-node, master/standby, and read/write splitting
    Scaling type: scaling up/down
    Impact:

    • During scaling up, a DCS Redis 4.0/5.0/6.0 instance will be disconnected for several seconds and remain read-only for about 1 minute. During scaling down, connections will not be interrupted.
    • For scaling up, only the memory of the instance is expanded. The CPU processing capability is not improved.
    • Single-node DCS instances do not support data persistence. Scaling may compromise data reliability. After scaling, check whether the data is complete and import data if required. If there is important data, use a migration tool to migrate the data to other instances for backup.
    • For master/standby and read/write splitting instances, backup records created before scale-down cannot be used after scale-down. If necessary, download the backup file in advance or back up the data again after scale-down.

    Instance type: Proxy Cluster and Redis Cluster
    Scaling type: scaling up/down and out/in
    Impact:

    • Scaling out by adding shards:
      • Scaling out does not interrupt connections but will occupy CPU resources, decreasing performance by up to 20%.
      • If the shard quantity increases, new Redis Server nodes are added, and data is automatically rebalanced to the new nodes, which temporarily increases access latency.
    • Scaling in by reducing shards:
      • If the shard quantity decreases, nodes will be deleted. Before scaling in a Redis Cluster instance, ensure that the deleted nodes are not directly referenced in your application, to prevent service access exceptions.
      • Nodes will be deleted, and connections will be interrupted. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after scaling.
    • Scaling up by shard size without changing the shard quantity: Currently unavailable.
    • Scaling down by reducing the shard size without changing the shard quantity has no impact.
    • To scale down an instance, ensure that the used memory of each node is less than 70% of the maximum memory per node of the new flavor.
    • The flavor change may involve data migration, which can increase latency. For a Redis Cluster instance, ensure that the client can handle MOVED and ASK redirections. Otherwise, requests will fail.
    • If the memory becomes full during scaling due to a large amount of data being written, scaling will fail.
    • Before scaling, check for big keys through Cache Analysis. Redis has a limit on key migration. If the instance has any single key greater than 512 MB, scaling will fail when big key migration between nodes times out. The bigger the key, the more likely the migration will fail.
    • Before scaling a Redis Cluster instance, ensure that automated cluster topology refresh is enabled. If it is disabled, you will need to restart the client after scaling. For details about how to enable automated refresh if you use Lettuce, see an example of using Lettuce to connect to a Redis Cluster instance.
    • Backup records created before scaling cannot be used. If necessary, download the backup file in advance or back up the data again after scaling.

    Instance type: master/standby, read/write splitting, and Redis Cluster
    Scaling type: scaling out/in (replica quantity change)
    Impact:

    • Before adding or removing replicas for a Redis Cluster instance, ensure that automated cluster topology refresh is enabled. If it is disabled, you will need to restart the client after scaling. For details about how to enable automated refresh if you use Lettuce, see an example of using Lettuce to connect to a Redis Cluster instance.
    • Deleting replicas interrupts connections. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after scaling. Adding replicas does not interrupt connections.
    • If the number of replicas is already the minimum supported by the instance, you can no longer delete replicas.
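Table 3 notes that during data migration a Redis Cluster client must handle MOVED and ASK responses. Mature clients (for example Lettuce, Jedis, or redis-py) do this automatically; if you maintain a minimal client, the first step is parsing the redirection error, sketched below with a hypothetical helper.

```python
def parse_redirect(error: str):
    """Parse a Redis Cluster redirection error.

    Returns (kind, slot, host, port) for errors of the form
    'MOVED 3999 10.0.0.5:6379' or 'ASK 3999 10.0.0.5:6379',
    or None if the error is not a redirection.
    """
    parts = error.split()
    if len(parts) != 3 or parts[0] not in ("MOVED", "ASK"):
        return None
    host, _, port = parts[2].rpartition(":")
    return parts[0], int(parts[1]), host, int(port)

print(parse_redirect("MOVED 3999 10.0.0.5:6379"))
# prints ('MOVED', 3999, '10.0.0.5', 6379)
```

On a MOVED response the client should retry the command against the returned node (and refresh its slot map); on ASK it should retry once against that node after sending ASKING.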