Updated on 2024-12-16 GMT+08:00

Modifying DCS Instance Specifications

On the DCS console, you can change DCS Redis or Memcached instance specifications, including the instance type, memory, and replica quantity.

  • Modify instance specifications during off-peak hours. If a modification fails during peak hours (for example, when memory or CPU usage exceeds 90% or write traffic surges), try again during off-peak hours.
  • If your DCS instances are too old to support specification modification, contact customer service to upgrade them.
  • Modifying instance specifications does not affect the connection address, password, data, security group, or whitelist configuration of the instance, and the instance does not need to be restarted.

Change of the Instance Type

Table 1 Instance type change options supported by different DCS instances

Redis 3.0: from single-node to master/standby

  Precautions: The instance cannot be connected for several seconds and remains read-only for about 1 minute.

Redis 3.0: from master/standby to Proxy Cluster

  Precautions:
  1. A master/standby DCS Redis 3.0 instance can be changed to the Proxy Cluster type only if all of its data is stored in DB0. If data is stored in multiple databases, or in a database other than DB0, the change is not supported (see the keyspace check sketch at the end of this section).
  2. The instance cannot be connected and remains read-only for 5 to 30 minutes.

Memcached: from single-node to master/standby

  Precautions: Services are interrupted for several seconds and remain read-only for about 1 minute.

Redis 4.0/5.0/6.0: between master/standby or read/write splitting and Proxy Cluster (either direction)

  Precautions:
  1. Before changing the instance type to Proxy Cluster, evaluate the impact on services. For details, see What Are the Constraints on Implementing Multi-DB on a Proxy Cluster Instance? and Command Restrictions.
  2. Memory usage must be less than 70% of the maximum memory of the new flavor.
  3. Some keys may be evicted if the current memory usage exceeds 90% of the total.
  4. After the change, create alarm rules again for the instance.
  5. For instances that are currently master/standby, ensure that the read-only IP address or domain name is not used by your application.
  6. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after the change (see the reconnection sketch after this table).
  7. Modify instance specifications during off-peak hours. The instance is temporarily interrupted and remains read-only for about 1 minute during the change.

Redis 4.0/5.0/6.0: from master/standby to read/write splitting

  NOTE: Currently, a read/write splitting instance cannot be directly changed back to a master/standby one.

  Precautions:
  1. The instance memory must be greater than or equal to 4 GB and remains the same after the change.
  2. Some keys may be evicted if the current memory usage exceeds 90% of the total.
  3. After the change, create alarm rules again for the instance.
  4. Ensure that read-only IP addresses or domain names of the master/standby instance are not directly referenced by your application.
  5. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after the change (see the reconnection sketch after this table).
  6. Services may temporarily stutter during the change. Perform the change during off-peak hours.
  7. This change is unavailable for master/standby instances with ACL users.
  8. This change is unavailable for master/standby DCS Redis 6.0 instances with SSL enabled.
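
Several precautions above note that your application must be able to reconnect and handle transient errors during a change. The following is a minimal sketch of such a retry wrapper for the Lettuce Java client; the helper name, attempt limit, and linear backoff are illustrative assumptions, not part of this document. Writes rejected during a read-only window surface as command errors rather than connection errors, so handle those separately if your workload writes during the change.

```java
import java.util.function.Supplier;

import io.lettuce.core.RedisCommandTimeoutException;
import io.lettuce.core.RedisConnectionException;

public final class RedisRetry {

    // Retries an operation that may fail transiently while the instance is
    // being modified (brief disconnection during the specification change).
    public static <T> T withRetry(Supplier<T> operation, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RedisConnectionException | RedisCommandTimeoutException e) {
                last = e;
                try {
                    Thread.sleep(1000L * attempt); // linear backoff; tune for your workload
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
        throw last;
    }
}
```

For example, RedisRetry.withRetry(() -> commands.get("key"), 5) keeps a read alive across a brief interruption instead of failing on the first dropped connection.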

Any instance type changes not listed in the preceding table are not supported. To modify specifications while changing the instance type in an unsupported direction, create a new instance, migrate data, and switch IP addresses. For details, see Online Migration Between Instances.

For details about the commands supported by different types of instances, see Command Compatibility.
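
The DB0-only restriction above (changing a master/standby Redis 3.0 instance to Proxy Cluster) can be verified from a client before requesting the change. Below is a minimal sketch using the Lettuce Java client to parse the INFO keyspace section; the endpoint and password are placeholders, not real values.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class KeyspaceCheck {
    public static void main(String[] args) {
        // Placeholder URI; substitute your instance address and password.
        RedisClient client = RedisClient.create("redis://<password>@<instance-address>:6379");
        try (StatefulRedisConnection<String, String> conn = client.connect()) {
            // INFO keyspace prints one "dbN:keys=...,expires=..." line per non-empty database.
            String keyspace = conn.sync().info("keyspace");
            for (String line : keyspace.split("\r?\n")) {
                if (line.startsWith("db") && !line.startsWith("db0:")) {
                    System.out.println("Data outside DB0 blocks the type change: " + line);
                }
            }
        } finally {
            client.shutdown();
        }
    }
}
```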

Scaling

  • Scaling options
    Table 2 Scaling options supported by different instances

    | Cache Engine | Single-Node | Master/Standby | Redis Cluster | Proxy Cluster | Read/Write Splitting |
    |---|---|---|---|---|---|
    | Redis 3.0 | Scaling up/down | Scaling up/down | - | Scaling out | - |
    | Redis 4.0 | Scaling up/down | Scaling up/down and replica quantity change | Scaling up/down, out/in, and replica quantity change | Scaling up/down, out/in | Scaling up/down and replica quantity change |
    | Redis 5.0 | Scaling up/down | Scaling up/down and replica quantity change | Scaling up/down, out/in, and replica quantity change | Scaling up/down, out/in | Scaling up/down and replica quantity change |
    | Redis 6.0 basic edition | Scaling up/down | Scaling up/down and replica quantity change | Scaling up/down, out/in, and replica quantity change | Scaling up/down, out/in | Scaling up/down and replica quantity change |
    | Redis 6.0 professional editions | - | Scaling up/down | - | - | - |
    | Memcached | Scaling up/down | Scaling up/down | - | - | - |

    • If the reserved memory of a DCS Redis 3.0 or Memcached instance is insufficient, the modification may fail when the memory is used up. For details, see Reserved Memory.
    • Change the replica quantity and capacity separately.
    • Only one replica can be deleted per operation.
  • Impact of scaling
    Table 3 Impact of scaling

    Single-node, master/standby, and read/write splitting instances: scaling up/down

    • During scaling up, a basic edition DCS Redis 4.0 or later instance is disconnected for several seconds and remains read-only for about 1 minute. During scaling down, connections are not interrupted.
    • A DCS Redis 3.0 instance is disconnected for several seconds and remains read-only for 5 to 30 minutes.
    • A DCS Redis professional edition instance is disconnected for several seconds and remains read-only for about 1 minute.
    • Scaling up expands only the memory of the instance. The CPU processing capability is not improved.
    • Single-node DCS instances do not support data persistence, so scaling may compromise data reliability. After scaling, check whether the data is complete and import data if required. If the data is important, use a migration tool to migrate it to another instance for backup first.
    • For master/standby and read/write splitting instances, backup records created before a scale-down cannot be used afterwards. If necessary, download the backup files in advance or back up the data again after the scale-down.

    Proxy Cluster and Redis Cluster instances: scaling up/down and out/in

    • Scaling out by adding shards:
      • Scaling out does not interrupt connections but occupies CPU resources, decreasing performance by up to 20%.
      • When the shard quantity increases, new Redis Server nodes are added and data is automatically rebalanced to them, which increases access latency.
    • Scaling in by reducing shards:
      • When the shard quantity decreases, nodes are deleted. Before scaling in a Redis Cluster instance, ensure that the nodes to be deleted are not directly referenced by your application, to prevent service access exceptions.
      • Deleting nodes interrupts connections. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after scaling.
    • Scaling up by increasing the size per shard:
      • If the VM hosting a node has insufficient memory for the increase, the node is migrated to another VM. Service connections may stutter and the instance may become read-only during the migration.
      • Increasing the node capacity when the VM memory is sufficient does not affect services.
      NOTE: Cluster DCS Redis 3.0 instances cannot be vertically scaled.
    • Scaling down by reducing the size per shard without changing the shard quantity has no impact.
    • To scale down an instance, ensure that the used memory of each node is less than 70% of the maximum memory per node of the new flavor.
    • The flavor change may involve data migration, during which latency can increase. For a Redis Cluster instance, ensure that the client can process MOVED and ASK redirections; otherwise, requests will fail.
    • If a large amount of data is written during scaling and the memory becomes full, scaling will fail.
    • Before scaling, check for big keys through Cache Analysis. Redis limits key migration: if the instance has any single key greater than 512 MB, scaling fails when big key migration between nodes times out. The bigger the key, the more likely the migration is to fail.
    • Before scaling a Redis Cluster instance, ensure that automated cluster topology refresh is enabled. If it is disabled, you will need to restart the client after scaling. To enable automated refresh with Lettuce, see the sketch after this table and the example of using Lettuce to connect to a Redis Cluster instance.
    • Backup records created before scaling cannot be used afterwards. If necessary, download the backup files in advance or back up the data again after scaling.

    Master/standby, read/write splitting, and Redis Cluster instances: scaling out/in (replica quantity change)

    • Before adding or removing replicas for a Redis Cluster instance, ensure that automated cluster topology refresh is enabled. If it is disabled, you will need to restart the client after scaling (see the Lettuce sketch after this table).
    • Deleting replicas interrupts connections; adding replicas does not. If your application cannot reconnect to Redis or handle exceptions, you may need to restart the application after scaling.
    • If the number of replicas is already the minimum supported by the instance, replicas can no longer be deleted.
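
    As noted above for Redis Cluster instances, the client should refresh the cluster topology automatically so that nodes added or removed by scaling are discovered without a restart. A minimal sketch of that configuration with the Lettuce client follows; the endpoint and the 30-second refresh period are illustrative assumptions.

```java
import java.time.Duration;

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class ClusterRefreshExample {
    public static void main(String[] args) {
        // Placeholder endpoint; substitute your Redis Cluster instance address.
        RedisClusterClient client = RedisClusterClient.create(
                RedisURI.create("<instance-address>", 6379));

        // Refresh the topology periodically and on redirections, so the client
        // discovers nodes added or removed during scaling without a restart.
        ClusterTopologyRefreshOptions refresh = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30)) // period is illustrative
                .enableAllAdaptiveRefreshTriggers()
                .build();
        client.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refresh)
                .build());

        try (StatefulRedisClusterConnection<String, String> conn = client.connect()) {
            System.out.println(conn.sync().ping());
        } finally {
            client.shutdown();
        }
    }
}
```

    The adaptive triggers react to MOVED and ASK redirections and to connection failures, which also covers the slot migration that happens while shards are added or removed.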

Changing an Instance

  1. Log in to the DCS console.
  2. In the upper left corner of the management console, select the region where your instance is located.
  3. In the navigation pane, choose Cache Manager.
  4. In the row containing the DCS instance, choose More > Modify Specifications in the Operation column.
  5. On the Modify Specifications page, select the desired specification.

    To expand the capacity of a single shard of a cluster instance, see Can I Expand a Single Shard of a Cluster Instance?

  6. Set Apply Change to Now or During maintenance.

    Select During maintenance if the modification will interrupt connections.

    Table 4 Scenarios where specification modification interrupts connections

    | Change | When Connections Are Interrupted |
    |---|---|
    | Scaling up a single-node or master/standby instance | Memory is increased from a size smaller than 8 GB to 8 GB or larger. |
    | Scaling down a Proxy Cluster or Redis Cluster instance | The number of shards is decreased. |
    | Changing the instance type | The instance type is changed between master/standby or read/write splitting and Proxy Cluster. |
    | Deleting replicas | Replicas are deleted from a master/standby, Redis Cluster, or read/write splitting instance. |

    • If the modification does not interrupt connections, it will be applied immediately even if you select During maintenance.
    • The modification cannot be withdrawn once submitted. To reschedule a modification, you can change the maintenance window. The maintenance window can be changed up to three times.
    • Modifications on DCS Redis 3.0 and Memcached instances can only be applied immediately.
    • If you apply the change during maintenance, the change starts at any time within the maintenance window, rather than at the start time of the window.
    • If a large amount of data needs to be migrated when you scale down a cluster instance, the operation may not be completed within the maintenance window.

  7. Click Next. Confirm the change details and view the risk check results.

    If any risk is found in the check, the instance may fail to be modified. For details, see Table 5.
    Table 5 Risk check items

    Non-standard configuration check

      Reason for check: The following items are checked against standard configurations:
      • Bandwidth of a single instance node
      • Memory of a single instance node
      • Replica quantity of Redis Cluster instances
      • Proxy quantity of Proxy Cluster instances
      • maxclients of Proxy Cluster instances (maximum allowed connections exceeded)
      NOTE: Currently, the non-standard configuration check is available only in some regions, such as CN North-Beijing4, CN East-Shanghai1, and CN East-Shanghai2.
      If your instance has non-standard configurations, the console displays a message indicating that they will be converted to standard configurations during the change. Only a non-standard bandwidth or proxy quantity configuration can be retained.

      Solution:
      • If your instance has no non-standard configurations, the check result is normal and no action is required.
      • If it does, decide whether to proceed with the change and whether to retain the non-standard bandwidth or proxy quantity configuration.

    Node status check

      Reason for check: Abnormal instance nodes cause modification failures.

      Solution: In this case, contact customer service.

    Dataset memory distribution check (Proxy Cluster and Redis Cluster instances only)

      Reason for check: Specification modification of a cluster instance involves data migration between nodes. If the instance has any key greater than 512 MB, the modification fails when big key migration between nodes times out. If the dataset memory is unevenly distributed among nodes and the difference is greater than 512 MB, the instance has a big key and the change may fail.

      Solution: Handle big keys before proceeding with the change (see the pre-check sketch at the end of this step).

    Memory usage check

      Reason for check: If the memory usage of a node is greater than 90%, keys may be evicted or the change may fail.

      Solution: If the memory usage is too high, optimize the memory: optimize big keys, scan for expired keys, or delete some keys.

    Network input traffic check (single-node, master/standby, and read/write splitting instances only)

      Reason for check: The change may fail if the network input traffic is too heavy and the write buffer overflows.

      Solution: Perform the change during off-peak hours.

    CPU usage check

      Reason for check: If the CPU usage of a node exceeds 90% within the last 5 minutes, the change may fail.

      Solution: Perform the change during off-peak hours. For details, see Troubleshooting High CPU Usage of a DCS Redis Instance.

    Resource capacity check (checked only when scaling up cluster instances)

      Reason for check: If the VM resource capacity is insufficient for the scale-up, nodes must be migrated during the change. Service connections may be intermittently interrupted or become read-only during the migration.

      Solution: If this check poses risks, ensure that your application can reconnect to Redis and handle exceptions; you may need to restart the application after the change.

    • If the check results are normal, no risks are found in the check.
    • If the check fails, the possible causes are as follows:
      • The master node of the instance fails to be connected. In this case, check the instance status.
      • The system is abnormal. In this case, click Check Again later.
    • Click Stop Check to stop the check. Click Check Again to restart the check.
    • If you want to proceed with the change despite risks found in the check or after clicking Stop Check, select I understand the risks.
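
    The memory usage and dataset memory distribution checks can also be pre-checked from a client before submitting the change. Below is a minimal sketch using the Lettuce Java client: it computes the memory usage ratio from INFO memory and scans for keys whose in-memory size exceeds the 512 MB migration limit via MEMORY USAGE (available on Redis 4.0 and later). The endpoint, scan batch size, and class name are illustrative assumptions; for large keyspaces, the console's Cache Analysis is the more practical option.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.ScanIterator;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class PreChangeCheck {
    // 512 MB, matching the big key migration limit described above.
    private static final long BIG_KEY_BYTES = 512L * 1024 * 1024;

    public static void main(String[] args) {
        // Placeholder URI; substitute your instance address and password.
        RedisClient client = RedisClient.create("redis://<password>@<instance-address>:6379");
        try (StatefulRedisConnection<String, String> conn = client.connect()) {
            RedisCommands<String, String> cmd = conn.sync();

            // Memory usage ratio: used_memory / maxmemory from INFO memory.
            long used = 0, max = 0;
            for (String line : cmd.info("memory").split("\r?\n")) {
                if (line.startsWith("used_memory:")) {
                    used = Long.parseLong(line.substring("used_memory:".length()).trim());
                }
                if (line.startsWith("maxmemory:")) {
                    max = Long.parseLong(line.substring("maxmemory:".length()).trim());
                }
            }
            if (max > 0) {
                System.out.printf("Memory usage: %.1f%%%n", 100.0 * used / max);
            }

            // SCAN the keyspace and flag keys larger than the threshold.
            ScanIterator<String> it = ScanIterator.scan(cmd, ScanArgs.Builder.limit(500));
            while (it.hasNext()) {
                String key = it.next();
                Long bytes = cmd.memoryUsage(key); // MEMORY USAGE, Redis 4.0+
                if (bytes != null && bytes > BIG_KEY_BYTES) {
                    System.out.println("Big key: " + key + " (" + bytes + " bytes)");
                }
            }
        } finally {
            client.shutdown();
        }
    }
}
```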

  8. After the risk check is complete, click Next. After the modification is submitted, you can go to the Background Tasks page to view the modification status.

    Click the task name on the Background Tasks page to view task details. After an instance is successfully modified, it changes to the Running state.

    Figure 1 Viewing background task details
    • If the specification modification of a single-node DCS instance fails, the instance is temporarily unavailable. The specification remains unchanged. Some management operations (such as parameter configuration and specification modification) are temporarily not supported. After the specification modification is completed in the backend, the instance changes to the new specification and becomes available for use again.
    • If the specification modification of a master/standby or cluster DCS instance fails, the instance still uses its original specifications. Some management operations (such as parameter configuration, backup, restoration, and specification modification) are temporarily not supported. Remember not to read or write more data than allowed by the original specifications; otherwise, data loss may occur.
    • After the specification modification is successful, the new specification of the instance takes effect.
    • Specification modification of a single-node, master/standby, or read/write splitting DCS instance takes 5 to 30 minutes to complete, while that of a cluster DCS instance takes a longer time.