Updated on 2024-11-06 GMT+08:00

RegionlessDB Cluster Overview

A RegionlessDB cluster consists of multiple GaussDB(for MySQL) instances in different regions around the world. Currently, a RegionlessDB cluster consists of one primary instance (in the primary region) and up to five standby instances (in standby regions). Data is synchronized between primary and standby instances, providing nearby access and regional DR capabilities.

Figure 1 RegionlessDB cluster principle

Scenarios

  • Remote multi-active deployment

    Data is synchronized among instances in a RegionlessDB cluster. For lower network latency and quicker resource access, you can select the instance nearest to your workloads.

  • Remote disaster recovery

    If there is a region-level fault on the primary instance, workloads can be switched to a standby instance for remote DR.
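The two scenarios above amount to a client-side endpoint selection policy: route reads to the nearest reachable instance, and fall over to another instance when a region fails. A minimal sketch of that policy, with hypothetical endpoint names and a caller-supplied latency probe (real deployments would use the connection addresses shown on the console):

```python
# Conceptual sketch: choose the nearest reachable instance in a
# RegionlessDB cluster. Endpoint names and latencies are hypothetical.

def pick_endpoint(endpoints, probe):
    """Return the reachable endpoint with the lowest probed latency.

    endpoints: list of endpoint names (hypothetical addresses)
    probe: callable(endpoint) -> latency in ms, or None if unreachable
    """
    best, best_latency = None, float("inf")
    for ep in endpoints:
        latency = probe(ep)
        if latency is not None and latency < best_latency:
            best, best_latency = ep, latency
    if best is None:
        raise RuntimeError("no instance reachable; trigger DR switchover")
    return best

# Simulated probe: the standby in the client's own region is nearest,
# and one region is currently unreachable.
latencies = {"primary.region-a": 80.0,
             "standby.region-b": 12.0,
             "standby.region-c": None}
nearest = pick_endpoint(list(latencies), latencies.get)
print(nearest)  # standby.region-b
```

If every probe fails (a region-level fault on all known endpoints), the routine raises instead of guessing, which is the point where the remote DR switchover described above takes over.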

Architecture

Figure 2 Architecture
  • Cross-region deployment is supported. Redo logs generated by the primary instance are synchronized to a standby instance and written to DFV storage, where they are replayed to produce the pages required for database access. For details, see Figure 2. (Data is synchronized from the replication node Source of the primary instance to the replication node Target of the standby instance.)
  • In the primary instance, the read replica obtains required redo logs and pages from DFV storage through the primary node. In the standby instance, the read replica obtains required redo logs and pages from DFV storage through the replication node Target.
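As a rough mental model of the replication path above, the Target node receives redo records in log order and replays them onto pages so that read replicas see current data. The sketch below is a deliberate simplification, not TaurusDB internals: a redo record is modeled as an (LSN, page ID, value) triple applied to a page store.

```python
# Illustrative model of redo-log replay on a standby instance
# (simplified; not actual TaurusDB internals).

def replay(pages, redo_log, start_lsn=0):
    """Apply redo records with LSN >= start_lsn to the page store.

    pages: dict mapping page_id -> latest value
    redo_log: list of (lsn, page_id, value) triples, ordered by LSN
    Returns the LSN up to which the standby is now caught up.
    """
    applied_lsn = start_lsn
    for lsn, page_id, value in redo_log:
        if lsn < start_lsn:
            continue  # already applied on a previous pass
        pages[page_id] = value
        applied_lsn = lsn + 1
    return applied_lsn

pages = {}
log = [(0, "p1", "a"), (1, "p2", "b"), (2, "p1", "c")]
caught_up_to = replay(pages, log)
print(pages, caught_up_to)  # {'p1': 'c', 'p2': 'b'} 3
```

The returned LSN is what a replication node would persist so that, after a restart, replay resumes from the last applied position instead of the beginning of the log.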

Advantages

  • Global deployment and nearby data access

    Instances in a RegionlessDB cluster are located in different regions around the world. Data generated by the primary instance can be read directly from the nearest standby instance.

  • Low latency of cross-region replication

    Redo logs are read directly and continuously from DFV storage for asynchronous replication. Thanks to high-throughput parallel data synchronization, the replication latency is typically less than 1 second.

  • No downtime for the primary node during data synchronization

    The replication node of the primary instance reads data in parallel from different nodes in DFV storage for synchronization. The primary node therefore does not synchronize data to the standby instances directly; it only sends the storage location information of the redo logs to the replication node of the primary instance. In this way, workloads on the primary node are not affected.

  • Large number of read replicas

    There are up to five standby instances in a cluster, and each standby instance supports up to 15 read replicas.

    When you are creating a DB instance, a maximum of 10 read replicas can be created at a time.

  • Region-level disaster recovery

    If there is a region-level fault on the primary instance, workloads can be quickly switched to a standby instance for remote DR, achieving an RPO in minutes and an RTO in seconds.

    • If you need to use quick DR, contact customer service.
    • Recovery Point Objective (RPO): the maximum data loss amount tolerated by the system.
    • Recovery Time Objective (RTO): the maximum service interruption duration tolerated by the system, that is, how long it may take to restore service after a disaster causes a system or service function failure.
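Given the RPO/RTO definitions above, both values follow from three timestamps: the last transaction durably replicated to the standby, the moment the disaster strikes, and the moment service resumes on the standby. A minimal worked example with made-up timestamps:

```python
from datetime import datetime

# Hypothetical timeline for a region-level failure (made-up timestamps).
last_replicated = datetime(2024, 11, 6, 10, 0, 0)   # newest data on the standby
disaster        = datetime(2024, 11, 6, 10, 1, 30)  # primary region fails
service_resumed = datetime(2024, 11, 6, 10, 1, 45)  # standby takes over

rpo = disaster - last_replicated   # data written but never replicated
rto = service_resumed - disaster   # how long workloads were interrupted

print(f"RPO = {rpo.total_seconds():.0f} s")  # RPO = 90 s (minutes-level)
print(f"RTO = {rto.total_seconds():.0f} s")  # RTO = 15 s (seconds-level)
```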

Constraints

  • Only pay-per-use instances can be created.
  • The kernel version must be 2.0.46.231000 or later, and the primary instance must be a new instance.
  • The instances in a RegionlessDB cluster cannot use 192.168.0.0/16 as their subnet CIDR block.
  • The subnet CIDR blocks of the primary and standby instances in different regions must be different.
  • When a standby instance is created, data needs to be synchronized from the primary instance. The time required depends on how much data there is.
  • The primary instance in a RegionlessDB cluster cannot be restored to the original instance, and other instances cannot be restored to any instance in a RegionlessDB cluster.
  • If you create proxy instances or HTAP instances for a GaussDB(for MySQL) instance, the GaussDB(for MySQL) instance cannot be used as an instance in a RegionlessDB cluster. Delete the proxy instances or HTAP instances first.
  • The primary instance does not support the following operations:
    • Changing a database port
    • Changing a private IP address
    • Creating an HTAP instance
    • Creating a proxy instance
  • The standby instance does not support the following operations:
    • Resetting a password
    • Creating and restoring a backup
    • Creating an account
    • Authorizing an account
    • Creating a proxy instance
    • Creating an HTAP instance
    • Promoting a read replica to the primary node
    • Changing a database port
    • Changing a private IP address
    • Modifying auto scaling policies
  • Data across regions is synchronized through a network. The VPN bandwidth must be greater than the write bandwidth of the primary instance in a RegionlessDB cluster.
  • In large-scale DDL scenarios, the replication latency may fluctuate for more than 1 second.
  • RegionlessDB clusters do not support OpenAPIs.
  • A RegionlessDB cluster consists of one primary instance (in the primary region) and up to five standby instances (in standby regions). The primary instance processes read and write requests and the standby instances process read-only requests. Table 1 lists the maximum specifications supported by a RegionlessDB cluster.
    Table 1 Specifications

    | Description | Primary Instance | Standby Instance |
    | --- | --- | --- |
    | Max. Instances | 1 | 5 |
    | Max. Read/Write Nodes per Instance | 1 | 0 |
    | Max. Read-only Nodes per Instance | 15 | 15 |

    When you are creating a DB instance, a maximum of 10 read replicas can be created at a time.
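The two subnet constraints above (no subnet inside 192.168.0.0/16, and different CIDR blocks for instances in different regions) can be checked before creating instances. A small sketch using Python's standard ipaddress module; the CIDR values are placeholders, and the check treats any overlap between regions as a conflict, which is slightly stricter than "must be different":

```python
import ipaddress

# Reserved range that RegionlessDB instance subnets must not use.
FORBIDDEN = ipaddress.ip_network("192.168.0.0/16")

def validate_subnets(cidrs):
    """Check RegionlessDB subnet rules for a list of per-region CIDRs.

    Raises ValueError if any subnet overlaps 192.168.0.0/16 or if
    the CIDR blocks of two regions overlap each other.
    """
    nets = [ipaddress.ip_network(c) for c in cidrs]
    for net in nets:
        if net.overlaps(FORBIDDEN):
            raise ValueError(f"{net} conflicts with reserved {FORBIDDEN}")
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                raise ValueError(f"{a} overlaps {b}; regions must differ")

# Placeholder CIDRs for a primary and two standby regions.
validate_subnets(["10.0.0.0/24", "10.1.0.0/24", "172.16.0.0/24"])
print("subnet plan OK")
```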