
Introduction to RegionlessDB Clusters

What Is a RegionlessDB Cluster?

A RegionlessDB cluster consists of multiple TaurusDB instances deployed in different regions around the world. Currently, a cluster contains one primary instance (in the primary region) and up to five standby instances (in standby regions). The primary instance processes both read and write requests, while the standby instances process only read requests. Data is synchronized from the primary instance to the standby instances, providing nearby access and region-level disaster recovery (DR).

Figure 1 RegionlessDB cluster principle
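Because only the primary instance accepts writes, applications typically split traffic: writes go to the primary instance and reads go to the nearest standby instance. The following is a minimal routing sketch using Python and PyMySQL; the endpoint hostnames, credentials, database, and table are hypothetical placeholders, not actual TaurusDB values.

```python
import pymysql

# Hypothetical endpoints (placeholders): the primary instance and the
# standby instance nearest to this application.
PRIMARY_HOST = "taurusdb-primary.example.com"
NEAREST_STANDBY_HOST = "taurusdb-standby.example.com"

def connect(host):
    # Credentials and database name are placeholders; use your own.
    return pymysql.connect(host=host, port=3306, user="app_user",
                           password="***", database="appdb")

# Writes must go to the primary instance; standby instances are read-only.
conn = connect(PRIMARY_HOST)
try:
    with conn.cursor() as cur:
        cur.execute("INSERT INTO orders (item) VALUES (%s)", ("book",))
    conn.commit()
finally:
    conn.close()

# Reads can be served by the nearest standby instance for lower latency.
conn = connect(NEAREST_STANDBY_HOST)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT id, item FROM orders ORDER BY id DESC LIMIT 10")
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```

Any MySQL-compatible client works the same way, since TaurusDB is MySQL-compatible; the key design point is that the application, not the cluster, decides which endpoint each statement is sent to.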

RegionlessDB Architecture

As shown in Figure 2, a RegionlessDB cluster can be deployed across regions. The data flow is as follows:

  1. Redo logs generated by the primary instance are written to the DFV storage pool, which persists them and replays them to construct data pages.
  2. The replication node (source node) of the primary instance reads the redo logs from the DFV storage pool and sends them to the replication node (target node) of the standby instance. Replication nodes are invisible to users and exist only to replicate data between the primary and standby instances.
  3. The replication node (target node) of the standby instance receives the redo logs, writes them into its regional DFV storage pool, and replays them to generate the data pages required for database access. A conceptual sketch of this log-shipping flow follows Figure 2.
Figure 2 RegionlessDB architecture
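For intuition only, the following Python sketch models the log-shipping flow above: a source node forwards redo logs from the primary region's storage to the standby region, where they are persisted and replayed. All names are invented for illustration; this is a conceptual sketch of the data flow, not TaurusDB's actual implementation.

```python
import queue

# Conceptual stand-ins for the regional DFV storage pools (not real APIs).
primary_storage = ["redo-1", "redo-2", "redo-3"]  # redo logs written by the primary
standby_storage = []                              # standby region's storage pool

# The "network" between the source and target replication nodes.
channel = queue.Queue()

def source_replication_node():
    # Step 2: read redo logs from the primary's storage pool and ship them.
    for redo_log in primary_storage:
        channel.put(redo_log)
    channel.put(None)  # end-of-stream marker used only by this sketch

def target_replication_node():
    # Step 3: receive redo logs, persist them, then "replay" them into pages.
    while True:
        redo_log = channel.get()
        if redo_log is None:
            break
        standby_storage.append(redo_log)               # persist in standby region
        print(f"replayed {redo_log} into data pages")  # replay step (conceptual)

source_replication_node()
target_replication_node()
```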

Application Scenarios

Table 1 RegionlessDB application scenarios

  • Remote multi-active deployment

    Principle: Read requests are sent to a standby instance in the nearest region, and write requests are automatically forwarded from the nearest region to the primary instance. After data is written to the primary instance, it is synchronized to all standby instances, reducing cross-region network latency.

    Highlight: Data is synchronized among the instances in a RegionlessDB cluster. For lower network latency and faster resource access, select the instance nearest to your workloads; a latency-probing sketch follows this table.

  • Remote DR

    Principle: If the primary AZ of the primary instance is faulty, workloads are preferentially switched to the standby AZ. If both the primary and standby AZs of the primary instance are faulty, workloads are switched to a standby instance.

    Highlight: If there is a region-level fault in the primary instance's region, workloads can be switched to a standby instance for remote DR.
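To pick the nearest instance, an application can measure connection latency to each regional endpoint and use the fastest one. The sketch below does this with plain TCP connections from the Python standard library; the endpoint hostnames are hypothetical placeholders.

```python
import socket
import time

# Hypothetical regional endpoints of one RegionlessDB cluster.
ENDPOINTS = [
    "taurusdb-eu.example.com",
    "taurusdb-ap.example.com",
    "taurusdb-la.example.com",
]

def probe(host, port=3306, timeout=2.0):
    """Return the TCP connect time to host:port in seconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

# Choose the endpoint with the lowest measured connect latency.
latencies = {host: probe(host) for host in ENDPOINTS}
reachable = {h: t for h, t in latencies.items() if t is not None}
nearest = min(reachable, key=reachable.get) if reachable else None
print("nearest endpoint:", nearest)
```

In practice this probe would run at application startup or periodically; remember that only reads may be sent to the chosen endpoint if it is a standby instance.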

RegionlessDB Advantages

  • Global deployment and nearby data access

    Instances in a RegionlessDB cluster are deployed in different regions around the world. Data written to the primary instance can be read directly from the nearest standby instance.

  • Low-latency cross-region replication

    Redo logs are read directly and continuously from the DFV storage pool for asynchronous replication. Thanks to high-throughput parallel data synchronization, the replication latency is typically less than 1 second.

  • No impact on the primary node during data synchronization

    The replication node of the primary instance reads data in parallel from different nodes in the DFV storage pool. The primary node therefore does not synchronize data to the standby instances itself; it only reports the storage locations of new redo logs to the replication node. Workloads on the primary node are not affected.

  • Large number of read replicas

    A cluster can contain up to five standby instances, and each standby instance supports up to 15 read replicas. All nodes of the standby instances are read-only. Table 2 lists the maximum numbers of instances and nodes supported by a RegionlessDB cluster.

    Table 2 Instance and node quantities

      • Max. instances: primary 1, standby 5
      • Max. read/write nodes per instance: primary 1, standby 0
      • Max. read-only nodes per instance: primary 15, standby 15

  • Region-level disaster recovery

    If there is a region-level fault in the primary instance's region, workloads can be quickly switched to a standby instance for remote DR, with an RPO measured in minutes and an RTO measured in seconds. A reconnection sketch follows this list.

    To use region-level disaster recovery, submit a service ticket.
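During a regional failover, connections to the failed primary endpoint break and applications must reconnect, typically to a promoted standby endpoint. The following is a minimal retry-loop sketch using PyMySQL; the endpoint list, credentials, and backoff values are hypothetical and are not part of any TaurusDB API.

```python
import time
import pymysql

# Hypothetical endpoints in failover-preference order: the primary first,
# then standby instances that could take over during a regional outage.
CANDIDATE_ENDPOINTS = [
    "taurusdb-primary.example.com",
    "taurusdb-standby-1.example.com",
    "taurusdb-standby-2.example.com",
]

def connect_with_failover(retries=3, backoff=2.0):
    """Try each candidate endpoint in order, with simple retries and backoff."""
    for attempt in range(retries):
        for host in CANDIDATE_ENDPOINTS:
            try:
                return pymysql.connect(host=host, port=3306, user="app_user",
                                       password="***", database="appdb",
                                       connect_timeout=3)
            except pymysql.MySQLError:
                continue  # endpoint unreachable or not yet writable
        time.sleep(backoff * (attempt + 1))  # wait before the next round
    raise ConnectionError("no RegionlessDB endpoint reachable")

conn = connect_with_failover()
print("connected to:", conn.get_host_info())
conn.close()
```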

Constraints

Table 3 RegionlessDB constraints

  • Before use: Review the data security compliance requirements of each involved region and evaluate whether cross-region data synchronization complies with related laws and regulations.

  • Phase: RegionlessDB clusters are in the open beta testing (OBT) phase. To use them, submit a service ticket.

  • Version: The kernel version must be 2.0.46.231000 or later, and the primary instance must be a newly created instance. For details about how to check the kernel version, see How Can I Check the Version of a TaurusDB Instance?

  • Billing: Only pay-per-use and yearly/monthly instances can be created.

  • Network:
      • The instances in a RegionlessDB cluster cannot use 192.168.0.0/16 as their subnet CIDR block.
      • The subnet CIDR blocks of the primary and standby instances in different regions must be different; a CIDR-check sketch follows this table.
      • Data is synchronized across regions over the network. The Virtual Private Network (VPN) bandwidth must be greater than the write bandwidth of the primary instance.
      • To enable communication between regions, create a VPN in advance. For details, see Configuring Enterprise Edition S2C VPN to Connect an On-premises Data Center to a VPC.
      • The security groups of the primary and standby instances must allow the IP address and port of the peer end. For details, see Configuring Security Group Rules.

  • Creation:
      • If proxy instances or HTAP instances have been created for a TaurusDB instance, that instance cannot be used in a RegionlessDB cluster. To use it, delete the proxy instances or HTAP instances first.
      • When a standby instance is created, data must be synchronized from the primary instance. The time required depends on the amount of data.

  • Backup and restoration: The primary instance in a RegionlessDB cluster cannot be restored to the original instance, and other instances cannot be restored to any instance in a RegionlessDB cluster.

  • APIs: RegionlessDB clusters cannot be managed through APIs.

  • Other: In large-scale DDL scenarios, the replication latency may exceed 1 second.
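As a quick sanity check of the subnet rules above, the following sketch uses Python's standard ipaddress module to verify that no instance subnet falls within 192.168.0.0/16 and that the primary and standby subnets do not collide. The CIDR values are hypothetical examples, and the non-overlap check is a conservative reading of the "must be different" rule.

```python
import ipaddress

FORBIDDEN = ipaddress.ip_network("192.168.0.0/16")

# Hypothetical subnet CIDR blocks of the instances in one cluster.
subnets = {
    "primary (region A)":   ipaddress.ip_network("10.0.0.0/24"),
    "standby-1 (region B)": ipaddress.ip_network("10.0.1.0/24"),
    "standby-2 (region C)": ipaddress.ip_network("172.16.0.0/24"),
}

# Rule 1: no subnet may fall within 192.168.0.0/16.
for name, net in subnets.items():
    if net.overlaps(FORBIDDEN):
        raise ValueError(f"{name} uses a forbidden CIDR block: {net}")

# Rule 2: the subnets of instances in different regions must differ;
# this sketch conservatively rejects any overlap, not just equality.
names = list(subnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if subnets[a].overlaps(subnets[b]):
            raise ValueError(f"{a} and {b} have overlapping CIDR blocks")

print("subnet CIDR checks passed")
```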

Table 4 Operations not supported by the primary and standby instances

The primary instance does not support the following operations:

  • Changing a database port
  • Changing a private IP address
  • Creating an HTAP instance
  • Creating a proxy instance

A standby instance does not support the following operations:
  • Resetting a password
  • Creating and restoring a backup
  • Creating an account
  • Authorizing an account
  • Creating a proxy instance
  • Creating an HTAP instance
  • Promoting a read replica to the primary node
  • Changing a database port
  • Changing a private IP address
  • Configuring an auto scaling policy