Updated on 2025-08-26 GMT+08:00

Cross-Region Replication Across Accounts

Scenarios

Cross-region replication across accounts replicates data from a source bucket in one region of one account to a destination bucket in a different region of another account.

Replication scope: files, folders, object lists, objects with a specified prefix, or specified URL lists

Replicable object data: object name, metadata (object content, size, last modification time, creator, version number, and user-defined metadata), ACL (supported by obsutil), and storage class

Figure 1 Cross-region replication across accounts

Cross-region replication across accounts applies to the following scenarios:

  • Regulatory compliance

    OBS stores data across AZs that are relatively far apart from each other, but regulatory compliance may require even greater distances. Cross-region replication enables you to meet such regulatory requirements.

  • Low latency

    The same OBS resources may need to be accessed from different locations. To minimize the access latency, you can use cross-region replication to create object copies in the nearest region.

  • Data replication

    You want to migrate data stored in OBS to the data center in another region.

  • Data backup and disaster recovery

    You want to back up all data written to OBS to a data center in another region, so that the data remains available even if the original copy is damaged.

  • Ease of maintenance

    You have compute clusters used to analyze the same group of objects in two different OBS regions and may need to maintain object copies in these two regions.

OBS helps you replicate your service data stored in OBS to a specified region, but Huawei Cloud has no access to your data. You need to ensure the legal compliance of your use of OBS on your own. If your replication involves cross-border transfer, ensure that your use complies with relevant laws and regulations.

Constraints

The following are constraints on migrating data using Object Storage Migration Service (OMS):

Table 1 OMS constraints

Item

Description

Objects with multiple versions

By default, OMS migrates only the latest version of objects in source buckets.

Storage class of destination buckets

The storage class of destination buckets can be Standard, Infrequent Access, Archive, or Deep Archive.

Migration network

Migrations over the Internet or intranets are supported. Migrations over private lines are not supported.

Metadata migration

Only Chinese characters, English characters, digits, and hyphens (-) can be migrated; other characters cannot.

  • Chinese characters are URL-encoded during the migration.
  • English characters, digits, and hyphens (-) are migrated directly, without encoding.

Chinese punctuation marks cannot be URL-encoded during the migration. If metadata contains Chinese punctuation marks, that metadata and the corresponding object will fail to be migrated.
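The encoding rules above can be illustrated with Python's standard library. This is a sketch, not the OMS implementation: `is_migratable` and `encode_metadata` are hypothetical helper names, and the CJK range check is an approximation of "Chinese characters".

```python
from urllib.parse import quote

def is_migratable(value: str) -> bool:
    """Return True if every character is a Chinese character (CJK Unified
    Ideographs), an ASCII letter, a digit, or a hyphen."""
    return all(
        c == "-" or (c.isascii() and c.isalnum()) or "\u4e00" <= c <= "\u9fff"
        for c in value
    )

def encode_metadata(value: str) -> str:
    """Percent-encode non-ASCII (Chinese) characters; ASCII letters,
    digits, and hyphens pass through unchanged."""
    if not is_migratable(value):
        raise ValueError(f"metadata contains unsupported characters: {value!r}")
    return quote(value, safe="-")

print(encode_metadata("report-2024"))  # ASCII passes through: report-2024
print(encode_metadata("文件"))          # Chinese is URL-encoded: %E6%96%87%E4%BB%B6
# encode_metadata("，") would raise: the Chinese comma is a punctuation
# mark that cannot be URL-encoded by OMS.
```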

Migration scope

A single migration task or migration task group can only migrate data of one bucket. If data of multiple buckets needs to be migrated, you need to create multiple tasks or task groups.

Symbolic links

Symbolic links cannot be migrated. Each symbolic link is recorded as a failed object and causes the migration task to be marked as failed, though other files are still migrated normally. If the source contains symbolic links, specify the actual file paths instead.

Migration of object ACLs

OMS cannot migrate object ACLs.

Migration speed

Generally, OMS can migrate 10 TB to 20 TB of data per day. For higher migration efficiency, you are advised to use storage migration workflows on MgC. MgC allows you to migrate data using dedicated, scalable migration clusters and up to 20 Gbit/s of bandwidth.

However, the actual speed depends on the number and size of source objects, the bandwidth, and the transmission distance over the Internet between the source and destination buckets. You are advised to create a migration task to test the migration speed. The maximum migration speed is five times the average speed of a single task, because up to five tasks can run concurrently in a region by default. If you need more concurrent tasks, create a storage migration workflow on MgC.
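As a back-of-the-envelope check on the figures above (assuming a sustained rate inside the documented 10 TB to 20 TB per day range; real throughput varies with object count, size, bandwidth, and distance):

```python
def migration_days(total_tb: float, tb_per_day: float = 10.0) -> float:
    """Estimated duration at a sustained daily rate. The default of
    10 TB/day is the low end of the documented range; actual speed varies."""
    return total_tb / tb_per_day

print(migration_days(50))        # 50 TB at 10 TB/day -> 5.0 days
print(migration_days(50, 20.0))  # 50 TB at 20 TB/day -> 2.5 days
```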

Archived data

You need to restore archived data before the migration. Note that when there is archived data to be migrated, you need to:

  • Create migration tasks after the restoration is complete.
  • Configure a validity period for restored data based on the total amount of data to be migrated. This helps prevent migration failures because restored data becomes archived again during the migration.
  • Pay your source cloud vendor for restoring archived data. To learn about the pricing details, contact your source cloud vendor.

Migration tasks

A maximum of five concurrent migration tasks are allowed for your account per region.

NOTE:

If your destination regions are CN North-Beijing1 and CN South-Guangzhou, you can run up to 10 migration tasks concurrently.

A maximum of 1,000,000 migration tasks can be created in your account per region within a 24-hour period.

Migration task groups

A maximum of five concurrent migration task groups are allowed for your account per region.

NOTE:

If your destination regions are CN North-Beijing1 and CN South-Guangzhou, you can run up to 10 migration task groups concurrently.

Synchronization tasks

Synchronization tasks share quotas with migration tasks and migration task groups, but enjoy a higher priority.

A maximum of five concurrent synchronization tasks are allowed for your account per region.

Object list files

  • An object list file cannot exceed 1,024 MB.
  • An object list file must be a .txt file, and its metadata ContentType must be text/plain.
  • An object list file must be in UTF-8 without BOM.
  • Each line in an object list file can contain only one object name, and the object name must be URL-encoded.
  • Spaces are not allowed in each line in an object list file. Spaces may cause migration failures because they may be mistakenly identified as object names.
  • Each line in an object list file cannot be longer than 65,535 characters, or the migration will fail.
  • The ContentEncoding metadata of an object list file must be left empty, or the migration will fail.
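Several of these constraints can be pre-checked locally before uploading the list file. The sketch below is a hypothetical validator, not part of OMS; it covers size, BOM, UTF-8 validity, spaces, and line length, while the .txt extension, ContentType, ContentEncoding, and the URL encoding of each name are validated by OMS itself.

```python
MAX_FILE_BYTES = 1024 * 1024 * 1024   # 1,024 MB
MAX_LINE_CHARS = 65535

def check_object_list(path: str) -> list[str]:
    """Return a list of constraint violations found in the file (empty if OK)."""
    errors = []
    with open(path, "rb") as f:
        raw = f.read()
    if len(raw) > MAX_FILE_BYTES:
        errors.append("file exceeds 1,024 MB")
    if raw.startswith(b"\xef\xbb\xbf"):
        errors.append("file must be UTF-8 without BOM")
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return errors + ["file is not valid UTF-8"]
    for i, line in enumerate(text.splitlines(), start=1):
        if " " in line:
            errors.append(f"line {i}: spaces are not allowed")
        if len(line) > MAX_LINE_CHARS:
            errors.append(f"line {i}: longer than 65,535 characters")
    return errors
```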

URL list files

  • A URL list file cannot exceed 1,024 MB.
  • A URL list file must be a .txt file, and its metadata ContentType must be text/plain.
  • A URL list file must be in UTF-8 without BOM.
  • Each line in a URL list file can contain only one URL and one destination object name.
  • Each line in a URL list file cannot be longer than 65,535 characters, or the migration will fail.
  • The ContentEncoding metadata of a URL list file must be left empty, or the migration will fail.
  • Spaces are not allowed in each line in a URL list file. Spaces may cause migration failures because they may be mistakenly identified as object names.
  • In a URL list file, you must use a tab character (\t) to separate the URL and destination object name in each line. The format is [URL][Tab character][Destination object name]. Chinese and special characters in the source and destination object names must be URL-encoded.
    For example:
    http://xxx.xxx.xxx.xxx.com/doc/%e6%96%87%e4%bb%b61.txt doc/%e6%96%87%e4%bb%b61.txt
    http://xxx.xxx.xxx.xxx.com/doc/thefile2.txt doc/thefile2.txt
    http://xxx.xxx.xxx.xxx.com/the%20file.txt the%20file.txt
    http://xxx.xxx.xxx.xxx.com/the%20file2.txt the+file2.txt
    http://xxx.xxx.xxx.xxx.com/doc/thefile.txt doc/thefile.txt

    In the preceding example, after the files represented by the URLs are copied to the destination bucket, the objects are named doc/文件1.txt, doc/thefile2.txt, the file.txt, the file2.txt, and doc/thefile.txt.

    URL encoding must be applied only to the path that follows the domain name in each line. Do not encode the protocol header, the domain name, or the slashes before or after the domain name. Otherwise, the format verification will fail.

    In each line, use a tab character (Tab key on the keyboard) to separate the URL and the destination object name. Do not use spaces.

  • URLs in a list file must be accessible using both the HEAD and GET methods.
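A line in the format described above can be produced programmatically. This is a minimal sketch under stated assumptions: example.com and the file names are placeholders, only the path after the domain is percent-encoded, and the URL and destination object name are joined with a tab.

```python
from urllib.parse import quote

def url_list_line(scheme_and_host: str, path: str, dest_name: str) -> str:
    """Build one URL list file line: [URL][Tab][Destination object name].
    Percent-encodes each path segment but leaves the protocol header,
    domain name, and slashes untouched."""
    encoded_path = "/".join(quote(seg) for seg in path.split("/"))
    encoded_dest = "/".join(quote(seg) for seg in dest_name.split("/"))
    return f"{scheme_and_host}/{encoded_path}\t{encoded_dest}"

line = url_list_line("http://example.com", "doc/文件1.txt", "doc/文件1.txt")
print(line)  # http://example.com/doc/%E6%96%87%E4%BB%B61.txt<Tab>doc/%E6%96%87%E4%BB%B61.txt
```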

Failed object list files

A maximum of 100,000 failed objects can be recorded in a list file.

NOTE:

If more than 100,000 objects fail to be migrated in a migration task, you are advised to rectify the fault based on the existing failed object list and perform the migration again.

Billing for Cross-Region Replication Across Accounts

  • When you use the OMS console or APIs to migrate data, OBS APIs of the source and destination ends are invoked to upload and download data. You will be billed for the API requests and data download traffic. For details, see OMS Billing. In addition, you will be billed for storing the objects in the destination bucket. For details, see Storage Costs.
  • When you use obsutil to replicate data across regions, you will be billed for requests, traffic, and storage. For details, see Table 2.
    Table 2 Billing for cross-region replication

    Action

    Billing Item

    Description

    Replicate data across regions

    Requests

    You are billed for the number of successfully replicated objects. Successfully replicating one object creates a copy request. For details, see Copying Objects.

    To learn about the request billing, see Requests.

    Data transfer

    You are billed for the amount of data transferred from one region to another.

    If objects are encrypted using server-side encryption, the cost of their cross-region replication traffic is calculated based on the length of the plaintext for SSE-KMS and SSE-OBS.

    Storage space

    Storage space occupied by objects stored in the destination bucket.

    If you have specified another storage class for object copies in the destination bucket, these copies are billed based on the new storage class.

    If objects are encrypted using server-side encryption, their storage cost is calculated based on the length of the ciphertext.

    Synchronize existing objects

    Requests

    You are billed for the number of existing objects that are successfully replicated to the destination bucket.

    With synchronization of existing objects enabled, OBS also replicates to the destination bucket the objects that already existed in the source bucket before the cross-region replication rule was created.

    Data transfer

    You are billed for the traffic generated when OBS synchronizes objects across regions.

    If historical objects are encrypted using server-side encryption, the cost of their cross-region replication traffic is calculated based on the length of the plaintext for SSE-KMS and SSE-OBS.

    Storage space

    Storage space occupied by objects stored in the destination bucket.

    If you have specified another storage class for object copies in the destination bucket, these copies are billed based on the new storage class.

    If historical objects are encrypted using server-side encryption, their storage cost is calculated based on the length of the ciphertext.
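The billing items in the table above combine additively. The sketch below is illustrative only: the unit prices are hypothetical placeholders, not Huawei Cloud's actual rates, which are published on the OMS Billing, Requests, and Storage Costs pages.

```python
def replication_cost(num_objects: int, gb_transferred: float, gb_stored: float,
                     price_per_10k_requests: float = 1.0,   # placeholder rate
                     price_per_gb_traffic: float = 0.5,     # placeholder rate
                     price_per_gb_month: float = 0.02) -> float:
    """Sum the three billing items: copy requests (one per replicated
    object), cross-region traffic, and one month of destination storage."""
    requests = num_objects / 10_000 * price_per_10k_requests
    traffic = gb_transferred * price_per_gb_traffic
    storage = gb_stored * price_per_gb_month
    return requests + traffic + storage

# 10,000 objects, 100 GB transferred and stored, placeholder prices:
print(replication_cost(10_000, 100, 100))  # -> 53.0
```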

Prerequisites

  • A source bucket has been created in a region of an account.
  • A destination bucket has been created in another region of another account. To create a bucket, see Creating a Bucket.
  • When obsutil is used to replicate objects across accounts and regions, the version of the source bucket is 3.0 or later, and cross-region replication is available in the region of the source bucket. For details about the support for cross-region replication in each region, search for "cross-region replication" on the Function Overview page.

Cross-Region Replication Across Accounts

You can use the OMS console or APIs to migrate data across accounts and regions, or use obsutil to replicate data across accounts and regions.

OMS does not migrate data automatically. That is, data changes in the source bucket are not automatically synchronized to the destination bucket. After data in the source bucket changes, you need to run the migration task again to synchronize the incremental data to the destination bucket.