Relational Database Service

Updated at: Apr 02, 2022 GMT+08:00

Instance Usage Specifications

Database Connection

RDS for PostgreSQL uses a process-based architecture, creating a dedicated backend process for each client connection.

  • Set max_connections based on the type of your application. The values suggested by pgtune can be used as a reference:
    • Set max_connections to 200 for web applications.
    • Set max_connections to 300 for OLTP applications.
    • Set max_connections to 40 for data warehouses.
    • Set max_connections to 20 for desktop applications.
    • Set max_connections to 100 for hybrid applications.
  • Limit the maximum number of connections allowed for a single user based on workload requirements.
  • Set the number of active connections to two to three times the number of vCPUs.
  • Avoid long transactions, which may block autovacuum and affect database performance.
  • Release persistent connections periodically. Long-lived connections can accumulate cached metadata over time and cause high memory consumption.
  • Check the application framework to prevent the application from automatically starting transactions without performing any operations.
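As a sketch of the connection guidelines above, the per-user cap and the active-connection count can be managed from SQL (the role name app_user is a placeholder, not part of this document):

```sql
-- Cap the number of concurrent connections for one application role
-- (app_user is a hypothetical role name).
ALTER ROLE app_user CONNECTION LIMIT 100;

-- Compare the current number of active backends against the
-- two-to-three-times-vCPUs guideline.
SELECT count(*) AS active_connections
FROM pg_stat_activity
WHERE state = 'active';
```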

Read Replicas

  • Avoid long transactions, which may cause query conflicts and affect WAL replay.
  • Configure hot_standby_feedback for instances requiring real-time data and set max_standby_streaming_delay to a proper value.
  • Monitor long transactions, long connections, and replication delay and handle problems in a timely manner.
  • Ensure that applications connected to a read replica can be switched to other nodes because read replicas are single-node instances incapable of providing high availability.
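For reference, replication delay and long transactions can be observed with queries such as the following (hot_standby_feedback and max_standby_streaming_delay themselves are changed through the instance's parameter settings, not in SQL; the 5-minute threshold is illustrative):

```sql
-- On the read replica: approximate replay delay.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;

-- On the primary: transactions that have been open longer than 5 minutes.
SELECT pid, now() - xact_start AS duration, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes';
```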

Reliability and Availability

  • Select primary/standby DB instances for production databases.
  • Keep storage usage less than 70% for production databases to prevent problems caused by full storage.
  • Deploy primary and standby instances in different AZs to improve availability.
  • Set the time window for automated backup to off-peak hours. Do not disable full backup.
  • Set asynchronous replication between primary and standby DB instances to prevent workloads on the primary instance from being blocked due to a fault on the standby instance.
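As a quick check, the replication mode between the primary and standby can be confirmed on the primary:

```sql
-- sync_state is 'async' when replication is asynchronous.
SELECT application_name, state, sync_state
FROM pg_stat_replication;
```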


Database Usage

  • Commit or roll back two-phase (prepared) transactions in a timely manner to prevent database bloat.
  • Delete the replication slots that are no longer used for logical replication to prevent database bloat.
  • Make table structure changes, such as adding columns or indexes, during off-peak hours.
  • If you must create an index during peak hours, use the CONCURRENTLY syntax to avoid blocking DML operations on the table.
  • Before modifying a table's structure during peak hours, run a verification test to confirm that the change does not trigger a full table rewrite.
  • Configure a lock wait timeout for DDL operations so that they do not block other operations on the affected tables.
  • Partition your database if its capacity exceeds 2 TB.
  • If a frequently accessed table contains more than 20 million records or its size exceeds 10 GB, split the table or create partitions.
  • Ensure that the number of tables in a single instance does not exceed 20,000, and the number of tables in a single database does not exceed 4,000.
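A minimal illustration of the DDL guidelines above, with hypothetical table and index names:

```sql
-- Fail fast instead of queueing behind other locks during DDL.
SET lock_timeout = '5s';

-- Build an index without taking a lock that blocks DML on the table
-- (orders and idx_orders_created_at are placeholder names).
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);

-- List prepared (two-phase) transactions still awaiting COMMIT/ROLLBACK PREPARED.
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
```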

Routine O&M

  • Periodically download and view slow query logs on the Logs page to find out and resolve performance problems in a timely manner.
  • Periodically check the resource usage of your database. If the resources are insufficient, scale up your instance specifications in a timely manner.
  • Monitor the transaction ID age of your database. If the age grows too large, handle it in a timely manner to prevent the database from becoming unavailable due to transaction ID wraparound.
  • Run a SELECT statement with the same conditions before deleting or updating records, to verify which rows will be affected.
  • After a large amount of data is deleted or updated in a table, run VACUUM on the table.
  • Pay attention to the number of available replication slots and ensure that at least one replication slot is available for database backup.
  • Clear the replication slots that are no longer used to prevent the replication slots from blocking log reclaiming.
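The wraparound and replication-slot checks above can be sketched as queries (the slot name in the last statement is hypothetical):

```sql
-- Transaction ID age per database; values approaching 2^31 risk wraparound.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Inactive replication slots that may be blocking WAL (log) reclamation.
SELECT slot_name, slot_type, active
FROM pg_replication_slots
WHERE NOT active;

-- Drop a slot that is no longer needed (slot name is a placeholder).
SELECT pg_drop_replication_slot('unused_slot');
```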


Security

  • Prevent your database from being accessed from the Internet. If you want to allow access from the Internet, bind an EIP to your DB instance and configure a whitelist.
  • Use SSL to connect to your DB instance.
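Whether the current session actually uses SSL can be verified from within the connection:

```sql
-- ssl is true when the current connection is encrypted.
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```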
