
Cutting Costs by Switching Between Cold and Hot Data Storage in GaussDB(DWS)

Updated on 2024-11-08 GMT+08:00

Scenarios

In big data scenarios, data volumes, storage, and consumption grow rapidly, while the demand for a given data set often varies over time. Managing data in tiers therefore improves analysis performance and reduces service costs. In such scenarios, data can be classified as hot or cold based on how frequently it is accessed.

Hot and cold data is classified based on the data access frequency and update frequency.

  • Hot data: Data that is frequently accessed and updated and requires a fast response.
  • Cold data: Data that is rarely updated or accessed and does not require a fast response.

You can define hot and cold management tables so that cold data meeting the specified rules is switched to OBS for storage. Hot and cold data can be identified and migrated automatically, on a per-partition basis.

Figure 1 Hot and cold data management

When data is inserted into GaussDB(DWS) column-store tables, it is first stored in hot partitions. As data accumulates, you can migrate cold data to OBS manually or automatically. The metadata, description tables, and indexes of migrated cold data remain stored locally to preserve read performance.

Hot and cold partitions can be switched based on either the LMT (Last Modified Time) policy or the HPN (Hot Partition Number) policy. LMT switches partitions based on the time a partition was last updated; HPN switches them based on the number of hot partitions to retain.

  • LMT: Switches hot partition data that has not been updated in the last [day] days to the OBS tablespace as cold partition data. [day] is an integer ranging from 0 to 36500, in days.

    For example, if day is set to 2, partitions modified within the last two days are retained as hot partitions and the rest become cold partitions. Assume the current date is April 30, a delete was performed on partition [4-26] on April 30, and an insert was performed on partition [4-27] on April 29. Partitions [4-26], [4-27], [4-29], and [4-30] are then retained as hot partitions.

  • HPN: Specifies the number of hot partitions to retain. Partitions are ordered by their sequence IDs, which are built-in sequence numbers generated from partition boundary values and are not displayed. For a range partition, a larger boundary value means a larger sequence ID; for a list partition, a larger maximum enumerated boundary value means a larger sequence ID. HPN is an integer ranging from 0 to 1600. If HPN is set to 0, no hot partitions are retained: during a hot/cold switchover, all partitions containing data are converted to cold partitions and their data is migrated to OBS.

    For example, if HPN is set to 3, the last three partitions containing data are retained as hot partitions and the rest become cold partitions during the switchover.
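As a sketch of the HPN policy, and assuming it uses the same storage_policy syntax as the LMT example later in this practice (table and column names here are hypothetical), a table that always keeps its three newest non-empty partitions hot could be created like this:

```sql
-- Hypothetical example: retain the 3 most recent partitions containing data
-- as hot partitions; older partitions move to OBS on the next switchover.
CREATE TABLE sales_by_day (sale_day int, amount numeric)
WITH (ORIENTATION = COLUMN, storage_policy = 'HPN:3')
PARTITION BY RANGE (sale_day)
(
    PARTITION p1 VALUES LESS THAN (10),
    PARTITION p2 VALUES LESS THAN (20),
    PARTITION p3 VALUES LESS THAN (30),
    PARTITION p4 VALUES LESS THAN (MAXVALUE)
)
ENABLE ROW MOVEMENT;
```

Verify the HPN syntax against the CREATE TABLE reference for your cluster version before using it.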

Constraints

  • Supports DML operations on cold and hot tables, such as INSERT, COPY, DELETE, UPDATE, and SELECT.
  • Supports DCL operations such as permission management on cold and hot tables.
  • Supports ANALYZE, VACUUM, MERGE INTO, and PARTITION operations on cold and hot tables.
  • Supports upgrading common column-store partitioned tables to hot and cold data tables.
  • Supports upgrade, scale-out, scale-in, and redistribution operations on tables with cold and hot data management enabled.
  • 8.3.0 and later versions support mutual conversion between cold and hot partitions. Versions earlier than 8.3.0 support only conversion from hot data to cold data.
  • If a table has both cold and hot partitions, queries may become slower because cold data is stored on OBS, where read/write speeds are lower than those of local storage.
  • Currently, cold and hot tables support only column-store partitioned tables of version 2.0. Foreign tables do not support cold and hot partitions.
  • Only the cold and hot switchover policies can be modified. The tablespace of cold data in cold and hot tables cannot be modified.
  • Restrictions on partitioning cold and hot tables:
    • Data in cold partitions cannot be exchanged.
    • MERGE PARTITION supports only the merge of hot-hot partitions and cold-cold partitions.
    • Partition operations, such as ADD, MERGE, and SPLIT, cannot be performed on an OBS tablespace.
    • Tablespaces of cold and hot table partitions cannot be specified or modified during table creation.
  • Hot and cold data switchover is not performed immediately when the switchover conditions are met. It occurs only after users invoke the switchover command, either manually or through a scheduler. Currently, the automatic scheduling time is 00:00 every day and can be modified.
  • Hot and cold data tables do not support physical fine-grained backup and restoration. Only hot data is backed up during a physical backup; cold data on OBS does not change. Backup and restoration do not support file deletion statements such as TRUNCATE TABLE and DROP TABLE.
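As the constraints above note, ordinary DML continues to work on hot and cold tables regardless of where a partition's data resides. A minimal sketch (hypothetical values, using the lifecycle_table defined later in this practice):

```sql
-- DML works the same whether the touched partition is hot or cold;
-- reads from cold partitions simply go through OBS.
UPDATE lifecycle_table SET val = 'updated' WHERE i = 3;
DELETE FROM lifecycle_table WHERE i = 20;
SELECT count(*) FROM lifecycle_table;
```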

Creating a Cluster

  1. Log in to the Huawei Cloud management console.
  2. Choose Service List > Analytics > Data Warehouse Service. On the page that is displayed, click Create Cluster in the upper right corner.
  3. Configure the parameters according to Table 1.

    Table 1 Software configuration

    Parameter               Configuration
    ----------------------  --------------------------------------------------
    Region                  CN-Hong Kong
                            NOTE: CN-Hong Kong is used as an example. You can
                            select another region as required, but perform all
                            operations in the same region.
    AZ                      AZ2
    Product                 Standard data warehouse
    CPU Architecture        x86
    Node Flavor             dws2.m6.4xlarge.8 (16 vCPUs | 128 GB | 2000 GB SSD)
                            NOTE: If this flavor is sold out, select another AZ
                            or flavor.
    Nodes                   3
    Cluster Name            dws-demo
    Administrator Account   dbadmin
    Administrator Password  N/A
    Confirm Password        N/A
    Database Port           8000
    VPC                     vpc-default
    Subnet                  subnet-default(192.168.0.0/24)
    Security Group          Automatic creation
    EIP                     Buy now
    Bandwidth               1 Mbit/s
    Advanced Settings       Default

  4. Confirm the information, click Next, and then click Submit.
  5. Wait about 6 minutes. After the cluster is created, click the cluster name. On the cluster information page that is displayed, record the value of Public Network Address, for example, dws-demov.dws.huaweicloud.com.

Using the gsql CLI Client to Connect to a Cluster

  1. Remotely log in to the Linux server where gsql is to be installed as user root, and run the following command in the Linux command window to download the gsql client:

    wget https://obs.ap-southeast-1.myhuaweicloud.com/dws/download/dws_client_8.1.x_redhat_x64.zip --no-check-certificate
    

  2. Decompress the client.

    cd <Path_for_storing_the_client>
    unzip dws_client_8.1.x_redhat_x64.zip
    

    Where,

    • <Path_for_storing_the_client>: Replace it with the actual path.
    • dws_client_8.1.x_redhat_x64.zip: This is the client tool package name for RedHat x64. Replace it with the actual name.

  3. Configure the GaussDB(DWS) client.

    source gsql_env.sh
    

    If the following information is displayed, the gsql client is successfully configured:

    All things done.
    

  4. Use the gsql client to connect to a GaussDB(DWS) database (using the password you defined when creating the cluster).

    gsql -d gaussdb -p 8000 -h 192.168.0.86 -U dbadmin -W password -r
    

    If the following information is displayed, the connection succeeded:

    gaussdb=>
    

Creating Hot and Cold Tables

Create a column-store table lifecycle_table with hot and cold data management enabled, and set the hot data validity period (LMT) to 100 days.
CREATE TABLE lifecycle_table(i int, val text) WITH (ORIENTATION = COLUMN, storage_policy = 'LMT:100')
PARTITION BY RANGE (i)
(
PARTITION P1 VALUES LESS THAN(5),
PARTITION P2 VALUES LESS THAN(10),
PARTITION P3 VALUES LESS THAN(15),
PARTITION P8 VALUES LESS THAN(MAXVALUE)
)
ENABLE ROW MOVEMENT;
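To see the policy in action, you can load a few sample rows (values here are hypothetical, chosen only for illustration). All of them initially land in hot partitions:

```sql
-- Each row lands in the hot partition whose boundary covers its value of i
-- (P1: i < 5, P2: i < 10, P3: i < 15, P8: the rest).
INSERT INTO lifecycle_table VALUES (3, 'apple'), (7, 'pear'), (12, 'plum'), (20, 'fig');
```

After 100 days without updates, a partition's data becomes eligible to move to OBS at the next switchover.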

Hot and Cold Data Switchover

Switch hot partition data to cold partition data.
  • Automatic switchover: The scheduler automatically triggers the switchover at 00:00 every day.

    You can use the pg_obs_cold_refresh_time(table_name, time) function to customize the automatic switchover time. For example, set the automatic triggering time to 06:30 every morning.

    SELECT * FROM pg_obs_cold_refresh_time('lifecycle_table', '06:30:00');
    pg_obs_cold_refresh_time
    --------------------------
     SUCCESS
    (1 row)
    
  • Manual switchover

    Run the ALTER TABLE statement to manually switch a single table.

    ALTER TABLE lifecycle_table refresh storage;
    ALTER TABLE
    

    Use the pg_refresh_storage() function to switch all hot and cold tables in batches.

    SELECT pg_catalog.pg_refresh_storage();
     pg_refresh_storage
    --------------------
     (1,0)
    (1 row)
    

Convert cold partition data into hot partition data. This function is supported only in 8.3.0 or later.

  • Convert all cold partitions to hot partitions.
    SELECT pg_catalog.reload_cold_partition('lifecycle_table');
    
  • Convert a specified cold partition to a hot partition:
    SELECT pg_catalog.reload_cold_partition('lifecycle_table', 'cold_partition_name');
    

Viewing Data Distribution in Hot and Cold Tables

  • View the data distribution in a single table:
    SELECT * FROM pg_catalog.pg_lifecycle_table_data_distribute('lifecycle_table');
    schemaname |    tablename    |   nodename   | hotpartition | coldpartition | switchablepartition | hotdatasize | colddatasize | switchabledatasize
    ------------+-----------------+--------------+--------------+---------------+---------------------+-------------+--------------+--------------------
     public     | lifecycle_table | dn_6001_6002 | p1,p2,p3,p8  |               |                     | 96 KB       | 0 bytes      | 0 bytes
     public     | lifecycle_table | dn_6003_6004 | p1,p2,p3,p8  |               |                     | 96 KB       | 0 bytes      | 0 bytes
     public     | lifecycle_table | dn_6005_6006 | p1,p2,p3,p8  |               |                     | 96 KB       | 0 bytes      | 0 bytes
    (3 rows)
    
  • View data distribution in all hot and cold tables:
    SELECT * FROM pg_catalog.pg_lifecycle_node_data_distribute();
    schemaname |    tablename    |   nodename   | hotpartition | coldpartition | switchablepartition | hotdatasize | colddatasize | switchabledatasize
    ------------+-----------------+--------------+--------------+---------------+---------------------+-------------+--------------+--------------------
     public     | lifecycle_table | dn_6001_6002 | p1,p2,p3,p8  |               |                     |       98304 |            0 |                  0
     public     | lifecycle_table | dn_6003_6004 | p1,p2,p3,p8  |               |                     |       98304 |            0 |                  0
     public     | lifecycle_table | dn_6005_6006 | p1,p2,p3,p8  |               |                     |       98304 |            0 |                  0
    (3 rows)
    
