Updated on 2023-08-23 GMT+08:00

Getting Started with Common Practices

After creating a cluster, you can use the practice cases provided by GaussDB(DWS) for your services.

Table 1 Best practices



Data Import and Export

Importing Data from OBS to a Cluster

This practice demonstrates how to upload sample data to OBS and import OBS data to the target table on GaussDB(DWS), helping you quickly learn how to import data from OBS to a GaussDB(DWS) cluster.

You can import data in TXT, CSV, ORC, PARQUET, CARBONDATA, or JSON format from OBS to a GaussDB(DWS) cluster for query.
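A minimal sketch of this import path, assuming a CSV file already uploaded to OBS; the bucket path, table columns, and access credentials below are placeholders:

```sql
-- Read CSV data from OBS through a read-only foreign table,
-- then load it into a local table for querying.
CREATE FOREIGN TABLE product_info_ext (
    product_id    integer,
    product_name  varchar(100),
    price         numeric(10,2)
)
SERVER gsmpp_server
OPTIONS (
    location 'obs://demo-bucket/input_data/product_info.csv',  -- placeholder bucket/path
    format 'csv',
    encoding 'utf8',
    delimiter ',',
    access_key '<your-ak>',          -- placeholder credentials
    secret_access_key '<your-sk>'
)
READ ONLY;

-- Copy the external data into a regular table.
INSERT INTO product_info SELECT * FROM product_info_ext;
```

The foreign table is only a gateway to the OBS data; querying it repeatedly reads OBS each time, which is why the data is usually copied into a local table first.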

Using GDS to Import Data from a Remote Server

This practice demonstrates how to use General Data Service (GDS) to import data from a remote server to GaussDB(DWS).

GaussDB(DWS) allows you to import data in TXT, CSV, or FIXED format.
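A sketch of the GDS import flow, assuming GDS is already started on the remote server and serving the data directory; the IP address, port, and table definition are placeholders:

```sql
-- GDS foreign table pointing at the gsfs:// location served by the
-- GDS process on the remote server (address and path are placeholders).
CREATE FOREIGN TABLE foreign_tpcds_reasons (
    r_reason_sk   integer,
    r_reason_id   char(16),
    r_reason_desc char(100)
)
SERVER gsmpp_server
OPTIONS (
    location 'gsfs://192.168.0.90:5000/input_data/*',
    format 'csv',
    delimiter ','
)
READ ONLY;

-- Pull the data into a local table; all DNs read from GDS in parallel.
INSERT INTO reasons SELECT * FROM foreign_tpcds_reasons;
```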

Importing Remote GaussDB(DWS) Data Sources

GaussDB(DWS) clusters in the same region can communicate with each other, which makes convergent analysis across clusters possible. This practice demonstrates how to import data from a remote GaussDB(DWS) cluster to the local GaussDB(DWS) cluster using foreign tables.

The demonstration procedure is as follows: install the gsql database client on an ECS, connect to GaussDB(DWS) using gsql, and import data from the remote GaussDB(DWS) cluster through a foreign table.
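The cross-cluster step can be sketched with the GC_FDW foreign data wrapper; the remote address, database name, and credentials below are placeholders:

```sql
-- Register the remote GaussDB(DWS) cluster as a server
-- (address, dbname, and credentials are placeholders).
CREATE SERVER server_remote FOREIGN DATA WRAPPER GC_FDW
OPTIONS (
    address  '192.168.178.207:8000',
    dbname   'gaussdb',
    username 'dbadmin',
    password '<password>'
);

-- Map a remote table into the local cluster.
CREATE FOREIGN TABLE region_ft (
    r_regionkey integer,
    r_name      char(25)
)
SERVER server_remote
OPTIONS (schema_name 'public', table_name 'region');

-- Import the remote data into a local table.
INSERT INTO region SELECT * FROM region_ft;
```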

Exporting ORC Data to MRS

GaussDB(DWS) allows you to export ORC data to MRS using an HDFS foreign table. You can specify the export mode and export data format in the foreign table. Data is exported from GaussDB(DWS) in parallel by multiple DNs and stored in HDFS, improving overall export performance.
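A sketch of the export path, assuming an MRS HDFS cluster is reachable from the DWS cluster; the NameNode addresses, configuration path, and export folder are placeholders:

```sql
-- Register the MRS HDFS cluster (addresses and config path are placeholders).
CREATE SERVER hdfs_server FOREIGN DATA WRAPPER HDFS_FDW
OPTIONS (
    address '192.168.1.245:25000,192.168.1.218:25000',
    hdfscfgpath '/opt/hadoop_client',
    type 'hdfs'
);

-- Write-only foreign table: rows inserted into it are written
-- to HDFS as ORC files by the DNs in parallel.
CREATE FOREIGN TABLE product_info_orc_ext (
    product_id    integer,
    product_name  varchar(100)
)
SERVER hdfs_server
OPTIONS (
    format 'orc',
    foldername '/user/hive/warehouse/product_info_orc/'  -- placeholder export path
)
WRITE ONLY;

-- Export the local table to MRS.
INSERT INTO product_info_orc_ext SELECT * FROM product_info;
```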

Data Migration

Migrating Data From Oracle to GaussDB(DWS)

This tutorial demonstrates how to migrate Oracle data to GaussDB(DWS).

Synchronizing MySQL Table Data to GaussDB(DWS) in Real Time

This practice demonstrates how to use Data Replication Service (DRS) to synchronize MySQL data to GaussDB(DWS) in real time.

Using DLI Flink Jobs to Write Kafka Data to DWS in Real Time

This practice demonstrates how to use DLI Flink jobs to synchronize consumption data from Kafka to GaussDB(DWS) in real time.

This practice takes about 90 minutes. The cloud services used in this practice include Virtual Private Cloud (VPC) and subnets, Elastic Cloud Server (ECS), Object Storage Service (OBS), Distributed Message Service (DMS) for Kafka, Data Lake Insight (DLI), and Data Warehouse Service (DWS).

Table Optimization

Table Optimization Practices

In this practice, you will learn how to optimize the design of your tables. You will start by creating tables without specifying their storage mode, distribution key, distribution mode, or compression mode, then load test data and test system performance. Next, following best practices, you will create the tables again with new storage modes, distribution keys, distribution modes, and compression modes, reload the test data, and test performance once more. Compare the two test results to see how table design affects storage space, loading performance, and query performance.

Estimated time: 60 minutes
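The "optimized" side of that comparison can be sketched as a single CREATE TABLE statement; the table and column names are illustrative:

```sql
-- Explicitly choose column storage, a compression level, and a
-- hash distribution key instead of accepting the defaults.
CREATE TABLE lineitem_optimized (
    l_orderkey  bigint,
    l_quantity  numeric(15,2),
    l_shipdate  date
)
WITH (orientation = column, compression = middle)
DISTRIBUTE BY HASH (l_orderkey);
```

Choosing a high-cardinality, evenly distributed column such as an order key as the distribution key spreads rows evenly across DNs, which is what the before/after performance comparison in this practice measures.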

Advanced Features

Best Practices of Hot and Cold Data Management

In massive big data scenarios, data volumes, storage usage, and consumption grow rapidly, while the need to access a given piece of data varies over time. Managing data in tiers therefore improves data analysis performance and reduces service costs. In many scenarios, data can be classified as hot or cold by access frequency.
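A sketch of declaring a hot/cold policy on a column-store partitioned table; the storage_policy syntax and the pg_refresh_storage() call are assumptions based on the DWS hot/cold data feature and should be verified against your cluster version, and the table definition is illustrative:

```sql
-- LMT:30 is assumed to mean: partitions not modified in the last
-- 30 days are switched to cold storage.
CREATE TABLE sensor_log (
    log_time  timestamp,
    device_id integer,
    reading   numeric
)
WITH (orientation = column, storage_policy = 'LMT:30')
DISTRIBUTE BY HASH (device_id)
PARTITION BY RANGE (log_time)
(
    PARTITION p2023 VALUES LESS THAN ('2024-01-01 00:00:00')
);

-- Assumed maintenance call to apply the policy and move
-- qualifying partitions to cold storage.
SELECT pg_catalog.pg_refresh_storage();
```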

Best Practices for Automatic Partition Management

To facilitate data query and maintenance, the time column is often used as the partition column of a partitioned table that stores time-related data, such as e-commerce order information and real-time IoT data. When such data is imported, the table must already have partitions covering the corresponding time ranges. Ordinary partitioned tables neither create new partitions nor delete expired partitions automatically, so maintenance personnel must do both periodically, increasing O&M costs. For partitioned tables whose partition column is a time column, automatic partition management removes this manual work.

To address this, GaussDB(DWS) provides the automatic partition management feature. You can set the table-level parameters period and ttl to enable it, so that new partitions are created and expired partitions are deleted automatically, reducing partitioned table maintenance costs and improving query performance.
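A sketch of a table using the period and ttl parameters named above; the interval values and table definition are illustrative:

```sql
-- period: a new partition is created automatically for each 1-day window.
-- ttl: partitions older than 7 days are dropped automatically.
CREATE TABLE device_metrics (
    record_time timestamp,
    device_id   integer,
    metric      numeric
)
WITH (ttl = '7 days', period = '1 day')
PARTITION BY RANGE (record_time)
(
    PARTITION p_init VALUES LESS THAN ('2024-01-01 00:00:00')
);
```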

Database Management

Best Practices of Resource Management

This practice demonstrates how to use GaussDB(DWS) resource management, helping enterprises eliminate bottlenecks in concurrent query performance. SQL jobs can run smoothly without affecting each other, while consuming fewer resources than before.
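The core mechanism can be sketched as a resource pool bound to a user; the pool name, limits, and user name are illustrative:

```sql
-- Cap the pool at 10 concurrent statements and 20% of cluster memory.
CREATE RESOURCE POOL report_pool
WITH (active_statements = 10, mem_percent = 20);

-- Bind a user to the pool: all of that user's SQL jobs are then
-- governed by the pool's limits.
ALTER USER report_user RESOURCE POOL 'report_pool';
```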

Excellent Practices for SQL Queries

Drawing on extensive experience with SQL execution mechanisms and real-world practice, we can optimize SQL statements by following certain rules so that they execute faster and return correct results.

Excellent Practices for Data Skew Queries

This practice includes the following storage skew cases:

  • Real-Time Detection of Storage Skew During Data Import
  • Quickly Locating the Tables That Cause Data Skew
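The two cases above can be sketched with system views and functions; the object names here are assumed from the DWS system catalog and should be verified against your cluster version:

```sql
-- Cluster-wide overview: tables ranked by how unevenly their
-- rows are spread across DNs.
SELECT schemaname, tablename, totalsize, skewratio
FROM pgxc_get_table_skewness
ORDER BY skewratio DESC
LIMIT 10;

-- Drill-down: row distribution of one table across DNs
-- (table name is a placeholder).
SELECT table_skewness('store_sales');
```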

Best Practices of User Management

A GaussDB(DWS) cluster mainly involves two types of users: system administrators and common users. This practice describes the permissions of each type and how to create users and query user information.
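A minimal sketch of creating a common user and inspecting user information; the user name, password, and schema are placeholders:

```sql
-- Create a common user (password is a placeholder).
CREATE USER analyst_u PASSWORD '<password>';

-- Grant access to existing tables in a schema.
GRANT USAGE ON SCHEMA public TO analyst_u;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO analyst_u;

-- Query user information from the system catalog;
-- usesuper distinguishes administrators from common users.
SELECT usename, usesuper, usecreatedb FROM pg_user;
```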

Viewing Table and Database Information

This practice demonstrates some basic database query cases:

  • Querying Table Information
  • Querying the Table Size
  • Querying Database Information
  • Querying the Database Size
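The four cases above map to standard catalog queries; the table and database names below are placeholders:

```sql
-- Table information (owner, indexes, etc.).
SELECT * FROM pg_tables WHERE tablename = 'customer_t1';

-- Table size including indexes, pretty-printed.
SELECT pg_size_pretty(pg_total_relation_size('customer_t1'));

-- Databases in the cluster.
SELECT datname FROM pg_database;

-- Size of one database.
SELECT pg_size_pretty(pg_database_size('gaussdb'));
```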

Sample Data Analysis

Checkpoint Vehicle Analysis

This practice demonstrates the analysis of vehicles passing traffic checkpoints. In this practice, 890 million checkpoint records are loaded into a single database table on GaussDB(DWS) for exact and fuzzy queries. It is an example of high-performance querying of historical data on GaussDB(DWS).

Supply Chain Requirement Analysis (TPC-H Data Set)

This practice describes how to load the sample data set from OBS to a data warehouse cluster and perform data queries. This example comprises multi-table analysis and theme analysis in the data analysis scenario.

Operations Status Analysis of a Retail Department Store

In this practice, the daily business data of each retail store is loaded from OBS to the corresponding table in the data warehouse cluster for summarizing and querying KPIs. This data includes store turnover, customer flow, monthly sales ranking, monthly customer flow conversion rate, monthly price-rent ratio, and sales per unit area. This example demonstrates the multidimensional query and analysis of GaussDB(DWS) in the retail scenario.

Data Security

Encrypting and Decrypting Data Columns

Data encryption is widely used in information systems to prevent unauthorized access and data leakage. As the core of the information system, the GaussDB(DWS) data warehouse also provides data encryption functions, including transparent encryption and encryption using SQL functions. This section describes SQL function encryption.
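The SQL-function approach can be sketched with the AES-128 encryption functions provided by GaussDB(DWS); the plaintext and key below are placeholders:

```sql
-- Encrypt a column value with a key (both are placeholders here);
-- the result is ciphertext that can be stored in the table.
SELECT gs_encrypt_aes128('credit_card_no', 'key-1234');

-- Decrypting with the same key recovers the original plaintext.
SELECT gs_decrypt_aes128(
    gs_encrypt_aes128('credit_card_no', 'key-1234'),
    'key-1234'
);
```

Because the key is passed as a function argument, it should be managed outside the SQL text in production, for example through application-side configuration.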