Updated on 2023-03-30 GMT+08:00

Parallel Data Import

Principles

Importing data in parallel on multiple nodes fully uses the computing and I/O capabilities of the nodes to maximize speed. The parallel data import function of GaussDB(DWS) implements high-speed and parallel import of external data in a specified format (CSV or TEXT).

Parallel data import is more efficient than the traditional approach of inserting data with INSERT statements, for the following reasons:
  • The CN only plans and delivers data import tasks; the DNs execute them. This reduces CN resource usage and leaves the CN free to process external requests.
  • The computing capability and network bandwidth of all the DNs are fully utilized, improving data import performance.
The following uses the Hash distribution policy as an example to describe the GaussDB(DWS) data import process. Figure 1 shows the parallel data import process.
Figure 1 Parallel data import
Table 1 Procedure description


Creating a table that complies with the Hash distribution policy

When executing the CREATE TABLE statement, the service application presets the Hash distribution policy by specifying a table column as the distribution key.
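For example, the distribution key can be declared when the table is created. The table and column names below are illustrative, not part of the product:

```sql
-- Hash-distribute rows across DNs by the sale_id column.
CREATE TABLE sales
(
    sale_id   INTEGER,
    region    CHAR(2),
    sale_date DATE,
    amount    NUMERIC(10,2)
)
DISTRIBUTE BY HASH (sale_id);
```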

Setting the partitioning policy

When executing the CREATE TABLE statement, the service application presets a partitioning rule by specifying a table column as the partition key. On each DN, the hash-distributed data is further divided into partitions according to this rule.
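Both policies can be combined in one statement, as in this sketch (partition names and boundary values are examples only):

```sql
-- Hash-distribute by sale_id; range-partition each DN's data by sale_date.
CREATE TABLE sales_part
(
    sale_id   INTEGER,
    region    CHAR(2),
    sale_date DATE,
    amount    NUMERIC(10,2)
)
DISTRIBUTE BY HASH (sale_id)
PARTITION BY RANGE (sale_date)
(
    PARTITION p2023q1 VALUES LESS THAN ('2023-04-01'),
    PARTITION p2023q2 VALUES LESS THAN ('2023-07-01')
);
```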

Splitting the data file

During data import, GDS splits the specified data file into data blocks of a fixed size.

Downloading data blocks in parallel

DNs download these data blocks from GDS in parallel.
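The GDS data source that the DNs read from is typically declared as a foreign table. The sketch below assumes a GDS process serving CSV files; the server address, file pattern, and error table name are illustrative:

```sql
-- GDS-backed foreign table: each DN pulls data blocks from the GDS
-- process listening at the LOCATION address.
CREATE FOREIGN TABLE foreign_sales
(
    sale_id   INTEGER,
    region    CHAR(2),
    sale_date DATE,
    amount    NUMERIC(10,2)
)
SERVER gsmpp_server
OPTIONS
(
    LOCATION  'gsfs://192.168.0.90:5000/sales.*',
    FORMAT    'csv',
    DELIMITER ',',
    ENCODING  'utf8'
)
WITH err_sales PER NODE REJECT LIMIT 'unlimited';
```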

Distributing data based on Hash values

Each DN parses the downloaded data blocks into tuples in parallel. The target DN for each tuple is determined by the Hash value computed from its distribution column.

  • If the Hash value maps a tuple to a remote node, the tuple is redistributed over the network to its target DN.
  • If the Hash value maps a tuple to the local node, the tuple is stored on the local DN.

Writing data into partitions

After a tuple arrives at its target DN (as determined by the Hash value), it is written into the corresponding partition data file according to the partitioning rule.

When data is written into a partitioned table in GaussDB(DWS), you can use partition exchange to improve write performance.
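Under the assumptions above, the import itself is a single INSERT from the foreign table; a staging table can then be swapped in via partition exchange. The statements below are a sketch (sales_part, foreign_sales, and sales_stage are hypothetical; sales_stage is assumed to be an ordinary table with the same definition as one partition):

```sql
-- Trigger the parallel import: the CN plans the statement,
-- and the DNs pull data blocks from GDS in parallel.
INSERT INTO sales_part SELECT * FROM foreign_sales;

-- Optionally, load one quarter into a staging table first and
-- swap it in, avoiding direct writes to the live partition.
ALTER TABLE sales_part EXCHANGE PARTITION (p2023q1)
    WITH TABLE sales_stage;
```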

GDS (General Data Service): Multiple GDS processes can be deployed on a data server to improve import performance.
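When several GDS processes serve the same file set, they can be listed together in the foreign table's LOCATION so that DNs spread their block downloads across all of them. The pipe-separated URL list and addresses below are a sketch:

```sql
-- Two GDS endpoints feeding one foreign table.
CREATE FOREIGN TABLE foreign_sales_multi
(
    sale_id   INTEGER,
    region    CHAR(2),
    sale_date DATE,
    amount    NUMERIC(10,2)
)
SERVER gsmpp_server
OPTIONS
(
    LOCATION 'gsfs://192.168.0.90:5000/sales.* | gsfs://192.168.0.91:5000/sales.*',
    FORMAT   'csv'
);
```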