
Table-level Import and Export

Many tools are available for table-level import and export. You can select an appropriate tool based on the following scenarios:

  1. If you need to export the definition and data of a single table to the same file, you are advised to use gs_dump with the plain-text format (-F p) and the -t parameter. You can specify the -t parameter multiple times to back up multiple tables. For details about how to use the gs_dump tool, see "Data Import and Export Tools > gs_dump for Exporting Database Information" in Tool Reference.

    If the exported table depends on objects that are not exported, an error message may be displayed indicating that the dependent objects are missing when the table is imported. Therefore, ensure that the dependent objects have been created before importing the table.

    You are advised to run the following command as the initial user or a user with the SYSADMIN permission to back up the data. The source database is my_database, and the table to be exported is my_table in my_schema.
    nohup gs_dump my_database -U root -W ******** -p 8000 -F p -f /data/backup/my_table_backup.sql -t my_schema.my_table > /data/backup/my_table_backup.log &
    Before restoration, create a target database with the same attributes as the source database, and ensure that the target schema exists in the database and that no target table exists (a preparation sketch is shown after the restore command). Then, run the following command as the initial user or a user with the SYSADMIN permission to restore the table:
    nohup gsql -d my_database2 -p 8000 -U root -W ******** -f /data/backup/my_table_backup.sql -a > /data/backup/my_table_restore.log &
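    If the target database or schema does not exist yet, a minimal preparation sketch is as follows (the names follow the example above, and the source database is assumed to use default attributes; otherwise, specify matching attributes in CREATE DATABASE):
    gaussdb=# CREATE DATABASE my_database2;
    gaussdb=# \c my_database2
    gaussdb=# CREATE SCHEMA my_schema;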
  2. If you need to export only the definition of a single table, without its data.

    If the exported table depends on objects that are not exported, an error message may be displayed indicating that the dependent objects are missing when the table is imported. Therefore, ensure that the dependent objects have been created before importing the table.

    You are advised to use gs_dump with the plain-text format (-F p) together with the -s parameter (a quick check of the exported file is sketched at the end of this scenario). The command is as follows:
    nohup gs_dump my_database -U root -W ******** -p 8000 -F p -f /data/backup/my_table_backup.sql -t my_schema.my_table -s > /data/backup/my_table_backup.log &
    Before restoration, create a target database with the same attributes as the source database, and ensure that the target schema exists in the database and that no target table exists (see the preparation sketch in scenario 1). Then, run the following command as the initial user or a user with the SYSADMIN permission to restore the table definition:
    nohup gsql -d my_database2 -p 8000 -U root -W ******** -f /data/backup/my_table_backup.sql -a > /data/backup/my_table_restore.log &
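    To spot-check that the exported file contains only the definition, you can search it for data sections. This sketch assumes that, as in other plain-text dumps, table data would appear as COPY statements:
    # Expect 0 matches for a definition-only dump.
    grep -c "^COPY " /data/backup/my_table_backup.sql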
  3. If you are familiar with Oracle SQL*Loader (sqlldr) or need to save data import logs (such as the import result, discarded data, and error data) on the client, you can use gs_loader.

    gs_loader supports importing data files in CSV, TEXT, and FIXED formats. CSV is recommended for data import. For details about how to use the gs_loader tool, see "Data Import and Export Tools > gs_loader for Importing Data" in Tool Reference.

    • Data files in all three formats must use "\n" or "\r\n" as the line terminator, and the terminator must be consistent throughout the file (either all "\n" or all "\r\n").
    • Unlike COPY, gs_loader does not interpret escape sequences in TEXT files (for example, the two-character sequence '\n' remains '\n' after import and is not converted to the newline character 0x0A). A TEXT file exported by COPY is escaped by default; if such a file is imported with gs_loader, the escaped characters are not converted back, causing data inconsistency. Therefore, gs_loader can import only TEXT files that were exported by COPY without escaping (using WITHOUT ESCAPING), as sketched below.
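    For reference, a minimal sketch of exporting such a TEXT file with COPY (the table name and server-side path are illustrative):
    gaussdb=# COPY my_schema.my_table TO '/data/export/my_table.dat' WITHOUT ESCAPING;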
    The typical application scenarios are as follows:
    • The format of the upstream data file can be customized, or the file is known to be in CSV format.

      The CSV format is cross-platform and widely used in the industry, and many data sources can export data files in CSV format. If the upstream data file is not in CSV format, select the import method based on the actual file format (TEXT or FIXED); otherwise, CSV is recommended. If the column data contains special characters (such as commas and newline characters), the data file can only be saved as CSV, because the CSV format encloses data containing special characters in quote characters, preventing the special characters in field data from conflicting with separators and line terminators.

      When importing a standard CSV file, add the FIELDS CSV clause to the control file. The following is an example:

      -- Create a table.
      gaussdb=# create table test_loader(id int, name name);
      
      -- View the control file test.ctl.
      LOAD DATA
      TRUNCATE INTO TABLE test_loader
      FIELDS CSV
      FIELDS TERMINATED BY ','
      OPTIONALLY ENCLOSED BY '"'
      (
        id integer external,
        name char
      )
      
      -- View the data file test.csv.
      1,"aa
      a"
      2,"bb b"
      3,"cc,c"
      4,ddd
      
      -- Import data.
      gs_loader -p xxx host=xxx control=test.ctl data=test.csv -d testdb -W xxx
      
      -- The import is successful. View the import result.
      gaussdb=# select * from test_loader;
       id |   name    
      ----+-----------
        1 | aa       +
          | a
        2 | bb      b
        3 | cc,c
        4 | ddd
      (4 rows)
      • If the quote character is not the standard double quotation mark, specify it using the OPTIONALLY ENCLOSED BY clause.
      • If the column separator is not the standard comma (,), specify it using the FIELDS TERMINATED BY clause.
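      For example, if the data is separated by vertical bars (|) but still uses double quotation marks as the quote character, the control file could look as follows (a sketch; adapt both clauses to the actual file):
      LOAD DATA
      TRUNCATE INTO TABLE test_loader
      FIELDS CSV
      FIELDS TERMINATED BY '|'
      OPTIONALLY ENCLOSED BY '"'
      (
        id integer external,
        name char
      )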
    • The upstream data file is in TEXT format.

      Unlike COPY, gs_loader does not interpret escape sequences in TEXT files (for example, the two-character sequence '\n' remains '\n' after import and is not converted to the newline character 0x0A). Therefore, TEXT-format data must not contain special characters such as separators or line terminators; otherwise, the parsed structure will be corrupted.

      The following is an example of importing a valid data file in TEXT format:
      -- Create a table.
      gaussdb=# create table test_loader(id int, name name);
      
      -- View the control file test.ctl.
      LOAD DATA
      TRUNCATE INTO TABLE test_loader
      (
        id integer external,
        name char
      )
      
      -- View the data file test.dat.
      1 aaa
      2 bbb
      3 ccc
      
      -- Import data.
      gs_loader -p xxx host=xxx control=test.ctl data=test.dat -d testdb -W xxx
      
      -- The import is successful. View the import result.
      gaussdb=# select * from test_loader;
       id | name 
      ----+------
        1 | aaa
        2 | bbb
        3 | ccc
      (3 rows)

      If the column separator is not the standard horizontal tab character, specify it using the FIELDS TERMINATED BY clause.
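      For example, for a comma-separated TEXT file whose column data contains no special characters, the control file could look as follows (a sketch based on the control file above):
      LOAD DATA
      TRUNCATE INTO TABLE test_loader
      FIELDS TERMINATED BY ','
      (
        id integer external,
        name char
      )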

    • The upstream data file is in FIXED format.

      In FIXED format, each column has a fixed length and columns are not separated by delimiters, so column data cannot conflict with a delimiter. However, if the column data contains newline characters, the file cannot be processed.

      An import example is as follows:

      -- Create a table.
      gaussdb=# create table test_loader(id int, name name);
      
      -- View the control file test.ctl.
      LOAD DATA
      TRUNCATE INTO TABLE test_loader
      (
        id POSITION(1:1) integer external,
        name POSITION(2:5) char
      )
      
      -- View the data file test.fixed.
      1aaaa
      2bb b
      3cc,c
      
      -- Import data.
      gs_loader -p xxx host=xxx control=test.ctl data=test.fixed -d testdb -W xxx
      
      -- The import is successful. View the import result.
      gaussdb=# select * from test_loader;
       id |   name    
      ----+-----------
        1 | aaaa
        2 | bb      b
        3 | cc,c
      (3 rows)
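      As the example shows, POSITION(start:end) addresses a 1-based, inclusive character range: id occupies column 1, and name occupies columns 2 through 5. A sketch of a wider, hypothetical layout (the field widths are illustrative):
      (
        id   POSITION(1:4)  integer external,
        name POSITION(5:20) char
      )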