gs_dump for Exporting Database Information
Description
gs_dump, provided by GaussDB, is used to export database information. You can export a database or its objects (such as schemas, tables, and views). The database can be the default postgres database or a user-specified database.
- gs_dump supports SSL encrypted communication.
- gs_dump can be used to export PDB data. You can export an entire PDB or its objects (such as schemas, tables, and views).
- When gs_dump is used to export data, other users can still access (read or write) the database.
- gs_dump can export complete, consistent data. For example, if gs_dump is started to export database A at T1, data of the database at that time point will be exported, and modifications on the database after that time point will not be exported.
- gs_dump supports the export of text files that are compatible with the V1 database.
- gs_dump can export database information to a plain-text SQL script file or archive file.
- Plain-text SQL script: It contains the SQL statements required to restore the database. You can use gsql (see Connecting to a Database) to execute the SQL script. With only minor modifications, the SQL script can rebuild the database on other hosts or database products.
- Archive file: It contains data required to restore the database. It can be a tar-, directory-, or custom-format archive. For details, see Table 1.
- gs_dump supports four export file formats, which are specified by -F/--format. For details, see Table 1.
Table 1 Formats of exported files

| Format | Value of -F | Description | Suggestion | Import Tool |
|---|---|---|---|---|
| Plain-text | p | A plain-text script file containing SQL statements and commands. The commands can be executed on gsql, a command line terminal, to rebuild database objects and load table data. | Recommended for a small database, or when the exported SQL file needs to be modified. | Before using gsql (see Connecting to a Database) to restore database objects, you can use a text editor to edit the plain-text export file as required. |
| Custom | c | A binary file that allows the restoration of all or selected database objects from an exported file. | Custom-format archive files are recommended for medium or large databases. | You can use gs_restore to import database objects from a custom-, directory-, or tar-format archive. |
| Directory | d | A directory containing directory files of database objects and the data files of tables and BLOBs. | Recommended when database objects and data files need to be stored and exported in different directories. | Same as for the custom format: use gs_restore. |
| .tar archive | t | A tar-format archive that allows the restoration of all or selected database objects from an exported file. The tar file cannot be further compressed and has an 8-GB limitation on the size of each single file. | Recommended when you need to export the archive result and pack it. | Same as for the custom format: use gs_restore. |
- To reduce the size of an exported file, you can use gs_dump to compress it into a directory-format or custom-format file. When a directory-format or custom-format archive is generated, medium compression is applied by default. Files that have already been exported and archived cannot be compressed again using gs_dump.
- In M-compatible mode, do not import or export data between instances with different lower_case_table_names parameter settings. Otherwise, data may be lost.
Precautions
- Before using gs_dump, ensure that the gs_dump version is consistent with the database version. gs_dump of a later version may not be fully compatible with kernel data of an earlier version.
- gs_dump is not suitable for scenarios where there are too many objects (such as tables, views, and indexes) in the database. If the number of objects in the database exceeds 100,000 or the dependency between objects is too complex, the export using gs_dump takes a long time.
- Generated columns are not dumped when gs_dump is used.
- When gs_dump is used, the IMCV metadata (gs_imcv system catalog) created in the HTAP table is not dumped.
- Do not modify the files and contents exported using the -F c/d/t formats. Otherwise, the restoration may fail. For files exported using the -F p format, you can edit the exported files with caution if necessary.
- To ensure the data consistency and integrity, gs_dump acquires a share lock on a table to be dumped. If a share lock has been set for the table in other transactions, gs_dump locks the table after it is released. If the table cannot be locked within the specified time, the dump fails. You can customize the timeout interval to wait for lock release by specifying the --lock-wait-timeout parameter.
- Stored procedures and functions cannot be exported in encrypted mode.
- For materialized views, this tool supports only definition export. After importing materialized views, you need to manually run the REFRESH statement to restore data.
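For example, a minimal refresh after import, assuming an imported materialized view named mv_sales (the name is illustrative):
REFRESH MATERIALIZED VIEW mv_sales;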
- For temporary objects, this tool can export only global temporary tables.
- This tool cannot be used on standby nodes.
- When gs_dump is used to export partitioned indexes, the attributes of some index partitions cannot be exported, for example, the unusable status of index partitions. You can query the PG_PARTITION system catalog or ADM_IND_PARTITIONS or ADM_IND_SUBPARTITIONS view to obtain the attributes of an index partition. You can run the ALTER INDEX statement to manually set the attributes of an index partition.
- For scheduled tasks, this tool can export only scheduled tasks created using CREATE EVENT from a B-compatible database. Scheduled tasks created using advanced packages cannot be exported in this case. Only the initial user can export all scheduled tasks. Users with the SYSADMIN permission can export scheduled tasks owned by themselves. Common users without related permissions cannot export scheduled tasks.
- gs_dump does not export user-defined tokenweight dictionaries. You can manually create the corresponding tokenweight dictionary according to the dictionary information displayed in the error message "WARNING: dictionary xx cannot be automatically exported, please create it manually."
- In the multi-tenancy scenario, when gs_dump is used to export data, neither the template PDB nor closed PDBs can be exported.
- In multi-tenancy scenarios, when a common user uses gs_dump to export data, only the database objects and data on which the user has permissions can be exported.
- If the multi-tenancy function is disabled, gs_dump cannot export PDBs and objects in PDBs.
- If a table created by the initial user exists in the database and the table contains expression indexes of UDFs, after using gs_dump to export the table, system administrators must import the table as the initial user when using gsql or gs_restore to import the table. Otherwise, the indexes fail to be created due to security reasons.
- Common users cannot export directories and synonyms. If a common user attempts to export related data, the message "WARNING: xx not dumped because current user is not a superuser" is displayed.
- If a scheduled task created using the CREATE EVENT syntax cannot be exported, check whether job_style of the scheduled task in the gs_job_attribute table is EVENT. If not, create the scheduled task again.
- If local data exists in template1 of the source database instance, restore the output of gs_dump to an empty database (copy from template0 instead of template1). Otherwise, an error may occur because the definition of the added object is copied. To create an empty database without any local additions, copy data from template0 rather than template1. For example:
CREATE DATABASE foo WITH TEMPLATE template0;
- The size of a single tar file must be smaller than 8 GB. (This is a limitation of the tar file format.) The total size of a .tar archive, like that of the other output formats, is not limited, except possibly by the OS.
- The dump file generated by gs_dump does not contain the statistics used by the optimizer to make execution plans. Therefore, you are advised to run ANALYZE after restoring from a dump file to ensure optimal performance, as shown in the example below. The dump file does not contain the ALTER DATABASE ... SET commands, database users, or other installation settings; these settings are dumped by gs_dumpall.
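For example, a typical restore-then-analyze sequence for a plain-text dump (database name, port, and file path are illustrative):
gsql -d testdb -p port_number -f backup/MPPDB_backup.sql
gsql -d testdb -p port_number -c "ANALYZE;"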
- The current design of gs_dump is incompatible with separation of duties. In separation of duties, gs_dump can be used only by the initial user. The file to be imported must also be executed by the initial user.
- gs_dump does not export invalid functions. Before exporting functions, ensure that the functions are valid. If a function is invalid, an alarm is generated, indicating that the function corresponding to the OID is invalid.
- If the parameter type of a non-PACKAGE function is a table column type (table.column%type) or view column type (view.column%type) and the column type is a basic type, the parameter type is converted to the corresponding basic type in the exported function definition. Changing the column type of a table (by executing ALTER TABLE MODIFY COLUMN and then ALTER FUNCTION COMPILE), or changing the column type of a view's base table, rebuilding the view, and then executing ALTER FUNCTION COMPILE, does not change the parameter type recorded in the PROARGSRC column of the PG_PROC system catalog. In the exported function definition, the parameter type remains the basic type from before the column type was changed.
- For a view that depends on functions in a PACKAGE, deleting the PACKAGE will cascade-delete the view. If the ddl_invalid_mode parameter is set to cascading invalidation, deleting the PACKAGE will cause an error during view export.
Format
gs_dump [OPTION]... [DBNAME]
Parameters
- DBNAME
Specifies the database to be connected. No short or long option is required. For example:
gs_dump -p port_number testdb -f dump1.sql
Or,
export PGDATABASE=testdb
gs_dump -p port_number -f dump1.sql
- OPTION
There are three types of parameters: general parameters, dump parameters, and connection parameters.
Common parameters:
- -f, --file=FILENAME
Sends the output to the specified file or directory. If this parameter is omitted, the standard output is generated. If the output format is -F c, -F d, or -F t, the -f parameter must be specified. If the value of the -f parameter contains a directory, the current user must have the read and write permission on the directory.
- -F, --format=c|d|t|p
Selects an exported file format. The formats are as follows:
- c|custom: outputs a custom-format archive suitable for use as input to gs_restore. This is the most flexible output format, allowing users to manually select and reorder the archived items during restoration. An archive in this format is compressed by default.
- d|directory: creates a directory containing directory files and the data files of tables and BLOBs.
- t|tar: outputs a tar-format archive as the input of gs_restore. The .tar format is compatible with the directory format. Extracting a .tar archive generates a valid directory-format archive. However, the .tar archive cannot be further compressed and has an 8-GB limitation on the size of a single table. The order of table data items cannot be changed during restoration.
- p|plain: generates a text SQL script file. This is the default value.
- -v, --verbose
Specifies verbose mode. If it is specified, gs_dump writes detailed object comments and start/stop times to the dump file, and progress messages to standard error.
- -V, --version
Prints the gs_dump version and exits.
- -Z, --compress=0-9
Specifies the used compression level.
Value range: 0 to 9
- 0 indicates no compression.
- 1 indicates the lowest compression ratio and the fastest processing speed.
- 9 indicates the highest compression ratio and the slowest processing speed.
For the custom-format archive, this option specifies the compression level of a single table data segment. By default, data is compressed at a medium level. The .tar archive format and plain-text format do not support compression currently.
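For example, a sketch that exports a custom-format archive at the highest compression level (file path and port are illustrative):
gs_dump -p port_number testdb -F c -Z 9 -f backup/MPPDB_backup.dmp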
- --lock-wait-timeout=TIMEOUT
Specifies the timeout period for waiting for a shared table lock when the dump starts. If no such lock is obtained in the specified period, the dump fails. The timeout interval can be specified in any of the formats accepted by SET statement_timeout.
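For example, a sketch that fails the dump if the shared table lock cannot be acquired within two minutes (the value uses a format accepted by SET statement_timeout; other values are illustrative):
gs_dump -p port_number testdb --lock-wait-timeout=2min -f backup/MPPDB_backup.sql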
- -?, --help
Displays help information about gs_dump command line parameters.
Dump parameters:
- -a, --data-only
Generates only the data, not the schema (data definition). Dumps the table data, LOBs, and sequence values.
- -b, --blobs
Reserved for function extension. The option is not recommended.
- -c, --clean
Writes the commands for cleaning (deleting) database objects to the backup file before the commands for creating them. If the objects do not exist in the target database, gsql or gs_restore may display some error messages.
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -C, --create
The backup file begins with the commands for creating the database and connecting to it. If the script is executed in this mode, the database you are connected to before running the script is not affected.
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- In the multi-tenancy scenario, this option cannot be used when gs_dump is used to export a specified PDB.
- This option is not supported in M-compatible databases. You must create a database in M-compatible mode on the target instance, export data from the source instance, and connect to the newly created M-compatible database on the target instance to import data.
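For example, a sketch of a plain-text export whose output begins with CREATE DATABASE and a reconnect command; the script can then be executed while connected to any other database, such as postgres (port and paths are illustrative):
gs_dump -p port_number testdb -C -F p -f backup/MPPDB_backup.sql
gsql -d postgres -p port_number -f backup/MPPDB_backup.sql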
- -E, --encoding=ENCODING
Creates the dump in the specified character set encoding. By default, the dump file is created in the database encoding. Setting the environment variable PGCLIENTENCODING to the required dump encoding has the same function as this option.
If transcoding is required for the specified dump encoding and data in the table contains invalid characters, the error message "invalid byte sequence" will be displayed during export. You are advised to specify the -s parameter when using gs_dump to export only definitions and execute COPY with the encoding fault tolerance option separately to export and import data.
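For example, a sketch that exports only object definitions in UTF8 encoding (port and file path are illustrative):
gs_dump -p port_number testdb -E UTF8 -s -f backup/MPPDB_defs.sql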
- -n, --schema=SCHEMA
Dumps only schemas matching the schema names. This option includes the schema and all the objects it contains. If this option is not specified, all non-system schemas in the target database will be dumped. Multiple -n options can be specified to select multiple schemas. The schema parameter is interpreted as a pattern according to the same rules used by the \d command of gsql, so multiple schemas can also be selected by using wildcard characters in the pattern. When you use wildcard characters, quote the pattern to prevent the shell from expanding them.
- If -n is specified, gs_dump does not dump any other database objects that the selected schemas might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be automatically restored to an empty database.
- If -n is specified, the non-schema objects are not dumped.
- In M-compatible mode, if a database is created with the templatem template by running CREATE DATABASE, export its data by specifying db_name; if a database is created by running CREATE DATABASE db_name (such a database is equivalent to a schema), export its data by specifying -n.
- GaussDB automatically converts uppercase letters in object names to lowercase letters. If a schema name contains uppercase letters, add extra quotation marks, for example, -n '"Sch1"' or -n "\"Sch1\"".
- In M-compatible mode, the value of this parameter is affected by the GUC parameter lower_case_table_names. In case-sensitive mode (lower_case_table_names set to 0), the value of this parameter must be case-sensitive. If the value contains uppercase letters, add extra quotation marks. Otherwise, the result is the same as that in lowercase. In case-insensitive mode (lower_case_table_names set to 1), the value of this parameter must be in lowercase.
Multiple schemas can be dumped. For example, in the following example, both sch1 and sch2 are dumped.
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -n sch1 -n sch2
- -N, --exclude-schema=SCHEMA
Does not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n. -N can be specified multiple times to exclude schemas matching any of the specified patterns.
When both -n and -N are specified, only the schemas that match at least one -n option but no -N option are dumped. If -N is specified without -n, the schemas matching -N are excluded from an otherwise normal dump.
Dump allows you to exclude multiple schemas during dumping. For example, in the following example, sch1 and sch2 are excluded during the dump.
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -N sch1 -N sch2
GaussDB automatically converts uppercase letters in object names to lowercase letters. If a schema name contains uppercase letters, add extra quotation marks, for example, -N '"Sch1"' or -N "\"Sch1\"".
- -o, --oids
Dumps OIDs as parts of the data in each table. Use this option if your application references the OID columns in some way (for example, in a FOREIGN KEY constraint). If the preceding situation does not occur, do not use this parameter.
- -O, --no-owner
Does not output commands to set ownership of objects to match the original database. By default, gs_dump issues the ALTER OWNER or SET SESSION AUTHORIZATION command to set ownership of created database objects. These statements will fail when the script is running unless it is started by a system administrator (or the same user who owns all of the objects in the script). To make a script that can be used by any user for restoration and give the user ownership of all objects, specify -O.
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -s, --schema-only
Dumps only the object definitions (schema), not the data.
- -S, --sysadmin=NAME
Reserved for function extension. The option is not recommended.
- -t, --table=TABLE
Specifies a list of tables, views, sequences, or foreign tables to be dumped. You can use multiple -t parameters or wildcard characters to specify tables.
When you use wildcard characters, quote patterns to prevent the shell from expanding the wildcard characters.
The -n and -N options have no effect when -t is used, because tables selected by -t are not affected by those options.
- The number of -t parameters must be less than or equal to 100.
- If the number of -t parameters is greater than 100, you are advised to use the --include-table-file parameter to replace some -t parameters.
- If -t is specified, gs_dump does not dump any other database objects that the selected tables might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be automatically restored to an empty database.
- -t tablename only dumps visible tables in the default search path. -t *.tablename dumps tablename tables in all the schemas of the dumped database. -t schema.table dumps tables in a specific schema.
- -t tablename does not export trigger information from a table.
- If the table name contains uppercase letters, you need to add \" to the table name when using the -t parameter to specify the export. To export the "abC" table, specify -t \"abC\". To export the schema."abC" table, specify -t schema.\"abC\".
- -t "" does not match any table.
- In M-compatible mode, the value of this parameter is affected by the GUC parameter lower_case_table_names. In case-sensitive mode (lower_case_table_names set to 0), the value of this parameter must be case-sensitive. If the value contains uppercase letters, add extra quotation marks. Otherwise, the result is the same as that in lowercase. In case-insensitive mode (lower_case_table_names set to 1), the value of this parameter must be in lowercase.
For example, in the following example, both schema1.table1 and schema2.table2 are dumped.
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -t schema1.table1 -t schema2.table2
- --include-table-file=FILENAME
Specifies a file that lists the tables to be dumped, one table per line in the schema.table format.
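A minimal sketch, assuming a list file /home/table_list.txt (the path is illustrative) containing:
schema1.table1
schema2.table2
Then:
gs_dump -p port_number testdb --include-table-file=/home/table_list.txt -f backup/MPPDB_backup.sql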
- -T, --exclude-table=TABLE
Specifies a list of table, view, sequence, or foreign table objects not to be dumped. You can use multiple -T parameters or wildcard characters to specify multiple lists.
When both -t and -T are specified, it will dump the objects in the -t list, but not those in the -T list.
For example, in the following example, table1 and table2 are excluded during the dump.
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -T table1 -T table2
- --exclude-table-file=FILENAME
Specifies a file that lists the tables not to be dumped.
As with --include-table-file, the content format of this file is as follows:
schema1.table1
schema2.table2
......
- -x, --no-acl
Prevents the dumping of access permissions (GRANT/REVOKE commands). Only ACL objects are affected. Privilege objects are not affected.
- -q, --target=VERSION
Exports text files compatible with databases of other versions. Currently, the values v1 and v5 are supported; if other values are specified, no error is reported but the setting does not take effect. The value v1 exports GaussDB V5 data as a text file compatible with GaussDB V1. The value v5 exports GaussDB V5 data as a text file, reducing errors that may occur when the data is imported to a GaussDB V5 database.
When v1 is specified, you are advised to use it along with parameters such as --exclude-guc="enable_cluster_resize", --exclude-function, and --exclude-with. Otherwise, an error may be reported during data import to a GaussDB v1 database.
- -g, --exclude-guc
Reserved for function extension. The option is not recommended.
- --exclude-function
Specifies that functions and stored procedures are not exported.
- --exclude-with
Specifies that the description such as WITH(orientation=row, compression=on) is not added to the end of the exported table definition.
- --binary-upgrade
Reserved for function extension. The option is not recommended.
M-compatible databases do not support this option.
- --binary-upgrade-usermap="USER1=USER2"
Reserved for function extension. The option is not recommended.
- --column-inserts|--attribute-inserts
Exports data by running the INSERT command with explicit column names (INSERT INTO table (column, ...) VALUES ...). This makes restoration slow. However, since this option generates an independent command for each row, an error in reloading a row causes only the loss of that row rather than the entire table content.
M-compatible databases do not support this option.
- --disable-dollar-quoting
Disables the use of dollar sign ($) for function bodies, and forces them to be quoted using the SQL standard string syntax.
- --include-alter-table
- --disable-triggers
Reserved for function extension. The option is not recommended.
- --exclude-table-data=TABLE
Does not dump table data that matches any patterns. The pattern is interpreted according to the same rule as for -t.
--exclude-table-data can be entered more than once to exclude tables matching any of several patterns. When the user needs the specified table definition rather than data in the table, this option is helpful.
To exclude data of all tables in the database, see -s, --schema-only.
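For example, a sketch that dumps the full schema but omits the data of one large table (names, port, and path are illustrative):
gs_dump -p port_number testdb --exclude-table-data=sch1.big_table -f backup/MPPDB_backup.sql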
- --inserts
Dumps data using the INSERT command (rather than the COPY command), which makes restoration slow.
However, since this option generates an independent command for each row, an error in reloading a row causes only the loss of that row rather than the entire table content. Note that the restoration may fail if you rearrange the column order. The --column-inserts option is not affected by column order changes.
M-compatible databases do not support this option.
- --no-security-labels
Reserved for function extension. The option is not recommended.
- --no-tablespaces
Does not output commands to select tablespaces. With this option, all objects will be created in the default tablespace during restoration, regardless of their original tablespaces.
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- --no-unlogged-table-data
Reserved for function extension. The option is not recommended.
- --non-lock-table
Disables the table-locking behavior of gs_dump during the export. This parameter is intended only for calls between software interfaces; you are advised not to use it.
By default, when gs_dump exports a table, it acquires an access share lock on the table for the duration of the transaction. This lock blocks concurrent DDL statements on the table, ensuring that the exported data is not affected by concurrent DDL.
- --quote-all-identifiers
Forcibly quotes all identifiers. This option is helpful when migrating to a later version in which additional keywords may be introduced.
M-compatible databases do not support this option.
- --section=SECTION
- --serializable-deferrable
Uses a serializable transaction for the dump to ensure that the snapshot used is consistent with later database states. This is achieved by waiting for a point in the transaction stream at which no anomalies can be present, so that the dump neither fails nor causes other transactions to roll back with serialization failures.
This option is of no benefit for a dump intended only for disaster recovery. It is useful for a dump used to load a copy of the database for reporting or other shared read-only purposes while the original database continues to be updated. Without this option, the dump may reflect a state that is inconsistent with the commit order of the transactions.
This option makes no difference if there are no active read/write transactions when gs_dump is started. If read/write transactions are active, the start of the dump may be delayed for an indeterminate period.
- --use-set-session-authorization
Specifies that the standard SQL SET SESSION AUTHORIZATION command, rather than ALTER OWNER, is generated to determine object ownership. When data is dumped using SET SESSION AUTHORIZATION, if the ownership history in the dump file is incorrect, the data may not be restored correctly. A dump using SET SESSION AUTHORIZATION requires the system administrator permission, whereas ALTER OWNER requires lower permissions. However, the SET SESSION AUTHORIZATION statement supports ciphertext passwords. A script exported using this parameter may fail to be restored; therefore, you are advised not to use this parameter.
- Application scope of SET SESSION AUTHORIZATION:
- A system administrator can switch to a common user using the SET SESSION AUTHORIZATION statement, but cannot switch to the initial user, SYSADMIN, OPRADMIN, MONADMIN, POLADMIN, or AUDITADMIN.
- Other users cannot switch to another user using the SET SESSION AUTHORIZATION statement.
- If the --use-set-session-authorization parameter is used by a system administrator to export data and the imported data contains objects of the initial user, "ERROR: permission denied to set session authorization" is reported when the system administrator switches to the initial user due to security permission verification and SET SESSION AUTHORIZATION application scope limitations. As a result, the owner of the objects is changed from the initial user to the system administrator role used during the import. In this scenario, you are advised to use the initial user to import and export data.
- --with-encryption=AES128
Specifies that dumping data needs to be encrypted using AES-128.
- --with-key=KEY
Specifies the key for AES-128 encryption. The key rules are as follows:
- Consists of 8 to 16 characters.
- Contains at least three of the following character types: uppercase characters, lowercase characters, digits, and special characters (limited to ~!@#$ %^&*()-_=+\|[{}];:,<.>/?).
Stored procedures and functions cannot be exported in encrypted mode.
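For example, a sketch of an encrypted plain-text export (the key Gauss@123 is illustrative and meets the key rules above; port and path are also illustrative):
gs_dump -p port_number testdb -F p --with-encryption=AES128 --with-key=Gauss@123 -f backup/MPPDB_backup.sql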
- --with-salt=RANDVALUES
Transfers a random salt value. This parameter is called by gs_dumpall and does not require manual input.
- --include-extensions
Includes extensions in the dump.
The extended function is for internal use only. You are advised not to use it.
- --include-depend-objs
Includes information about the objects that depend on the specified object in the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
- --exclude-self
Excludes information about the specified object from the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
- --pipeline
Uses a pipe to transmit the password. This parameter cannot be used on devices.
- --dont-overwrite-file
By default, existing files in plain-text, .tar, and custom formats are overwritten. If this option is specified, gs_dump does not overwrite an existing file and reports an error instead. This option is not applicable to the directory format.
For example:
Assume that the backup.sql file exists in the current directory. If you specify -f backup.sql in the input command, and the backup.sql file is generated in the current directory, the original file will be overwritten.
If the backup file exists and --dont-overwrite-file is specified, an error will be reported with the message that the dump file exists.
gs_dump -p port_number testdb -f backup.sql -F plain --dont-overwrite-file
- -s/--schema-only and -a/--data-only do not coexist.
- -c/--clean and -a/--data-only do not coexist.
- --inserts/--column-inserts and -o/--oids do not coexist, because OIDS cannot be set using the INSERT statement.
- --role must be used with --rolepassword.
- --binary-upgrade-usermap must be used with --binary-upgrade.
- --include-depend-objs/--exclude-self takes effect only when -t/--include-table-file is specified.
- --exclude-self must be used with --include-depend-objs.
- --with-encryption=AES128 supports only -F p/plain.
- --with-key=KEY supports only -F p/plain.
- --with-salt=RANDVALUES is called by gs_dumpall and does not require manual input.
Connection parameters:
- -h, --host=HOSTNAME
Specifies the host name. If the value begins with a slash, it is used as the directory for the Unix domain socket. By default, the value is obtained from the PGHOST environment variable (if set).
This parameter is used only for defining names of the hosts outside a database. For the localhost in the database, you can use IPv4 address 127.0.0.1 or IPv6 address ::1.
- -p, --port=PORT
Specifies the host port number. If the thread pool is enabled, you are advised to use the pooler port, that is, the host port number plus 1. By default, the value is obtained from the PGPORT environment variable (if set).
- -U, --username=NAME
Specifies the username for connecting to a host. The initial user cannot be used for cross-node execution. By default, the value is obtained from the PGUSER environment variable (if set).
- -w, --no-password
Never issues a password prompt. The connection attempt fails if the host requires password authentication and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user is present to enter a password.
- -W, --password=PASSWORD
Specifies the user password for connection. If the host uses the trust authentication policy, administrators do not need to enter the -W option. If the -W option is not provided and you are not a system administrator, the system will ask you to enter a password. To ensure system security, you are advised to enter the password in interactive mode.
- --role=ROLENAME
Specifies a role name to be used for creating the dump. If this option is specified, the SET ROLE command will be issued after gs_dump connects to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_dump, as it allows switching to a role with the required permissions. Some installations have a policy against logging in directly as a system administrator; this option allows dumping data without violating that policy.
- --rolepassword=ROLEPASSWORD
Specifies the password of the role specified by --role.
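For example, a sketch that connects as user1 and switches to role1 for the dump (user names, passwords, port, and path are illustrative):
gs_dump -U user1 -W user_password --role role1 --rolepassword role_password -p port_number testdb -f backup/MPPDB_backup.sql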
Examples
In the following examples, backup/MPPDB_backup.sql indicates an exported file, where backup indicates a path relative to the current directory. 37300 indicates the port number of the database server. testdb indicates the name of the database to be accessed.

Before exporting files, ensure that the directory exists and you have the read and write permissions on the directory.
Example 1
Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup.sql file is in plain-text format.
gs_dump -U omm -f backup/MPPDB_backup.sql -p 37300 testdb -F p
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: The total objects number is 356.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: [100.00%] 356 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: total time: 1274 ms
Example 2
Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup.tar file is in .tar format.
gs_dump -U omm -f backup/MPPDB_backup.tar -p 37300 testdb -F t
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:24]: The total objects number is 1369.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:53]: [100.00%] 1369 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:53]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:53]: total time: 50086 ms
Example 3
Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup.dmp file is in custom format.
gs_dump -U omm -f backup/MPPDB_backup.dmp -p 37300 testdb -F c
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:05:40]: The total objects number is 1369.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:06:03]: [100.00%] 1369 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:06:03]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:06:03]: total time: 36620 ms
Example 4
Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup file is in directory format.
gs_dump -U omm -f backup/MPPDB_backup -p 37300 testdb -F d
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:04]: The total objects number is 1369.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:23]: [100.00%] 1369 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:23]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:23]: total time: 33977 ms
Example 5
Use gs_dump to export the information of the testdb database, excluding the information of the table specified in the /home/MPPDB_temp.sql file. The exported MPPDB_backup.sql file is in plain-text format.
gs_dump -U omm -p 37300 testdb --exclude-table-file=/home/MPPDB_temp.sql -f backup/MPPDB_backup.sql
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:01]: The total objects number is 1367.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:22]: [100.00%] 1367 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:22]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:22]: total time: 37017 ms
Example 6
Use gs_dump to export only the information about the views that depend on the testtable table.
gs_dump -U omm -s -p 37300 testdb -t PUBLIC.testtable --include-depend-objs --exclude-self -f backup/MPPDB_backup.sql -F p
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: The total objects number is 331.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: [100.00%] 331 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: total time: 327 ms
Example 7
In the multi-tenancy scenario, use gs_dump to export the full information about the PDB named testpdb. The exported backup_pdb.sql file is in plain-text format.
gs_dump -U omm testpdb -f backup/backup_pdb.sql -p 20000 -F p
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:41:21]: The total objects number is 459.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:41:21]: [100.00%] 459 objects have been dumped.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:41:21]: dump database testpdb successfully
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:41:21]: total time: 5427 ms
Example 8
In the multi-tenancy scenario, use gs_dump to export the full information about the PDB named testpdb. The exported backup_pdb_t.tar file is in tar format.
gs_dump -U omm testpdb -p 20000 -f backup/backup_pdb_t.tar -F t
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:02:40]: The total objects number is 459.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:02:40]: [100.00%] 459 objects have been dumped.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:02:40]: dump database testpdb successfully
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:02:40]: total time: 5506 ms
Example 9
In the multi-tenancy scenario, use gs_dump to export the full information about the PDB named testpdb. The exported backup_pdb_c file is in custom format.
gs_dump -U omm testpdb -p 20000 -f backup/backup_pdb_c -F c
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 16:57:19]: The total objects number is 459.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 16:57:19]: [100.00%] 459 objects have been dumped.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 16:57:19]: dump database testpdb successfully
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 16:57:19]: total time: 5622 ms
Example 10
In the multi-tenancy scenario, use gs_dump to export the full information about the PDB named testpdb. The exported backup_pdb_dir file is in directory format.
gs_dump -U omm testpdb -p 20000 -f backup/backup_pdb_dir -F d
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:05:46]: The total objects number is 459.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:05:46]: [100.00%] 459 objects have been dumped.
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:05:46]: dump database testpdb successfully
gs_dump[user='omm'][localhost][port='20000'][testpdb][2024-04-26 17:05:46]: total time: 5680 ms
Example 11
In the multi-tenancy scenario, when gs_dump is executed with the -C, --create option to export full information about the PDB named testpdb, gs_dump reports an error and exits.
gs_dump -U omm testpdb -C -p 20000 -f backup/backup_pdb_dir -F d
gs_dump unsupport the '-C, --create' option for pdb.