gs_dump
Context
gs_dump, provided by GaussDB, is used to export database information. You can export a database or its objects (such as schemas, tables, and views). The database can be the default postgres database or a user-specified database.
When gs_dump is used to export data, other users can still access (read or write) the database.
gs_dump can export complete, consistent data. For example, if gs_dump is started to export database A at T1, data of the database at that time point will be exported, and modifications on the database after that time point will not be exported.
Generated columns are not dumped when gs_dump is used.
gs_dump supports the export of text files that are compatible with the V1 database.
gs_dump can export database information to a plain-text SQL script file or archive file.
- Plain-text SQL script: It contains the SQL statements required to restore the database. You can use gsql to execute the SQL script. With only a little modification, the SQL script can rebuild a database on other hosts or database products.
- Archive file: It contains data required to restore the database. It can be a tar-, directory-, or custom-format archive. For details, see Table 1.
gs_dump supports SSL encrypted communication. The method is the same as that of using gsql.
Before using gs_dump, ensure that the gs_dump version is consistent with the database version. gs_dump of a later version may not be fully compatible with kernel data of an earlier version.
Main Functions
gs_dump can create export files in four formats, which are specified by [-F or --format=], as listed in Table 1.
Format | Value of -F | Description | Suggestion | Import Tool
---|---|---|---|---
Plain-text | p | A plain-text script file containing SQL statements and commands. The commands can be executed on gsql, a command line terminal, to rebuild database objects and load table data. | You are advised to use plain-text export files for small databases. | Before using gsql to restore database objects, you can use a text editor to edit the plain-text export file as needed.
Custom | c | A binary file that allows the restoration of all or selected database objects from an exported file. | You are advised to use custom-format archive files for medium- or large-sized databases. | You can use gs_restore to import database objects from a custom-, directory-, or tar-format archive.
Directory | d | A directory containing directory files and the data files of tables and BLOB objects. | - | You can use gs_restore to import database objects from a custom-, directory-, or tar-format archive.
.tar archive | t | A tar-format archive that allows the restoration of all or selected database objects from an exported file. It cannot be further compressed and has an 8-GB limitation on the size of a single table. | - | You can use gs_restore to import database objects from a custom-, directory-, or tar-format archive.
To reduce the size of an exported file, you can use gs_dump to compress it to a directory-format or custom-format file. When a directory-format or custom-format archive is generated, medium-level compression is applied by default. Tar-format archives cannot be compressed using gs_dump.
Precautions
- Do not modify the files and contents exported using the -F c/d/t formats. Otherwise, the restoration may fail. For files exported using the -F p format, you can edit the exported files with caution if necessary.
- To ensure data consistency and integrity, gs_dump acquires a share lock on each table to be dumped. If a share lock has already been acquired on a table by another transaction, gs_dump locks the table after that lock is released. If the table cannot be locked within the specified time, the dump fails. You can customize the timeout interval by specifying the --lock-wait-timeout parameter.
- Stored procedures and functions cannot be exported in encrypted mode.
- For materialized views, this tool supports only definition export. After importing materialized views, you need to manually run the REFRESH statement to restore data.
- For temporary objects, this tool can export only global temporary tables.
- This tool cannot be used on standby nodes.
- When gs_dump is used to export partitioned indexes, the attributes of some index partitions cannot be exported, for example, the unusable status of index partitions. You can query the PG_PARTITION system catalog or ADM_IND_PARTITIONS or ADM_IND_SUBPARTITIONS view to obtain the attributes of an index partition. You can run the ALTER INDEX statement to manually set the attributes of an index partition.
- For scheduled tasks, this tool can export only scheduled tasks created using CREATE EVENT or non-periodic scheduled tasks created using advanced packages from a B-compatible database.
Common users cannot export subscriptions, directories, and synonyms. If a common user attempts to export related data, the message "WARNING: xx not dumped because current user is not a superuser" is displayed.
Syntax
gs_dump [OPTION]... [DBNAME]
DBNAME does not follow a short or long option. It specifies the database to connect to.
For example:
Specify DBNAME without a -d option preceding it.
gs_dump -p port_number testdb -f dump1.sql
Or,
export PGDATABASE=testdb
gs_dump -p port_number -f dump1.sql
Environment variable: PGDATABASE
Parameters
Common parameters:
- -f, --file=<FILE_NAME>
Sends the output to the specified file or directory. If this parameter is omitted, the standard output is used. If the output format is -F c, -F d, or -F t, the -f parameter must be specified. If the value of -f contains a directory, the current user must have read and write permissions on that directory.
- -F, --format=c|d|t|p
Selects an exported file format. The formats are as follows:
- p|plain: generates a text SQL script file. This is the default value.
- c|custom: outputs a custom-format archive file suitable for input into gs_restore. This is the most flexible output format: archived items can be manually selected and reordered during restoration. An archive in this format is compressed by default.
- d|directory: creates a directory containing directory files and the data files of tables and BLOB objects.
- t|tar: outputs a tar-format archive as the input of gs_restore. The .tar format is compatible with the directory format. Extracting a .tar archive generates a valid directory-format archive. However, the .tar archive cannot be further compressed and has an 8-GB limitation on the size of a single table. The order of table data items cannot be changed during restoration.
- -v, --verbose
Specifies the verbose mode. If it is specified, gs_dump writes detailed object comments and start/stop times to the dump file, and progress messages to standard error.
- -V, --version
Prints the gs_dump version and exits.
- -Z, --compress=0-9
Specifies the used compression level.
Value range: 0 to 9
- 0 indicates no compression.
- 1 indicates the lowest compression ratio and the fastest processing speed.
- 9 indicates the highest compression ratio and the slowest processing speed.
For the custom-format archive, this option specifies the compression level of a single table data segment. By default, data is compressed at a medium level. The .tar archive format and plain-text format do not support compression currently.
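For example, the following sketch (reusing this page's port_number and testdb placeholders) dumps to a custom-format archive at the highest compression level:
gs_dump -p port_number testdb -F c -Z 9 -f backup/MPPDB_backup.dmp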
- --lock-wait-timeout=TIMEOUT
Does not wait indefinitely to acquire shared table locks at the beginning of the dump. Instead, the dump fails if a table cannot be locked within the specified time. The timeout interval can be specified in any of the formats accepted by SET statement_timeout.
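For example, the following sketch fails the dump if a table lock cannot be acquired within two minutes (2min is an arbitrary illustrative value in a format accepted by SET statement_timeout):
gs_dump -p port_number testdb --lock-wait-timeout=2min -f backup/MPPDB_backup.sql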
- -?, --help
Displays help information about gs_dump parameters and exits.
Dump parameters:
- -a, --data-only
Generates only the data, not the schema (data definition). Dumps the table data, big objects, and sequence values.
- -b, --blobs
Reserved for function extension. The option is not recommended.
- -c, --clean
Writes the commands that clean (delete) database objects to the backup file before the commands that create them. (If no such objects exist in the target database, gsql or gs_restore may display some error information.)
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -C, --create
Specifies that the backup starts from the command for creating and connecting to a database. (If a command script is executed in this mode, it does not matter which database is connected before the script is executed.)
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -E, --encoding=ENCODING
Creates the dump in the specified character set encoding. By default, the dump file is created in the database encoding. (Alternatively, you can set the environment variable PGCLIENTENCODING to the required dump encoding.)
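For example, to create the dump in UTF8 regardless of the database encoding (a sketch reusing this page's port_number and testdb placeholders):
gs_dump -p port_number testdb -E UTF8 -f backup/MPPDB_backup.sql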
- -n, --schema=SCHEMA
Dumps only schemas matching the schema names. This option contains the schema and all its contained objects. If this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by specifying multiple -n options. The schema parameter is interpreted as a pattern according to the same rule used by the \d command of gsql. Therefore, multiple schemas can also be selected by writing wildcard characters in the pattern. When you use wildcard characters, quote patterns to prevent the shell from expanding the wildcard characters.
- If -n is specified, gs_dump does not dump any other database objects that the selected schemas might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be automatically restored to an empty database.
- If -n is specified, the non-schema objects are not dumped.
- In M-compatibility mode, if a database is created by running CREATE DATABASE with the templatem template, export it by specifying db_name; if a database is created by running CREATE DATABASE db_name, export it by specifying -n, because such a database is equivalent to a schema.
Multiple schemas can be dumped. Entering -n schemaname multiple times dumps multiple schemas.
For example:
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -n sch1 -n sch2
In the preceding example, sch1 and sch2 are dumped.
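Wildcard patterns work as well; quote the pattern so that the shell does not expand it (sch* is a hypothetical pattern that would match sch1 and sch2 above):
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -n 'sch*'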
- -N, --exclude-schema=SCHEMA
Does not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n. -N can be specified multiple times to exclude schemas matching any of the specified patterns.
When both -n and -N are specified, only the schemas that match at least one -n option and no -N option are dumped. If -N is specified without -n, the schemas matching -N are excluded from an otherwise normal dump.
To exclude multiple schemas during the dump, specify -N multiple times.
For example:
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -N sch1 -N sch2
In the preceding example, sch1 and sch2 will be excluded during the dumping.
- -o, --oids
Dumps OIDs as part of the data in each table. Use this option if your application references the OID columns in some way (for example, in a foreign key constraint). Otherwise, do not use this parameter.
- -O, --no-owner
Does not output commands to set ownership of objects to match the original database. By default, gs_dump issues the ALTER OWNER or SET SESSION AUTHORIZATION command to set ownership of created database objects. These statements will fail when the script runs unless it is started by a system administrator (or the same user who owns all of the objects in the script). To make a script that can be restored by any user and that gives that user ownership of all objects, specify -O.
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -s, --schema-only
Dumps only the object definitions (schema), not the data.
- -S, --sysadmin=NAME
Reserved for function extension. The option is not recommended.
- -t, --table=TABLE
Specifies a list of tables, views, sequences, or foreign tables to be dumped. You can use multiple -t parameters or wildcard characters to specify tables.
When you use wildcard characters, quote patterns to prevent the shell from expanding the wildcard characters.
The -n and -N options have no effect when -t is used, because tables selected by using -t will be dumped regardless of those options, and non-table objects will not be dumped.
- The number of -t parameters must be less than or equal to 100.
- If the number of -t parameters is greater than 100, you are advised to use the --include-table-file parameter to replace some -t parameters.
- If -t is specified, gs_dump does not dump any other database objects that the selected tables might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be automatically restored to an empty database.
- -t tablename only dumps visible tables in the default search path. -t *.tablename dumps tablename tables in all the schemas of the dumped database. -t schema.table dumps tables in a specific schema.
- -t tablename does not export trigger information from a table.
- If the table name contains uppercase letters, you need to add \" to the table name when using the -t parameter to specify the export. To export the "abC" table, specify -t \"abC\". To export the schema."abC" table, specify -t schema.\"abC\".
- -t "" does not match any table.
For example:
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -t schema1.table1 -t schema2.table2
In the preceding example, schema1.table1 and schema2.table2 are dumped.
- --include-table-file=<FILE_NAME>
Specifies a file that lists the tables to be dumped, one schema.table entry per line (the same format as --exclude-table-file).
- -T, --exclude-table=TABLE
Specifies a list of table, view, sequence, or foreign table objects not to be dumped. You can use multiple -T parameters or wildcard characters to specify multiple lists.
When both -t and -T are specified, it will dump the objects in the -t list, but not those in the -T list.
For example:
gs_dump -h host_name -p port_number testdb -f backup/bkp_shl2.sql -T table1 -T table2
In the preceding example, table1 and table2 are excluded from the dumping.
- --exclude-table-file=<FILE_NAME>
Specifies a file that lists the tables not to be dumped.
As with --include-table-file, the content format of this parameter is as follows:
schema1.table1
schema2.table2
......
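For example, assuming a hypothetical list file /home/exclude_tables.txt in the format above:
gs_dump -p port_number testdb --exclude-table-file=/home/exclude_tables.txt -f backup/MPPDB_backup.sql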
- -x, --no-acl
Prevents the dumping of access permissions (grant/revoke commands). Only ACL objects are affected. Privilege objects are not affected.
- -q, --target
Exports text files compatible with databases of other versions. Currently, only v1 and v5 can be specified. If another value is specified, no error is reported but the setting does not take effect. The v1 value exports GaussDB v5 data as a text file compatible with GaussDB v1. The v5 value exports GaussDB v5 data as a text file, reducing errors that may occur when data is imported to a GaussDB v5 database.
When v1 is specified, you are advised to use it along with parameters such as --exclude-guc="enable_cluster_resize", --exclude-function, and --exclude-with. Otherwise, an error may be reported during data import to a GaussDB v1 database.
- -g, --exclude-guc
Reserved for function extension. The option is not recommended.
- --exclude-function
Specifies that functions and stored procedures are not exported.
- --exclude-with
Specifies that the description such as WITH(orientation=row, compression=on) is not added to the end of the exported table definition.
- --binary-upgrade
Reserved for function extension. The option is not recommended.
M-compatible databases do not support this option.
- --binary-upgrade-usermap="USER1=USER2"
Reserved for function extension. The option is not recommended.
- --column-inserts/--attribute-inserts
Exports data by running the INSERT command with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will cause a slow restoration. However, since this option generates an independent statement for each row, an error in reloading a row causes only the loss of that row rather than the entire table contents.
M-compatible databases do not support this option.
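For illustration, with this option the dump contains one statement per row, of the following shape (table1 and its columns a, b, and c are hypothetical):
INSERT INTO table1 (a, b, c) VALUES (1, 2, 3);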
- --disable-dollar-quoting
Disables the use of dollar-sign ($) quoting for function bodies, and forces them to be quoted using SQL standard string syntax.
- --include-alter-table
- --disable-triggers
Reserved for function extension. The option is not recommended.
- --exclude-table-data=TABLE
Does not dump data of tables matching any of the given table patterns. The pattern is interpreted according to the same rules as for -t.
--exclude-table-data can be entered more than once to exclude tables matching any of several patterns. This option is helpful when you need the definition of a table but not its data.
To exclude data of all tables in the database, see -s, --schema-only.
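For example, to dump the definition of schema1.table1 without its rows (a sketch reusing this page's placeholders):
gs_dump -p port_number testdb --exclude-table-data=schema1.table1 -f backup/MPPDB_backup.sql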
- --inserts
Dumps data using INSERT statements (rather than COPY). This will cause a slow restoration.
However, since this option generates an independent statement for each row, an error in reloading a row causes only the loss of that row rather than the entire table contents. The restoration may fail if the column order is rearranged. The --column-inserts option is safe against column order changes, though even slower.
M-compatible databases do not support this option.
- --no-security-labels
Reserved for function extension. The option is not recommended.
- --no-tablespaces
Does not output commands to select tablespaces. With this option, all objects will be created in the default tablespace during restoration.
This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- --no-unlogged-table-data
Reserved for function extension. The option is not recommended.
- --non-lock-table
- --quote-all-identifiers
Forcibly quotes all identifiers. This parameter is useful when you dump a database for migration to a later version, in which additional keywords may be introduced.
M-compatible databases do not support this option.
- --section=SECTION
Dumps only the named section. The section name can be pre-data, data, or post-data.
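For example, to dump only the pre-data section (a sketch reusing this page's placeholders):
gs_dump -p port_number testdb --section=pre-data -f backup/MPPDB_backup.sql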
- --serializable-deferrable
Uses a serializable transaction for the dump to ensure that the snapshot used is consistent with later database states. It does this by waiting for a point in the transaction stream at which no anomalies can be present, so that the dump cannot fail or cause other transactions to roll back with a serialization failure.
This option is of no benefit for a dump intended only for disaster recovery. It is useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it, the dump may reflect a state that is not consistent with any serial execution of the transactions eventually committed.
This option makes no difference if there are no active read-write transactions when gs_dump is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time.
- --use-set-session-authorization
Specifies that the standard SQL SET SESSION AUTHORIZATION command rather than ALTER OWNER is generated to determine the object ownership. This makes the dump more standards-compatible. However, if a dump file contains objects that have historical problems, restoration may fail. A dump using SET SESSION AUTHORIZATION requires system administrator permissions, whereas ALTER OWNER requires lower permissions. However, the SET SESSION AUTHORIZATION statement supports ciphertext passwords. A script exported using this parameter may fail to restore; therefore, you are advised not to use this parameter.
Application scope of SET SESSION AUTHORIZATION:
- The system administrator can switch to a common user through the SET SESSION AUTHORIZATION statement, but cannot switch to an initial user, sysadmin, opradmin, monadmin, poladmin, or auditadmin.
- Other users cannot switch to another user through the SET SESSION AUTHORIZATION statement.
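For illustration, a dump generated with this option switches ownership using statements of the following shape (user1 is a hypothetical role name):
SET SESSION AUTHORIZATION 'user1';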
- --with-encryption=AES128
Specifies that the dumped data is encrypted using AES128.
- --with-key=KEY
Specifies the key used for AES128 encryption. The key rules are as follows:
- Consists of 8 to 16 characters.
- Contains at least three of the following character types: uppercase characters, lowercase characters, digits, and special characters (limited to ~!@#$%^&*()-_=+\|[{}];:,<.>/?).
Stored procedures and functions cannot be exported in encrypted mode.
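For example, the following sketch exports an encrypted plain-text dump (Abcd@123 is a hypothetical key that satisfies the rules above):
gs_dump -p port_number testdb -F p --with-encryption=AES128 --with-key=Abcd@123 -f backup/MPPDB_backup.sql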
- --with-salt=RANDVALUES
- --include-extensions
Includes extensions in the dump.
The extended function is for internal use only. You are advised not to use it.
- --include-depend-objs
Includes information about the objects that depend on the specified object in the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
- --exclude-self
Excludes information about the specified object from the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
- --pipeline
Uses a pipe to transmit the password. This parameter cannot be used on devices.
- --dont-overwrite-file
By default, existing files in plain-text, .tar, or custom format are overwritten. If this option is specified, gs_dump does not overwrite an existing file. This option is not applicable to the directory format.
For example:
Assume that the backup.sql file already exists in the current directory. If you specify -f backup.sql without this option, the newly generated backup.sql overwrites the existing file.
If the backup file exists and --dont-overwrite-file is specified, an error will be reported with the message that the dump file exists.
gs_dump -p port_number testdb -f backup.sql -F plain --dont-overwrite-file
- The -s/--schema-only and -a/--data-only options do not coexist.
- The -c/--clean and -a/--data-only parameters do not coexist.
- --inserts/--column-inserts and -o/--oids do not coexist, because OIDs cannot be set using the INSERT statement.
- --role must be used with --rolepassword.
- --binary-upgrade-usermap must be used with --binary-upgrade.
- --include-depend-objs/--exclude-self takes effect only when -t/--include-table-file is specified.
- --exclude-self must be used with --include-depend-objs.
- --with-encryption=AES128 supports only -F p/plain.
- --with-key=KEY supports only -F p/plain.
- --with-salt=RANDVALUES is called by gs_dumpall and does not require manual input.
Connection parameters:
- -h, --host=HOSTNAME
Specifies the host name. If the value begins with a slash (/), it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable (if available). Otherwise, a Unix domain socket connection is attempted.
This parameter is valid only for the external database. For the local host in the database, only 127.0.0.1 can be used.
Environment variable: PGHOST
- -p, --port=PORT
Specifies the host port number. If the thread pool is enabled, you are advised to use the pooler port, that is, the host port number plus 1.
Environment variable: PGPORT
- -U, --username=NAME
Specifies the username for connecting to a host. The initial user cannot be used for cross-node execution.
Environment variable: PGUSER
- -w, --no-password
Never issues a password prompt. The connection attempt fails if the host requires password authentication and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user is present to enter a password.
- -W, --password=PASSWORD
Specifies the user password for connection. If the authentication policy of the host is trust, the password of the system administrator is not verified. That is, you do not need to enter the -W option. If this parameter is not specified and you are not a system administrator, the system prompts you to enter the password in interactive mode. To ensure system security, you are advised to enter the password in interactive mode.
- --role=ROLENAME
Specifies a role name to be used for creating the dump. If this option is selected, the SET ROLE command will be issued after gs_dump connects to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_dump but can switch to a role that has them. Some installations have a policy against logging in directly as a system administrator; this option allows dumping data without violating the policy.
- --rolepassword=ROLEPASSWORD
Specifies the password of the role specified by --role.
Description
If your database has any local additions to the template1 database, restore the output of the gs_dump tool into an empty database with caution. Otherwise, you are likely to obtain errors due to duplicate definitions of the added objects. To create an empty database without any local additions, copy data from template0 rather than template1. For example:
CREATE DATABASE foo WITH TEMPLATE template0;
The tar file size must be smaller than 8 GB. (This is a limitation of the tar file format.) The total size of a .tar archive, and of any of the other output formats, is not limited, except possibly by the OS.
The dump file generated by gs_dump does not contain the statistics used by the optimizer to make execution plans. Therefore, you are advised to run ANALYZE after restoring from a dump file to ensure optimal performance. The dump file does not contain any ALTER DATABASE ... SET commands; these settings are dumped by gs_dumpall, along with database users and other installation settings.
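For example, statistics can be refreshed after restoration with a command of the following shape (a sketch mirroring the gsql -c usage shown in Example 6):
gsql -p port_number testdb -r -c "ANALYZE;"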
Example
Use gs_dump to dump a database as an SQL text file or a file in other formats.
In the following examples, backup/MPPDB_backup.sql indicates an exported file, where backup indicates a path relative to the current directory. 37300 indicates the port number of the database server. testdb indicates the name of the database to be accessed.
Before exporting files, ensure that the directory exists and you have the read and write permissions on the directory.
Example 1: Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup.sql file is in plain-text format.
gs_dump -U omm -f backup/MPPDB_backup.sql -p 37300 testdb -F p
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: The total objects number is 356.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: [100.00%] 356 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 09:49:17]: total time: 1274 ms
Use gsql to import data from the exported plain-text file.
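For example (a sketch mirroring the restore command used in Example 6):
gsql -p 37300 testdb -r -f backup/MPPDB_backup.sql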
Example 2: Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup.tar file is in .tar format.
gs_dump -U omm -f backup/MPPDB_backup.tar -p 37300 testdb -F t
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:24]: The total objects number is 1369.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:53]: [100.00%] 1369 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:53]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:02:53]: total time: 50086 ms
Example 3: Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup.dmp file is in custom format.
gs_dump -U omm -f backup/MPPDB_backup.dmp -p 37300 testdb -F c
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:05:40]: The total objects number is 1369.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:06:03]: [100.00%] 1369 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:06:03]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:06:03]: total time: 36620 ms
Example 4: Use gs_dump to export the full information of the testdb database. The exported MPPDB_backup file is in directory format.
gs_dump -U omm -f backup/MPPDB_backup -p 37300 testdb -F d
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:04]: The total objects number is 1369.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:23]: [100.00%] 1369 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:23]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:16:23]: total time: 33977 ms
Example 5: Use gs_dump to export the information of the testdb database, excluding the information of the table specified in the /home/MPPDB_temp.sql file. The exported MPPDB_backup.sql file is in plain-text format.
gs_dump -U omm -p 37300 testdb --exclude-table-file=/home/MPPDB_temp.sql -f backup/MPPDB_backup.sql
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:01]: The total objects number is 1367.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:22]: [100.00%] 1367 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:22]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-27 10:37:22]: total time: 37017 ms
Example 6: Use gs_dump to export only the information about the views that depend on the testtable table. Create another testtable table, and then restore the views that depend on it.
Back up only the views that depend on the testtable table.
gs_dump -U omm -s -p 37300 testdb -t PUBLIC.testtable --include-depend-objs --exclude-self -f backup/MPPDB_backup.sql -F p
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: The total objects number is 331.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: [100.00%] 331 objects have been dumped.
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: dump database testdb successfully
gs_dump[user='omm'][localhost][port='37300'][testdb][2018-06-15 14:12:54]: total time: 327 ms
Change the name of the testtable table.
gsql -p 37300 testdb -r -c "ALTER TABLE PUBLIC.testtable RENAME TO testtable_bak;"
Create another testtable table.
CREATE TABLE PUBLIC.testtable(a int, b int, c int);
Restore the views for the new testtable table.
gsql -p 37300 testdb -r -f backup/MPPDB_backup.sql
Related Commands
gs_restore, gs_dumpall