CREATE TABLE
Description
Creates an initially empty table in the current database. The table will be owned by the creator.
Precautions
- If an error occurs during table creation, the empty disk files created before the error may not be deleted by the system after the error is fixed, until the next automatic clearance. This problem seldom occurs and does not affect the running of the database.
- When JDBC is used, the DEFAULT value can be set through PreparedStatement.
- A user granted the CREATE ANY TABLE permission can create tables in the public schema and in user schemas. To create a table that contains serial columns, the user must also have the CREATE ANY SEQUENCE permission to create sequences.
- The XML type cannot be used as a primary key or foreign key.
- The number of table constraints cannot exceed 32,767.
Syntax
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name
    ({ column_name data_type [ CHARACTER SET | CHARSET charset ] [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ]
       | table_constraint
       | LIKE source_table [ like_option [...] ] } [, ... ])
    [ AUTO_INCREMENT [ = ] value ]
    [ [DEFAULT] CHARACTER SET | CHARSET [ = ] default_charset ]
    [ [DEFAULT] COLLATE [ = ] default_collation ]
    [ WITH ( {storage_parameter = value} [, ... ] ) ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS } ]
    [ COMPRESS | NOCOMPRESS ]
    [ TABLESPACE tablespace_name ];
- column_constraint is as follows:
[ CONSTRAINT constraint_name ]
{ NOT NULL |
  NULL |
  CHECK ( expression ) |
  DEFAULT default_expr |
  GENERATED ALWAYS AS ( generation_expr ) [STORED] |
  AUTO_INCREMENT |
  UNIQUE [KEY] index_parameters |
  ENCRYPTED WITH ( COLUMN_ENCRYPTION_KEY = column_encryption_key, ENCRYPTION_TYPE = encryption_type_value ) |
  PRIMARY KEY index_parameters |
  REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
- compress_mode of a column is as follows:
{ DELTA | PREFIX | DICTIONARY | NUMSTR | NOCOMPRESS }
- table_constraint is as follows:
[ CONSTRAINT [ constraint_name ] ]
{ CHECK ( expression ) |
  UNIQUE [ index_name ] [ USING method ] ( { { column_name | ( expression ) } [ ASC | DESC ] } [, ... ] ) index_parameters |
  PRIMARY KEY [ USING method ] ( { column_name [ ASC | DESC ] } [, ... ] ) index_parameters |
  FOREIGN KEY [ index_name ] ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
- like_option is as follows:
{ INCLUDING | EXCLUDING } { DEFAULTS | GENERATED | CONSTRAINTS | INDEXES | STORAGE | COMMENTS | PARTITION | RELOPTIONS | ALL }
- index_parameters is as follows:
[ WITH ( {storage_parameter = value} [, ... ] ) ] [ USING INDEX TABLESPACE tablespace_name ]
Parameters
- UNLOGGED
If this keyword is specified, the created table is an unlogged table. Data written to unlogged tables is not written to the write-ahead log (WAL), which makes them considerably faster than ordinary tables. However, an unlogged table is automatically truncated after a conflict, OS restart, database restart, primary/standby switchover, power-off, or abnormal shutdown, which incurs data loss risks. Contents of an unlogged table are not replicated to standby nodes, and indexes created on an unlogged table are not logged either.
Usage scenario: Unlogged tables do not guarantee data safety. Users should back up their data before using unlogged tables; for example, data should be backed up before a system upgrade.
Troubleshooting: If index data of an unlogged table is lost due to an unexpected operation such as an unclean shutdown, rebuild the affected indexes.
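For instance, a minimal sketch of rebuilding such indexes (the table and index names here are illustrative):
-- Rebuild a single affected index, or all indexes on the unlogged table.
gaussdb=# REINDEX INDEX idx_unlogged_demo;
gaussdb=# REINDEX TABLE unlogged_demo;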
- GLOBAL | LOCAL
When creating a temporary table, you can specify the GLOBAL or LOCAL keyword before TEMP or TEMPORARY. If the keyword GLOBAL is specified, GaussDB creates a global temporary table. Otherwise, GaussDB creates a local temporary table.
- TEMPORARY | TEMP
If TEMP or TEMPORARY is specified, the created table is a temporary table. Temporary tables are classified into global temporary tables and local temporary tables. If the keyword GLOBAL is specified when a temporary table is created, the table is a global temporary table. Otherwise, the table is a local temporary table.
The metadata of a global temporary table is visible to all sessions, and the metadata still exists after the sessions end. The user data, indexes, and statistics of one session are isolated from those of other sessions; each session can view and modify only the data it has committed. A global temporary table can be created with ON COMMIT PRESERVE ROWS or ON COMMIT DELETE ROWS. The former creates a session-level table, whose user data is automatically cleared when the session ends; the latter creates a transaction-level table, whose user data is automatically cleared when a commit or rollback operation is performed. If the ON COMMIT option is not specified during table creation, the session level is used by default. Unlike local temporary tables, a global temporary table can be created in a schema whose name does not start with pg_temp_.
A local temporary table is automatically dropped at the end of the current session. Therefore, you can create and use temporary tables in the current session as long as the connected database node in the session is normal. Temporary tables are created only in the current session. If a DDL statement involves operations on temporary tables, a DDL error will be generated. Therefore, you are advised not to perform operations on temporary tables in DDL statements. TEMP is equivalent to TEMPORARY.
- Local temporary tables are visible to the current session through the schema starting with pg_temp. Users should not delete schemas starting with pg_temp or pg_toast_temp.
- If TEMPORARY or TEMP is not specified when you create a table but its schema is set to that starting with pg_temp_ in the current session, the table will be created as a temporary table.
- If global temporary tables and indexes are being used by other sessions, do not perform ALTER or DROP (except the ALTER INDEX index_name REBUILD command).
- The DDL of a global temporary table affects only the user data and indexes of the current session. For example, TRUNCATE, REINDEX, and ANALYZE are valid only for the current session.
- You can set the GUC parameter max_active_global_temporary_table to determine whether to enable the global temporary table function. If max_active_global_temporary_table is set to 0, the global temporary table function is disabled.
- A temporary table is visible only to the current session. Therefore, it cannot be used together with \parallel on.
- Temporary tables do not support primary/standby switchover.
- Global temporary tables are not covered by automatic clearance. In persistent connection scenarios, you are advised to use global temporary tables with the ON COMMIT DELETE ROWS clause or to periodically run the VACUUM statement manually. Otherwise, commit logs (clogs) may not be reclaimed.
- Global temporary tables do not support the following scenarios:
- Global temporary sequences cannot be created. Global temporary tables of each session use shared sequences, which can only ensure uniqueness but not continuity.
- Global temporary views cannot be created.
- Partitioned tables cannot be created.
- Hash bucket tables cannot be created.
- Extended statistics are not supported.
- The ON COMMIT DROP attribute is not supported.
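For illustration, the following sketch contrasts the two kinds of global temporary tables described above (table names are examples only):
-- Session-level global temporary table: user data is kept until the session ends.
gaussdb=# CREATE GLOBAL TEMPORARY TABLE gtt_session ( id INTEGER ) ON COMMIT PRESERVE ROWS;
-- Transaction-level global temporary table: user data is cleared at commit or rollback.
gaussdb=# CREATE GLOBAL TEMPORARY TABLE gtt_tx ( id INTEGER ) ON COMMIT DELETE ROWS;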
- IF NOT EXISTS
Sends a notice, but does not throw an error, if a table with the same name exists.
- table_name
Specifies the name of the table to be created.
Some processing logic of materialized views uses the table name prefix to determine whether a table is a materialized view log table or a table associated with a materialized view. Therefore, do not create tables whose names start with mlog_ or matviewmap_; otherwise, some functions of the table are affected.
- column_name
Specifies the name of a column to be created in the new table.
- constraint_name
Specifies the name of the constraint specified during table creation.
constraint_name is optional in B-compatible mode (sql_compatibility = 'B'). In other modes, constraint_name must be specified.
- index_name
Specifies the index name.
- The index_name is supported only in B-compatible databases (that is, sql_compatibility = 'B').
- For foreign key constraints, if constraint_name and index_name are specified at the same time, constraint_name is used as the index name.
- For a unique key constraint, if both constraint_name and index_name are specified, index_name is used as the index name.
- USING method
Specifies the name of the index method to be used.
For details about the value range, see USING method in Parameters.
- The USING method is supported only in B-compatible databases (that is, sql_compatibility = 'B').
- In B-compatible mode, if USING method is not specified, the default index method is btree for ASTORE or UB-tree for USTORE.
- ASC | DESC
ASC specifies an ascending (default) sort order. DESC specifies a descending sort order.
The ASC|DESC is supported only in B-compatible databases (that is, sql_compatibility = 'B').
- expression
Specifies an expression index constraint based on one or more columns of the table. It must be written in parentheses.
Expression indexes in the UNIQUE constraint are supported only in B-compatible databases (that is, sql_compatibility = 'B').
- data_type
Specifies the data type of the column.
- compress_mode
Specifies the compression mode of a table column, that is, the compression algorithm preferentially used for the column. Row-store tables do not support compression.
Value range: DELTA, PREFIX, DICTIONARY, NUMSTR, and NOCOMPRESS
- CHARACTER SET | CHARSET charset
This syntax is supported only in B-compatible databases (that is, sql_compatibility = 'B'). Specifies the character set of a table column. If this parameter is specified separately, the collation of the table column is set to the default collation of the specified character set.
- COLLATE collation
Assigns a collation to the column (which must be of a collatable data type). If no collation is specified, the default collation is used. You can run the select * from pg_collation; command to query collation rules from the pg_collation system catalog. The default collation rule is the row starting with default in the query result. In B-compatible databases (that is, sql_compatibility = 'B'), utf8mb4_bin, utf8mb4_general_ci, utf8mb4_unicode_ci and binary are also supported.
- Only the character type supports the specified character set. If the binary character set or collation is specified, the character type is converted to the corresponding binary type. If the type mapping does not exist, an error is reported. Currently, only the mapping from the TEXT type to the BLOB type is available.
- Except the binary character set and collation, only the character set that is the same as the database encoding can be specified.
- If the character set or collation of a column is not explicitly specified and the default character set or collation of the table is specified, the character set or collation of the column is inherited from the table. If the default character set or collation of a table does not exist, the character set and collation of table columns inherit the character set and collation of the current database when b_format_behavior_compat_options is set to 'default_collation'.
Table 1 Character sets and collations supported in B-compatible mode (sql_compatibility = 'B')

| Collation          | Character Set  | Description                                                |
|--------------------|----------------|------------------------------------------------------------|
| utf8mb4_general_ci | utf8mb4 (utf8) | The general collation is used, which is case insensitive.  |
| utf8mb4_unicode_ci | utf8mb4 (utf8) | The Unicode collation is used, which is case insensitive.  |
| utf8mb4_bin        | utf8mb4 (utf8) | The binary collation is used, which is case sensitive.     |
| binary             | binary         | The binary collation is used.                              |
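For instance, in a B-compatible database whose encoding allows utf8mb4, a column-level character set and collation might be specified as in the following sketch (table and column names are illustrative):
gaussdb=# CREATE TABLE t_collation ( c1 VARCHAR(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci, c2 TEXT COLLATE utf8mb4_bin );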
- LIKE source_table [ like_option ... ]
Specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.
The new table and the original table are decoupled after creation is complete. Changes to the original table will not be applied to the new table, and it is not possible to include data of the new table in scans of the original table.
Columns and constraints copied by LIKE are not merged with identically named columns and constraints. If the same name is specified explicitly or appears in another LIKE clause, an error is reported.
- The default expressions are copied from the original table to the new table only if INCLUDING DEFAULTS is specified. The default behavior is to exclude default expressions, resulting in the copied columns in the new table having default values NULL.
- If INCLUDING GENERATED is specified, the generated expression of the source table column is copied to the new table. By default, the generated expression is not copied.
- The CHECK constraints are copied from the original table to the new table only when INCLUDING CONSTRAINTS is specified. Other types of constraints are never copied to the new table. Not-null constraints are always copied to the new table. These rules also apply to column constraints and table constraints.
- Any indexes on the original table will not be created on the new table, unless the INCLUDING INDEXES clause is specified.
- STORAGE settings for the source column definitions are copied only if INCLUDING STORAGE is specified. The default behavior is to exclude STORAGE settings.
- If INCLUDING COMMENTS is specified, comments for the copied columns, constraints, and indexes are copied. The default behavior is to exclude comments.
- If INCLUDING PARTITION is specified, the partition definitions of the source table are copied to the new table, and the new table no longer uses the PARTITION BY clause. The default behavior is to exclude partition definition of the original table. If the source table has an index, you can use the INCLUDING PARTITION INCLUDING INDEXES syntax. If only INCLUDING INDEXES is used for a partitioned table, the target table will be defined as an ordinary table, but the index is a partitioned index. In this case, an error will be reported because ordinary tables do not support partitioned indexes.
- If INCLUDING RELOPTIONS is specified, the new table will copy the storage parameters (that is, the WITH clause) of the source table. The default behavior is to exclude the storage parameters of the original table.
- INCLUDING ALL contains the meaning of INCLUDING DEFAULTS, INCLUDING GENERATED, INCLUDING CONSTRAINTS, INCLUDING INDEXES, INCLUDING STORAGE, INCLUDING COMMENTS, INCLUDING PARTITION, and INCLUDING RELOPTIONS.
- If the original table contains a sequence with the SERIAL, BIGSERIAL, SMALLSERIAL, or LARGESERIAL data type, or a column in the original table is a sequence by default and the sequence is created for this table (created by using CREATE SEQUENCE ... OWNED BY), these sequences will not be copied to the new table, and another sequence specific to the new table will be created. This is different from earlier versions. To share a sequence between the source table and new table, create a shared sequence (do not use OWNED BY) and set a column in the source table to this sequence.
- You are advised not to set a column in the source table to the sequence specific to another table especially when the table is distributed in specific node groups, because doing so may result in CREATE TABLE ... LIKE execution failures. In addition, doing so may cause the sequence to become invalid in the source sequence because the sequence will also be deleted from the source table when it is deleted from the table that the sequence is specific to. To share a sequence among multiple tables, you are advised to create a shared sequence for them.
- For a partitioned table, EXCLUDING must be used together with INCLUDING ALL, for example, INCLUDING ALL EXCLUDING DEFAULTS, which copies everything from the source partitioned table except the default expressions.
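As an illustration of the options above, the following sketch copies a source table with and without its default expressions (table names are examples; tpcds.warehouse_t1 is only assumed to exist):
-- Copy the column definitions plus the defaults and indexes of the source table.
gaussdb=# CREATE TABLE warehouse_like1 (LIKE tpcds.warehouse_t1 INCLUDING DEFAULTS INCLUDING INDEXES);
-- Copy everything except the default expressions.
gaussdb=# CREATE TABLE warehouse_like2 (LIKE tpcds.warehouse_t1 INCLUDING ALL EXCLUDING DEFAULTS);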
- AUTO_INCREMENT [ = ] value
This clause specifies an initial value for an auto-increment column. The value must be a positive integer and cannot exceed 2^127 – 1.
This clause takes effect only when sql_compatibility is set to 'B'.
- WITH ( { storage_parameter = value } [, ... ] )
Specifies an optional storage parameter for a table or an index. The WITH clause used for tables can also contain OIDS=FALSE to specify that rows of the new table should not contain OIDs.
When using NUMERIC of any precision to define a column, specify the precision p and scale s. If precision and scale are not specified, the input value is displayed as is.
The description of parameters is as follows:
- FILLFACTOR
The fill factor of a table is a percentage from 10 to 100. If the Ustore storage engine is used, the default value is 92. If the Astore storage engine is used, the default value is 100 (completely filled). When a smaller fill factor is specified, INSERT operations fill table pages only to the indicated percentage. The remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page, which is more efficient than placing it on a different page. For a table whose entries are never updated, setting the fill factor to 100 (complete filling) is the best choice, but in heavily updated tables a smaller fill factor would be appropriate.
Value range: 10–100
- ORIENTATION
Specifies the storage mode of table data. This parameter can be set only once.
Value range:
- ROW indicates that table data is stored in rows.
ROW applies to OLTP services and to scenarios with a large number of point queries or addition/deletion operations.
Default value:
If an ordinary tablespace is specified, the default is ROW.
- STORAGE_TYPE
Specifies the storage engine type. This parameter cannot be modified once it is set.
Value range:
- USTORE indicates that tables support the inplace-update storage engine. Note that the track_counts and track_activities parameters must be enabled when the Ustore table is used. Otherwise, space bloating may occur.
- ASTORE indicates that tables support the append-only storage engine.
Default value:
If this parameter is not specified, data is stored in inplace-update mode by default.
- INIT_TD
Specifies the number of TDs to be initialized when a Ustore table is created. This parameter can be modified by running the ALTER TABLE command. Note that this parameter affects the maximum size of a single tuple that can be stored on a data page: MAX_TUPLE_SIZE = BLCKSZ - INIT_TD * TD_SIZE. For example, if you change INIT_TD from 4 to 8, the maximum size of a single tuple decreases by 4 x TD_SIZE. A combined sketch of these storage parameters appears after this parameter list.
Value range: 2–128. The default value is 4.
- COMPRESSION
Specifies the compression level of table data. It determines the compression ratio and time. Generally, the higher the level of compression, the higher the ratio, the longer the time; and the lower the level of compression, the lower the ratio, the shorter the time. The actual compression ratio depends on the distribution mode of table data loaded. Row-store tables do not support compression.
Value range: For row-store tables, only NO is supported.
Default value: NO for row-store tables.
- COMPRESSLEVEL
Specifies the table data compression ratio and duration at the same compression level. This divides a compression level into sublevels, providing more choices for compression ratio and duration. As the value becomes greater, the compression ratio becomes higher and duration longer at the same compression level.
Value range: 0 to 3. The default value is 0.
- segment
- enable_tde
Creates a TDE table. The prerequisite is that the GUC parameter enable_tde is enabled, the KMS service is enabled, and the GUC parameter tde_cmk_id is correctly configured. This parameter supports only row-store tables. Temporary tables are not supported. The Ustore storage engine is not supported.
Value range: on and off. If this parameter is set to on, transparent data encryption is enabled. If this parameter is set to off, transparent data encryption is disabled but can be enabled later. When a table with encryption enabled is created, a data encryption key is applied for from the KMS.
Default value: off
- parallel_workers
Number of bgworker threads started when an index is created. For example, value 2 indicates that two bgworker threads are started to create indexes concurrently.
Value range: int, [0, 32].
Default value: If this parameter is not set, concurrent index creation is disabled.
- encrypt_algo
Specifies the transparent data encryption algorithm. Before specifying this parameter, ensure that enable_tde is set for the table. The encryption algorithm can be specified only when a table is created; after the table is created, the encryption algorithm cannot be changed. Different tables can use different encryption algorithms.
Value range: a string. The value can be AES_128_CTR or SM4_CTR.
Default value: If enable_tde is not set, the default is null. If enable_tde is set to on or off and encrypt_algo is not set, the default is AES_128_CTR.
- dek_cipher
Ciphertext of the key used for transparent data encryption. When enable_tde is enabled, the system automatically applies for ciphertext creation. You cannot specify the ciphertext. The key rotation function can be used to update the key.
Value range: a string.
If encryption is disabled, the default value is null.
- cmk_id
Master key ID of the database instance used for transparent data encryption. When enable_tde is enabled, the value is obtained from the GUC parameter tde_cmk_id and cannot be specified or modified by users.
Value range: a string.
If encryption is disabled, the default value is null.
- hasuids
If this parameter is set to on, a unique table-level ID is allocated to a tuple when the tuple is updated.
Value range: on and off
Default value: off
- collate
In B-compatible databases (sql_compatibility = 'B'), this parameter is used to record the default collation of tables. Generally, this parameter is used only for internal storage, import, and export. You are advised not to specify or modify this parameter.
Value range: OIDs in the collation that is independently supported in B-compatible databases.
Default value: 0.
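As a combined sketch of the storage parameters described above (parameter values are illustrative and should be tuned for the actual workload):
-- Ustore table with a lower fill factor, more initial TD slots, and two bgworker threads for index builds.
gaussdb=# CREATE TABLE t_storage_params ( id INTEGER, val TEXT ) WITH (STORAGE_TYPE = USTORE, FILLFACTOR = 80, INIT_TD = 8, parallel_workers = 2);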
- WITHOUT OIDS
It is equivalent to WITH(OIDS=FALSE).
- ON COMMIT { PRESERVE ROWS | DELETE ROWS }
ON COMMIT determines the action taken on a temporary table at the end of a transaction block. Currently, PRESERVE ROWS and DELETE ROWS are supported.
- PRESERVE ROWS (default): No special action is taken at the ends of transactions. The temporary table and its table data are unchanged.
- DELETE ROWS: All rows in the temporary table will be deleted at the end of each transaction block.
- COMPRESS | NOCOMPRESS
If you specify COMPRESS in the CREATE TABLE statement, the compression feature is triggered in case of a bulk INSERT operation. If this feature is enabled, a scan is performed for all tuple data within the page to generate a dictionary and then the tuple data is compressed and stored. If NOCOMPRESS is specified, the table is not compressed. Row-store tables do not support compression.
Default value: NOCOMPRESS, that is, tuple data is not compressed before storage.
- TABLESPACE tablespace_name
Specifies the tablespace where the new table is created. If not specified, the default tablespace is used.
- CONSTRAINT constraint_name
Specifies the name of a column or table constraint. The optional constraint clauses specify constraints that new or updated rows must satisfy for an INSERT or UPDATE operation to succeed.
There are two ways to define constraints:
- A column constraint is defined as part of a column definition, and it is bound to a particular column.
- A table constraint is not bound to a particular column but can apply to more than one column.
- NOT NULL
The column is not allowed to contain null values.
- NULL
The column is allowed to contain null values. This is the default setting.
This clause is only provided for compatibility with non-standard SQL databases. It is not recommended.
- CHECK ( expression )
Specifies an expression producing a Boolean result where the INSERT or UPDATE operation of new or updated rows can succeed only when the expression result is TRUE or UNKNOWN; otherwise, an error is thrown and the database is not altered.
A CHECK constraint specified as a column constraint should reference only the column's value, while an expression in a table constraint can reference multiple columns.
<>NULL and !=NULL are invalid in an expression. Change them to IS NOT NULL.
- DEFAULT default_expr
Assigns a default data value to a column. The value can be any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The data type of the default expression must match the data type of the column.
The default expression will be used in any insert operation that does not specify a value for the column. If there is no default value for a column, then the default value is null.
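A brief sketch (the column names and default expressions are arbitrary examples):
gaussdb=# CREATE TABLE t_default ( created_at TIMESTAMP DEFAULT now(), status VARCHAR(10) DEFAULT 'NEW' );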
- GENERATED ALWAYS AS ( generation_expr ) [STORED]
This clause creates a column as a generated column. The value of the generated column is calculated by generation_expr when data is written (inserted or updated). STORED indicates that the value of the generated column is stored as a common column.
- The STORED keyword can be omitted; omitting it has the same semantics as specifying it.
- The generation expression cannot refer to data other than the current row in any way. The generation expression cannot reference other generation columns or system columns. The generation expression cannot return a result set. No subquery, aggregate function, or window function can be used. The function called by the generation expression can only be an immutable function.
- Default values cannot be specified for generated columns.
- The generated column cannot be used as a part of the partition key.
- Do not specify the generated column and the CASCADE, SET NULL, and SET DEFAULT actions of the ON UPDATE constraint clause at the same time. Do not specify the generated column and the SET NULL, and SET DEFAULT actions of the ON DELETE constraint clause at the same time.
- Generated columns are modified and deleted in the same way as common columns. If a common column that a generated column depends on is deleted, the generated column is deleted automatically. The type of a column on which a generated column depends cannot be changed.
- The generated column cannot be directly written. In the INSERT or UPDATE statement, values cannot be specified for generated columns, but the keyword DEFAULT can be specified.
- The permission control for generated columns is the same as that for common columns.
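A minimal sketch of a stored generated column (table and column names are illustrative):
-- total_price is computed from the other two columns whenever a row is inserted or updated.
gaussdb=# CREATE TABLE order_item ( quantity INTEGER, unit_price NUMERIC(10,2), total_price NUMERIC(12,2) GENERATED ALWAYS AS (quantity * unit_price) STORED );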
- AUTO_INCREMENT
Specifies an auto-increment column.
If no value is specified for this column (or the column is set to 0, NULL, or DEFAULT), its value is filled in automatically by the auto-increment counter.
If this column is inserted or updated to a value greater than the current auto-increment counter, the auto-increment counter is updated to this value after the command is executed successfully.
The initial auto-increment value is set by the AUTO_INCREMENT [ = ] value clause. If it is not set, the default value 1 is used.
- The auto-increment column can be specified only when sql_compatibility is set to 'B'.
- The data type of the auto-increment column can only be integer, 4-byte or 8-byte floating point, or Boolean.
- Each table can have only one auto-increment column.
- The auto-increment column must be the first column of a primary key constraint or unique constraint.
- The DEFAULT value cannot be specified for an auto-increment column.
- The expression of the CHECK constraint cannot contain auto-increment columns, and the expression for generating columns cannot contain auto-increment columns.
- You can specify that the auto-increment column can be NULL. If it is not specified, the auto-increment column contains the NOT NULL constraint by default.
- When a table containing an auto-increment column is created, a sequence that depends on the column is created as an auto-increment counter. You are not allowed to modify or delete the sequence using sequence-related functions. You can view the value of the sequence.
- Sequences are not created for auto-increment columns in local temporary tables.
- The auto-increment and refresh operations of the auto-increment counter are not rolled back.
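The following sketch, which assumes sql_compatibility = 'B', declares an auto-increment column as the first column of the primary key with the counter starting at 100 (table and column names are illustrative):
gaussdb=# CREATE TABLE auto_demo ( id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(30) ) AUTO_INCREMENT = 100;
-- Inserting 0, NULL, or DEFAULT into id lets the auto-increment counter assign 100, 101, and so on.
gaussdb=# INSERT INTO auto_demo(id, name) VALUES (0, 'first'), (DEFAULT, 'second');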
- [DEFAULT] CHARACTER SET | CHARSET [ = ] default_charset
This syntax is supported only when sql_compatibility is set to 'B'. Specifies the default character set of a table. If you specify a character set separately, the default collation of the table is set to the default collation of the specified character set.
- [DEFAULT] COLLATE [ = ] default_collation
This syntax is supported only when sql_compatibility is set to 'B'. Specifies the default collation of a table. If you specify a collation separately, the default character set of the table is set to the character set corresponding to the specified collation. For details about the supported collation, see Table 1 Character sets and collation supported in mode B (sql_compatibility = 'B').
If the character set or collation of a table is not explicitly specified and the default character set or collation of the schema is specified, the character set or collation of the table is inherited from the schema. If the default character set or collation of a schema does not exist, the character set and collation of the table inherit the character set and collation of the current database when b_format_behavior_compat_options is set to 'default_collation'.
- UNIQUE [KEY] index_parameters
UNIQUE ( column_name [, ... ] ) index_parameters
Specifies that a group of one or more columns of a table can contain only unique values.
For the purpose of a unique constraint, null is not considered equal.
UNIQUE KEY can be used only when sql_compatibility is set to 'B', which has the same semantics as UNIQUE.
- PRIMARY KEY index_parameters
PRIMARY KEY ( column_name [, ... ] ) index_parameters
Specifies that a column or columns of a table can contain only unique (non-duplicate) and non-null values.
Only one primary key can be specified for a table.
- REFERENCES reftable [ ( refcolumn ) ] [ MATCH matchtype ] [ ON DELETE action ] [ ON UPDATE action ] (column constraint)
FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] [ MATCH matchtype ] [ ON DELETE action ] [ ON UPDATE action ] (table constraint)
The foreign key constraint requires that the group of one or more columns in the new table contain only values that match values in the referenced column(s) of the referenced table. If refcolumn is omitted, the primary key of reftable is used. The referenced column must be a unique column or the primary key of the referenced table. A foreign key constraint cannot be defined between a temporary table and a permanent table.
There are three types of matching between a reference column and a referenced column:
- MATCH FULL: does not allow one column of a multicolumn foreign key to be NULL unless all foreign key columns are NULL.
- MATCH SIMPLE (default): allows any foreign key column to be NULL.
- MATCH PARTIAL: This option is not supported currently.
In addition, when certain operations are performed on the data in the referenced table, the operations are performed on the corresponding columns in the new table. ON DELETE: specifies the operations to be executed after a referenced row in the referenced table is deleted. ON UPDATE: specifies the operation to be performed when the referenced column data in the referenced table is updated. Possible responses to the ON DELETE and ON UPDATE clauses are as follows:
- NO ACTION (default): When a referenced row is deleted or updated, an error indicating that the foreign key constraint is violated is raised. If the constraint is deferrable and referencing rows still exist, this error is raised when the constraint is checked.
- RESTRICT: When a referenced row is deleted or updated, an error indicating that the foreign key constraint is violated is raised. It is the same as NO ACTION except that the constraint check cannot be deferred.
- CASCADE: deletes any row that references the deleted row from the new table, or updates the value of the referencing column to the new value of the referenced column.
- SET NULL: sets the referencing columns to NULL.
- SET DEFAULT: sets the referencing columns to their default values.
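A hedged sketch of a foreign key with explicit referential actions (table names are illustrative):
gaussdb=# CREATE TABLE dept ( dept_id INTEGER PRIMARY KEY, dept_name VARCHAR(30) );
-- Deleting a dept row removes the matching emp rows; updating a referenced dept_id is rejected while emp rows reference it.
gaussdb=# CREATE TABLE emp ( emp_id INTEGER PRIMARY KEY, dept_id INTEGER REFERENCES dept(dept_id) ON DELETE CASCADE ON UPDATE RESTRICT );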
- DEFERRABLE | NOT DEFERRABLE
Controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction using the SET CONSTRAINTS command. NOT DEFERRABLE is the default value. Currently, only UNIQUE constraints, primary key constraints, and foreign key constraints accept this clause. All the other constraints are not deferrable.
- INITIALLY IMMEDIATE | INITIALLY DEFERRED
If a constraint is deferrable, this clause specifies the default time to check the constraint.
- If the constraint is INITIALLY IMMEDIATE (default value), it is checked after each statement.
- If the constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction.
The constraint check time can be altered using the SET CONSTRAINTS statement.
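For illustration, the following sketch defers the check of a deferrable unique constraint inside a transaction (table and column names are illustrative):
gaussdb=# CREATE TABLE seat ( seat_no INTEGER UNIQUE DEFERRABLE INITIALLY IMMEDIATE );
gaussdb=# BEGIN;
gaussdb=# SET CONSTRAINTS ALL DEFERRED;           -- postpone the uniqueness check until commit
gaussdb=# UPDATE seat SET seat_no = seat_no + 1;  -- transient duplicates are tolerated until COMMIT
gaussdb=# COMMIT;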
- USING INDEX TABLESPACE tablespace_name
Allows selection of the tablespace in which the index associated with a UNIQUE or PRIMARY KEY constraint will be created. If not specified, the index is created in default_tablespace. If default_tablespace is empty, the default tablespace of the database is used.
- ENCRYPTION_TYPE = encryption_type_value
For the encryption type in the ENCRYPTED WITH constraint, the value of encryption_type_value is DETERMINISTIC or RANDOMIZED.
Examples
-- Create a simple table.
gaussdb=# CREATE TABLE tpcds.warehouse_t1 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
gaussdb=# CREATE TABLE tpcds.warehouse_t2 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create a table and set the default value of the W_STATE column to GA.
gaussdb=# CREATE TABLE tpcds.warehouse_t3 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2) DEFAULT 'GA', W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create a table whose W_WAREHOUSE_NAME column has a deferrable unique constraint.
gaussdb=# CREATE TABLE tpcds.warehouse_t4 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) UNIQUE DEFERRABLE, W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create a table with its fill factor set to 70%.
gaussdb=# CREATE TABLE tpcds.warehouse_t5 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), UNIQUE(W_WAREHOUSE_NAME) WITH(fillfactor=70) );
-- Alternatively, use the following syntax:
gaussdb=# CREATE TABLE tpcds.warehouse_t6 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) UNIQUE, W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) ) WITH(fillfactor=70);
-- Create a table and specify that its data is not written to WALs.
gaussdb=# CREATE UNLOGGED TABLE tpcds.warehouse_t7 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create a temporary table.
gaussdb=# CREATE TEMPORARY TABLE warehouse_t24 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create a local temporary table and specify that its rows are deleted when the transaction is committed.
gaussdb=# CREATE TEMPORARY TABLE warehouse_t25 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) ) ON COMMIT DELETE ROWS;
-- Create a global temporary table and specify that the temporary table data is deleted when the session ends. The current Ustore storage engine does not support global temporary tables.
gaussdb=# CREATE GLOBAL TEMPORARY TABLE gtt1 ( ID INTEGER NOT NULL, NAME CHAR(16) NOT NULL, ADDRESS VARCHAR(50), POSTCODE CHAR(6) ) ON COMMIT PRESERVE ROWS;
-- Create a table and specify that no error is reported for duplicate tables (if any).
gaussdb=# CREATE TABLE IF NOT EXISTS tpcds.warehouse_t8 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create an ordinary tablespace.
gaussdb=# CREATE TABLESPACE DS_TABLESPACE1 RELATIVE LOCATION 'tablespace/tablespace_1';
-- Specify a tablespace when creating a table.
gaussdb=# CREATE TABLE tpcds.warehouse_t9 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) ) TABLESPACE DS_TABLESPACE1;
-- Separately specify the index tablespace for W_WAREHOUSE_NAME when creating the table.
gaussdb=# CREATE TABLE tpcds.warehouse_t10 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) UNIQUE USING INDEX TABLESPACE DS_TABLESPACE1, W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- Create a table with a primary key constraint.
gaussdb=# CREATE TABLE tpcds.warehouse_t11 ( W_WAREHOUSE_SK INTEGER PRIMARY KEY, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- An alternative for the preceding syntax is as follows:
gaussdb=# CREATE TABLE tpcds.warehouse_t12 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), PRIMARY KEY(W_WAREHOUSE_SK) );
-- Or use the following statement to specify the name of the constraint:
gaussdb=# CREATE TABLE tpcds.warehouse_t13 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CSTR_KEY1 PRIMARY KEY(W_WAREHOUSE_SK) );
-- Create a table with a compound primary key constraint.
gaussdb=# CREATE TABLE tpcds.warehouse_t14 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CSTR_KEY2 PRIMARY KEY(W_WAREHOUSE_SK, W_WAREHOUSE_ID) );
-- Define a column check constraint.
gaussdb=# CREATE TABLE tpcds.warehouse_t19 ( W_WAREHOUSE_SK INTEGER PRIMARY KEY CHECK (W_WAREHOUSE_SK > 0), W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) CHECK (W_WAREHOUSE_NAME IS NOT NULL), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
gaussdb=# CREATE TABLE tpcds.warehouse_t20 ( W_WAREHOUSE_SK INTEGER PRIMARY KEY, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) CHECK (W_WAREHOUSE_NAME IS NOT NULL), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CONSTR_KEY2 CHECK(W_WAREHOUSE_SK > 0 AND W_WAREHOUSE_NAME IS NOT NULL) );
-- Create a table with a foreign key constraint.
gaussdb=# CREATE TABLE tpcds.city_t23 ( W_CITY VARCHAR(60) PRIMARY KEY, W_ADDRESS TEXT );
gaussdb=# CREATE TABLE tpcds.warehouse_t23 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60) REFERENCES tpcds.city_t23(W_CITY), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2) );
-- An alternative for the preceding syntax is as follows:
gaussdb=# CREATE TABLE tpcds.warehouse_t23 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), FOREIGN KEY(W_CITY) REFERENCES tpcds.city_t23(W_CITY) );
-- Or use the following statement to specify the name of the constraint:
gaussdb=# CREATE TABLE tpcds.warehouse_t23 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20), W_WAREHOUSE_SQ_FT INTEGER, W_STREET_NUMBER CHAR(10), W_STREET_NAME VARCHAR(60), W_STREET_TYPE CHAR(15), W_SUITE_NUMBER CHAR(10), W_CITY VARCHAR(60), W_COUNTY VARCHAR(30), W_STATE CHAR(2), W_ZIP CHAR(10), W_COUNTRY VARCHAR(20), W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_FORE_KEY1 FOREIGN KEY(W_CITY) REFERENCES tpcds.city_t23(W_CITY) );
-- Add a varchar column to the tpcds.warehouse_t19 table.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 ADD W_GOODS_CATEGORY varchar(30);
-- Add a check constraint to the tpcds.warehouse_t19 table.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 ADD CONSTRAINT W_CONSTR_KEY4 CHECK (W_STATE IS NOT NULL);
-- Use one statement to alter the types of two existing columns.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY TYPE varchar(80), ALTER COLUMN W_STREET_NAME TYPE varchar(100);
-- Alternatively, the MODIFY syntax can be used to change column types.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 MODIFY (W_GOODS_CATEGORY varchar(30), W_STREET_NAME varchar(60));
-- Add a not-null constraint to an existing column.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY SET NOT NULL;
-- Remove the not-null constraint from an existing column.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY DROP NOT NULL;
-- Move a table to another tablespace.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 SET TABLESPACE PG_DEFAULT;
-- Create the joe schema.
gaussdb=# CREATE SCHEMA joe;
-- Move a table to another schema.
gaussdb=# ALTER TABLE tpcds.warehouse_t19 SET SCHEMA joe;
-- Rename an existing table.
gaussdb=# ALTER TABLE joe.warehouse_t19 RENAME TO warehouse_t23;
-- Delete a column from the warehouse_t23 table.
gaussdb=# ALTER TABLE joe.warehouse_t23 DROP COLUMN W_STREET_NAME;
-- Drop the warehouse tables, the tablespace, and the joe schema.
gaussdb=# DROP TABLE tpcds.warehouse_t1;
gaussdb=# DROP TABLE tpcds.warehouse_t2;
gaussdb=# DROP TABLE tpcds.warehouse_t3;
gaussdb=# DROP TABLE tpcds.warehouse_t4;
gaussdb=# DROP TABLE tpcds.warehouse_t5;
gaussdb=# DROP TABLE tpcds.warehouse_t6;
gaussdb=# DROP TABLE tpcds.warehouse_t7;
gaussdb=# DROP TABLE tpcds.warehouse_t8;
gaussdb=# DROP TABLE tpcds.warehouse_t9;
gaussdb=# DROP TABLE tpcds.warehouse_t10;
gaussdb=# DROP TABLE tpcds.warehouse_t11;
gaussdb=# DROP TABLE tpcds.warehouse_t12;
gaussdb=# DROP TABLE tpcds.warehouse_t13;
gaussdb=# DROP TABLE tpcds.warehouse_t14;
gaussdb=# DROP TABLE tpcds.warehouse_t18;
gaussdb=# DROP TABLE tpcds.warehouse_t20;
gaussdb=# DROP TABLE tpcds.warehouse_t21;
gaussdb=# DROP TABLE tpcds.warehouse_t22;
gaussdb=# DROP TABLE joe.warehouse_t23;
gaussdb=# DROP TABLE tpcds.warehouse_t24;
gaussdb=# DROP TABLE tpcds.warehouse_t25;
gaussdb=# DROP TABLESPACE DS_TABLESPACE1;
gaussdb=# DROP SCHEMA IF EXISTS joe CASCADE;
-- Create table t1, set the default character set of t1 to utf8mb4, the default collation to utf8mb4_unicode_ci, the character set of the c1 column to utf8mb4, and its collation to utf8mb4_unicode_ci.
gaussdb=# CREATE TABLE T1(C1 VARCHAR(20) CHARSET utf8mb4 COLLATE utf8mb4_unicode_ci) CHARSET = utf8mb4 COLLATE = utf8mb4_unicode_ci;
Suggestions
- UNLOGGED
- The unlogged table and its indexes do not use the WAL log mechanism during data writing. Their write speed is much higher than that of ordinary tables. Therefore, they can be used for storing intermediate result sets of complex queries to improve query performance.
- The unlogged table has no primary/standby mechanism. In case of system faults or abnormal breakpoints, data loss may occur. Therefore, the unlogged table cannot be used to store basic data.
- TEMPORARY | TEMP
- A temporary table is automatically dropped at the end of a session.
- LIKE
- The new table automatically inherits all column names, data types, and not-null constraints from the source table. After creation, the new table is independent of the original table.
- LIKE INCLUDING DEFAULTS
- The default expressions are copied from the original table to the new table only if INCLUDING DEFAULTS is specified. The default behavior is to exclude default expressions, resulting in the copied columns in the new table having default values NULL.
- LIKE INCLUDING CONSTRAINTS
- The CHECK constraints are copied from the original table to the new table only when INCLUDING CONSTRAINTS is specified. Other types of constraints are never copied to the new table. Not-null constraints are always copied to the new table. These rules also apply to column constraints and table constraints.
- LIKE INCLUDING INDEXES
- Any indexes on the original table will not be created on the new table, unless the INCLUDING INDEXES clause is specified.
- LIKE INCLUDING STORAGE
- STORAGE settings for the copied column definitions are copied only if INCLUDING STORAGE is specified. The default behavior is to exclude STORAGE settings.
- LIKE INCLUDING COMMENTS
- If INCLUDING COMMENTS is specified, comments for the copied columns, constraints, and indexes are copied. The default behavior is to exclude comments.
- LIKE INCLUDING PARTITION
- If INCLUDING PARTITION is specified, the partition definitions of the source table are copied to the new table, and the new table no longer uses the PARTITION BY clause. The default behavior is to exclude partition definition of the original table.
List and hash partitioned tables do not support LIKE INCLUDING PARTITION.
- LIKE INCLUDING RELOPTIONS
- If INCLUDING RELOPTIONS is specified, the new table will copy the storage parameters (that is, the WITH clause) of the source table. The default behavior is to exclude the storage parameters of the original table.
- LIKE INCLUDING ALL
- INCLUDING ALL contains the meaning of INCLUDING DEFAULTS, INCLUDING CONSTRAINTS, INCLUDING INDEXES, INCLUDING STORAGE, INCLUDING COMMENTS, INCLUDING PARTITION, and INCLUDING RELOPTIONS.
- ORIENTATION ROW
- Creates a row-store table. Row store applies to OLTP services with many interactive transactions, where each interaction involves many columns of the table; using row store improves efficiency in such scenarios.