CREATE TABLE
Function
CREATE TABLE creates an initially empty table in the current database. The table will be owned by the creator.
Precautions
- For details about the data types supported by column-store tables, see Data Types Supported by Column-store Tables.
- It is recommended that the number of column-store tables not exceed 1000.
- The primary key constraint and unique constraint in the table must contain distribution keys.
- Distribution columns do not support the UPDATE operation.
- If an error occurs during table creation, the system may fail to delete the empty disk files created before the last automatic clearance, even after the error is rectified. This problem seldom occurs and does not affect database operation.
- Column-store tables support only PARTIAL CLUSTER KEY table-level constraints, but do not support primary and foreign key table-level constraints.
- Only the NULL, NOT NULL, and DEFAULT constant values can be used as column-store table constraints.
- Whether column-store tables support a delta table is specified by the enable_delta_store parameter. The threshold for storing data into a delta table is specified by the deltarow_threshold parameter.
- When JDBC is used, the DEFAULT value can be set through PreparedStatement.
- Row-store tables do not support table-level FOREIGN KEY constraints.
- According to the concurrency control policy, if DROP TABLE IF EXISTS and CREATE TABLE IF NOT EXISTS are executed on the same table concurrently, one of the two statements is rolled back. (This is currently a lab feature. Contact Huawei technical support before using it.)
- A user granted the CREATE ANY TABLE permission can create tables in the public and user schemas. To create a table that contains serial columns, the user must also be granted the CREATE ANY SEQUENCE permission to create sequences.
If an unlimited number of tables are created in GaussDB, CNs may be affected as follows:
- Resource exhaustion: Each table occupies a certain amount of disk space. Creating tables without limit consumes a large amount of memory and disk space and may exhaust CN resources. As a result, the system breaks down or becomes unstable.
- Performance deterioration: Creating tables without limit causes a large number of I/O operations and CPU computations, and the metadata of the database grows large, which may degrade CN performance for operations such as insert, query, update, and delete. As a result, the system responds slowly or cannot meet service requirements.
- Security issues: An excessive number of tables makes database management and maintenance difficult and may cause security issues such as data leakage or data loss. Database stability decreases, causing immeasurable loss to enterprises.
Therefore, plan the number and size of GaussDB databases properly to avoid unlimited table creation and ensure system stability, reliability, and security.
Syntax
- Create a table.
CREATE [ [ GLOBAL | LOCAL ] [ TEMPORARY | TEMP ] | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name
    ({ column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ]
        | table_constraint
        | LIKE source_table [ like_option [...] ] }
        [, ... ])
    [ WITH ( {storage_parameter = value} [, ... ] ) ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS } ]
    [ COMPRESS | NOCOMPRESS ]
    [ TABLESPACE tablespace_name ]
    [ DISTRIBUTE BY { REPLICATION
                    | HASH ( column_name [, ...] )
                    | RANGE ( column_name [, ...] ) { SLICE REFERENCES tablename | ( slice_less_than_item [, ...] ) | ( slice_start_end_item [, ...] ) }
                    | LIST ( column_name [, ...] ) { SLICE REFERENCES tablename | ( slice_values_item [, ...] ) } } ]
    [ TO { GROUP groupname | NODE ( nodename [, ... ] ) } ];
- column_constraint is as follows:
[ CONSTRAINT constraint_name ]
{ NOT NULL |
  NULL |
  CHECK ( expression ) |
  DEFAULT default_expr |
  UNIQUE [ index_parameters ] |
  PRIMARY KEY [ index_parameters ] |
  ENCRYPTED WITH ( COLUMN_ENCRYPTION_KEY = column_encryption_key, ENCRYPTION_TYPE = encryption_type_value ) |
  REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
      [ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
- compress_mode of a column is as follows:
{ DELTA | PREFIX | DICTIONARY | NUMSTR | NOCOMPRESS }
- table_constraint is as follows:
[ CONSTRAINT constraint_name ]
{ CHECK ( expression ) |
  UNIQUE ( column_name [, ... ] ) [ index_parameters ] |
  PRIMARY KEY ( column_name [, ... ] ) [ index_parameters ] |
  PARTIAL CLUSTER KEY ( column_name [, ... ] ) }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
- like_option is as follows:
{ INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE | COMMENTS | PARTITION | RELOPTIONS | DISTRIBUTION | ALL }
- Range distribution rules
slice_less_than_item is as follows:
SLICE slice_name VALUES LESS THAN ({ literal | MAXVALUE } [, ...]) [ DATANODE dn_name ]
slice_start_end_item is as follows:
SLICE slice_name_prefix {
{ START ( literal ) END ( literal ) EVERY ( literal ) } |
{ START ( literal ) END ( { literal | MAXVALUE } ) } |
{ START ( literal ) } |
{ END ( { literal | MAXVALUE } ) }
}
- The LIST distribution rule slice_values_item is as follows:
SLICE slice_name VALUES (list_values_item) [DATANODE dn_name]
list_values_item is as follows:
{ DEFAULT | { partition_values_list [, ...] } }
partition_values_list is as follows:
{ (literal [, ...]) }
- index_parameters is as follows:
[ WITH ( {storage_parameter = value} [, ... ] ) ]
[ USING INDEX TABLESPACE tablespace_name ]
Parameter Description
- UNLOGGED
If this keyword is specified, the created table is an unlogged table. Data written to unlogged tables is not written to the WALs, which makes them considerably faster than ordinary tables. However, an unlogged table is automatically truncated after a conflict, operating system restart, database restart, primary/standby switchover, power failure, or abnormal shutdown, so it carries a risk of data loss. The contents of an unlogged table are not replicated to standby servers, and indexes created on an unlogged table are not logged either.
Usage scenario: Unlogged tables do not ensure data security. Back up data before using unlogged tables; for example, back up data before a system upgrade.
Troubleshooting: If data is missing in the indexes of unlogged tables due to an unexpected operation such as an unclean shutdown, re-create the affected indexes.
- GLOBAL | LOCAL
When creating a temporary table, you can specify the GLOBAL or LOCAL keyword before TEMP or TEMPORARY. Currently, the two keywords are provided only for compatibility with the SQL standard. GaussDB creates a local temporary table regardless of whether GLOBAL or LOCAL is specified.
- TEMPORARY | TEMP
If TEMP or TEMPORARY is specified, the created table is a temporary table. A temporary table is automatically dropped at the end of the current session, so you can create and use temporary tables in the current session as long as the CN connected in the session is normal. Temporary tables are created only in the current session. If a DDL statement involves operations on temporary tables, a DDL error is generated; therefore, you are not advised to operate on temporary tables in DDL statements. TEMP is equivalent to TEMPORARY.
- Temporary tables are visible to the current session through a schema whose name starts with pg_temp_. Do not delete schemas whose names start with pg_temp_ or pg_toast_temp_.
- If TEMPORARY or TEMP is not specified when you create a table but its schema is set to that starting with pg_temp_ in the current session, the table will be created as a temporary table.
- A temporary table is visible only to the current session. Therefore, it cannot be used together with \parallel on.
- Temporary tables cannot tolerate DN faults or primary/standby switchovers.
- IF NOT EXISTS
Sends a notice, but does not throw an error, if a table with the same name exists.
- table_name
Specifies the name of the table to be created.
Some processing logic of materialized views determines whether a table is the log table of a materialized view or a table associated with a materialized view based on the table name prefix. Therefore, do not create a table whose name prefix is mlog_ or matviewmap_. Otherwise, some functions of the table are affected.
- column_name
Specifies the name of a column to be created in the new table.
- data_type
Specifies the data type of the column.
- compress_mode
Specifies whether to compress a table column. The option specifies the algorithm preferentially used by table columns. Row-store tables do not support compression.
Value range: DELTA, PREFIX, DICTIONARY, NUMSTR, and NOCOMPRESS
- COLLATE collation
Assigns a collation to the column (which must be of a collatable data type). If no collation is specified, the default collation is used. You can query the pg_collation system catalog (for example, by running SELECT * FROM pg_collation;) for available collations; the default collation is the row starting with default in the query result.
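As an illustration only (the table name below is assumed and not part of the original examples), a column can be given an explicit collation as follows; use a collation listed in pg_collation:
openGauss=# CREATE TABLE tpcds.warehouse_collate_demo
(
    W_WAREHOUSE_SK    INTEGER NOT NULL,
    W_WAREHOUSE_NAME  VARCHAR(20) COLLATE "C"    -- "C" is one of the collations available in pg_collation
);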
- LIKE source_table [ like_option ... ]
Specifies a table from which the new table automatically inherits all column names, their data types, and their not-null constraints, as well as the default expression declared as serial.
The new table and the original table are decoupled after creation is complete. Changes to the original table are not applied to the new table, and scans of the original table do not include data of the new table.
Columns and constraints copied by LIKE are not merged with identically named columns and constraints; if the same name is specified explicitly or in another LIKE clause, an error is reported. (A minimal usage sketch follows the option descriptions below.)
- The default expressions are copied from the original table to the new table only if INCLUDING DEFAULTS is specified. The default behavior is to exclude default expressions, so the copied columns in the new table have null default values.
- The CHECK constraints are copied from the original table to the new table only when INCLUDING CONSTRAINTS is specified. Other types of constraints are never copied to the new table. Not-null constraints are always copied to the new table. These rules also apply to column constraints and table constraints.
- Any indexes on the original table will not be created on the new table, unless the INCLUDING INDEXES clause is specified.
- STORAGE settings for the copied column definitions are copied only if INCLUDING STORAGE is specified. The default behavior is to exclude STORAGE settings.
- If INCLUDING COMMENTS is specified, comments for the copied columns, constraints, and indexes are copied. The default behavior is to exclude comments.
- If INCLUDING PARTITION is specified, the partition definitions of the source table are copied to the new table, and the new table no longer uses the PARTITION BY clause. The default behavior is to exclude the partition definitions of the original table.
- If INCLUDING RELOPTIONS is specified, the new table copies the storage parameters (that is, the WITH clause) of the source table. The default behavior is to exclude the storage parameters of the original table.
- If INCLUDING DISTRIBUTION is specified, the distribution information of the original table is copied to the new table, including the distribution type and key, and the new table no longer uses the DISTRIBUTE BY clause. The default behavior is to exclude the distribution information of the original table.
- INCLUDING ALL contains the meaning of INCLUDING DEFAULTS, INCLUDING CONSTRAINTS, INCLUDING INDEXES, INCLUDING STORAGE, INCLUDING COMMENTS, INCLUDING PARTITION, INCLUDING RELOPTIONS, and INCLUDING DISTRIBUTION.
- If the source table contains a sequence of the SERIAL, BIGSERIAL, or SMALLSERIAL type, or a column in the source table defaults to a sequence that was created for this table by using CREATE SEQUENCE ... OWNED BY, these sequences are not copied to the new table; instead, another sequence specific to the new table is created. This is different from earlier versions. To share a sequence between the source table and the new table, create a shared sequence (do not use OWNED BY) and set a column default in the source table to this sequence.
- You are not advised to set a column default in the source table to a sequence specific to another table, especially when the table is distributed in specific node groups, because doing so may cause CREATE TABLE ... LIKE to fail. In addition, the sequence may become invalid in the source table, because the sequence is also deleted from the source table when it is deleted from the table that it is specific to. To share a sequence among multiple tables, you are advised to create a shared sequence for them.
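A minimal sketch of the LIKE clause (the new table name is illustrative, and it assumes the tpcds.warehouse_t1 table from the Examples section already exists):
openGauss=# CREATE TABLE tpcds.warehouse_like_demo
    (LIKE tpcds.warehouse_t1 INCLUDING ALL);   -- copies defaults, constraints, indexes, storage, comments, partition, reloptions, and distribution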
- WITH ( { storage_parameter = value } [, ... ] )
Specifies an optional storage parameter for a table or an index. The WITH clause for a table can contain OIDS=TRUE or OIDS to specify that each row in the new table is assigned an OID. If OIDS=FALSE is specified, no OID is assigned.
When using NUMERIC of arbitrary precision to define a column, specify the precision p and scale s. If the precision and scale are not specified, the input value is displayed as is.
The description of parameters is as follows:
- FILLFACTOR
The fill factor of a table is a percentage from 10 to 100. 100 (complete filling) is the default value. When a smaller fill factor is specified, INSERT operations pack table pages only to the indicated percentage. The remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page, which is more efficient than placing it on a different page. For a table whose entries are never updated, setting the fill factor to 100 (complete filling) is the best choice, but in heavily updated tables a smaller fill factor would be appropriate. The parameter has no meaning for column-store tables.
Value range: 10 to 100
- ORIENTATION
Storage mode (row store or column store) of table data. This parameter cannot be modified once it is set.
Value range:
- ROW indicates that table data is stored in rows.
ROW applies to the OLTP service, which has many interactive transactions. An interaction involves many columns in the table. Using ROW can improve the efficiency.
- COLUMN indicates that the data is stored in columns.
COLUMN applies to the data warehouse service, which has a large amount of aggregation computing, and involves a few column operations.
Default value:
If an ordinary tablespace is specified, the default is ROW.
- COMPRESSION
Specifies the compression level of table data. It determines the compression ratio and time. Generally, the higher the level of compression, the higher the ratio, the longer the time; and the lower the level of compression, the lower the ratio, the shorter the time. The actual compression ratio depends on the distribution mode of table data loaded. Row-store tables do not support compression.
Value range:
The valid values for column-store tables are YES, NO, LOW, MIDDLE, and HIGH, and the default value is LOW.
- COMPRESSLEVEL
Specifies the table data compression ratio and duration at the same compression level. This divides a compression level into sublevels, providing more choices for compression ratio and duration. As the value becomes greater, the compression ratio becomes higher and duration longer at the same compression level.
Value range: 0 to 3. The default value is 0.
- MAX_BATCHROW
Specifies the maximum number of rows in a storage unit during data loading. The parameter is only valid for column-store tables.
Value range: 10000 to 60000
- PARTIAL_CLUSTER_ROWS
Specifies the number of records to be partially clustered for storage during data loading. The parameter is only valid for column-store tables.
Value range: 600000 to 2147483647
- DELTAROW_THRESHOLD
Specifies the upper limit of to-be-imported rows for triggering the data import to a delta table when data of a column-store table is to be imported. This parameter takes effect only if enable_delta_store is set to on. The parameter is only valid for column-store tables.
Value range: 0 to 9999. The default value is 100.
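For illustration only (the table name is assumed and not from the original examples), the column-store storage parameters described above can be combined as follows; with enable_delta_store set to on, loads smaller than DELTAROW_THRESHOLD rows go to the delta table:
openGauss=# CREATE TABLE tpcds.warehouse_delta_demo
(
    W_WAREHOUSE_SK    INTEGER NOT NULL,
    W_WAREHOUSE_NAME  VARCHAR(20)
)
WITH (ORIENTATION = COLUMN, MAX_BATCHROW = 30000, DELTAROW_THRESHOLD = 500);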
- segment
The table data is stored in segmented paging mode. This parameter supports only row-store tables. Column-store tables, temporary tables, and unlogged tables are not supported. The Ustore storage engine is not supported.
Value range: on and off
Default value: off
- hashbucket
Creates a hash table that uses buckets. This parameter supports only row-store tables, including row-store range tables.
Value range: on and off
Default value: off
- In the current version, DDL operations on hash bucket tables are affected. Therefore, you are not advised to frequently perform DDL operations on hash bucket tables.
- Hash bucket tables are bound to segmented paging storage, that is, when hashbucket is set to on, segment is set to on.
- bucketcnt
Specifies the number of buckets of a bucket table when the table is created. The value of this parameter must correspond to a child node group.
The value ranges from 32 to 16384 and must be an integer power of 2.
Default value: 16384.
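The following is a minimal sketch of a hash bucket table (the table name is illustrative; bucketcnt is omitted here because an explicit value must match the node group configuration, so the default applies). Because hashbucket=on implies segment=on, the data is stored in segmented paging mode:
openGauss=# CREATE TABLE tpcds.warehouse_bucket_demo
(
    W_WAREHOUSE_SK  INTEGER  NOT NULL,
    W_WAREHOUSE_ID  CHAR(16) NOT NULL
)
WITH (hashbucket = on)
DISTRIBUTE BY HASH(W_WAREHOUSE_SK);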
- enable_tde
Creates a transparent encryption table. Before enabling this function, ensure that enable_tde in "GUC Parameter" has been enabled, the KMS service has been enabled, and the cluster master key ID in tde_cmk_id in "GUC Parameter" has been correctly configured. This parameter supports only row-store tables. Column-store tables and temporary tables are not supported. The Ustore storage engine is not supported.
Value range: on and off. If this parameter is set to on, transparent data encryption is enabled. If this parameter is set to off, transparent data encryption is disabled for now; if encryption is to be enabled later, a data encryption key is still requested from KMS when the table is created.
Default value: off
- encrypt_algo
Specifies the transparent data encryption algorithm. This parameter can be used only if enable_tde is set for the table. The encryption algorithm can be specified only when a table is created, and different tables can use different encryption algorithms. After the table is created, the encryption algorithm cannot be changed.
Value range: a string. The value can be AES_128_CTR or SM4_CTR.
If enable_tde is not set, the default value is null. If enable_tde is set to on or off and encrypt_algo is not set, the value is AES_128_CTR.
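A minimal sketch of a transparently encrypted table, assuming the enable_tde GUC parameter, the KMS service, and tde_cmk_id have already been configured as described above (the table name is illustrative):
openGauss=# CREATE TABLE tpcds.warehouse_tde_demo
(
    W_WAREHOUSE_SK  INTEGER  NOT NULL,
    W_WAREHOUSE_ID  CHAR(16) NOT NULL
)
WITH (enable_tde = on, encrypt_algo = 'AES_128_CTR');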
- parallel_workers
Number of bgworker threads started when an index is created. For example, value 2 indicates that two bgworker threads are started to create indexes concurrently.
Value range: [0,32], int type. The value 0 indicates that this function is disabled.
Default value: If this parameter is not set, the concurrent index creation function is disabled.
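For illustration (the table name is assumed), the following table starts two bgworker threads whenever an index is created on it:
openGauss=# CREATE TABLE tpcds.warehouse_parallel_demo
(
    W_WAREHOUSE_SK  INTEGER NOT NULL
)
WITH (parallel_workers = 2);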
- dek_cipher
Ciphertext of the key used for transparent data encryption. When enable_tde is enabled, the system automatically applies for ciphertext creation. You cannot specify the ciphertext. The key rotation function can be used to update the key.
Value range: a string.
If encryption is disabled, the value is null by default.
- cmk_id
ID of the cluster master key used for transparent data encryption. When enable_tde is enabled, the value is obtained from tde_cmk_id in "GUC Parameter" and cannot be specified or modified by users.
Value range: a string.
If encryption is disabled, the value is null by default.
- hasuids
If this parameter is set to on, a unique table-level ID is allocated to a tuple when the tuple is updated.
Value range: on and off
Default value: off
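For illustration (the table name is assumed), a table that allocates a table-level unique ID to a tuple each time the tuple is updated:
openGauss=# CREATE TABLE tpcds.warehouse_uids_demo
(
    W_WAREHOUSE_SK  INTEGER NOT NULL
)
WITH (hasuids = on);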
- WITHOUT OIDS
It is equivalent to WITH (OIDS=FALSE).
- ON COMMIT { PRESERVE ROWS | DELETE ROWS }
ON COMMIT determines the action taken on a temporary table when a transaction is committed. Currently, the PRESERVE ROWS and DELETE ROWS options are supported.
- PRESERVE ROWS (default): No special action is taken at the ends of transactions. The temporary table and its table data are unchanged.
- DELETE ROWS: All rows in the temporary table will be deleted at the end of each transaction block.
- COMPRESS | NOCOMPRESS
If you specify COMPRESS in the CREATE TABLE statement, the compression feature is triggered in case of a bulk INSERT operation. If this feature is enabled, a scan is performed for all tuple data within the page to generate a dictionary and then the tuple data is compressed and stored. If NOCOMPRESS is specified, the table is not compressed. Row-store tables do not support compression.
Default value: NOCOMPRESS, that is, tuple data is not compressed before storage.
- TABLESPACE tablespace_name
Specifies the tablespace where the new table is created. If not specified, the default tablespace is used.
- DISTRIBUTE BY
Specifies how the table is distributed or replicated between DNs.
Value range:
- REPLICATION: Each row in the table exists on all DNs, that is, each DN has complete table data.
- HASH (column_name): Each row of the table will be placed into specified DNs based on the hash value of the specified column.
- RANGE(column_name): maps a specified column based on the range and distributes data to the corresponding DNs.
- LIST(column_name): maps a specified column based on a specific value and distributes data to the corresponding DNs.
- When DISTRIBUTE BY { HASH | RANGE | LIST } (column_name) is specified, the primary key and its unique index must contain the column_name column.
- When DISTRIBUTE BY { HASH | RANGE | LIST } (column_name) is specified for a referenced table, the foreign key of the referencing table must contain the column_name column.
- For a RANGE distribution policy using the VALUE LESS THAN clause, a maximum of four distribution key columns are supported. The distribution rules are as follows:
1. The comparison starts from the first column of the values to be inserted.
2. If the value of the first column is smaller than the boundary value of the corresponding column in the current shard, the row is inserted into that shard.
3. If the value of the first column equals the boundary value of the corresponding column, the value of the second column is compared with the boundary value of the next column in the shard. If it is smaller, the row is inserted; if it is still equal, the comparison continues column by column until a value is smaller than the corresponding boundary value, and the row is then inserted.
4. If the values of all columns are greater than the boundary values of the current shard, the comparison proceeds to the next shard.
Default value: HASH(column_name), where column_name is the primary key of the table (if any) or the first column whose data type supports distribution keys.
column_name supports the following data types:
- Integer types: TINYINT, SMALLINT, INT, BIGINT, and NUMERIC/DECIMAL
- Character types: CHAR, BPCHAR, VARCHAR, VARCHAR2, NVARCHAR2, and TEXT
- Date/time types: DATE, TIME, TIMETZ, TIMESTAMP, TIMESTAMPTZ, INTERVAL, and SMALLDATETIME
When you create a table, the choices of distribution keys and partition keys have major impact on SQL query performance. Therefore, select appropriate distribution keys and partition keys with strategies.
- Select appropriate distribution keys.
A hash table's distribution key should evenly distribute data on each DN to prevent skewing the data or distributing it unevenly across DNs. Determine appropriate distribution keys based on the following principles:
- Determine whether data is skewed.
Connect to the database and run the following statement to check the number of tuples on each DN. Replace tablename with the actual name of the table to be analyzed.
openGauss=# SELECT a.count,b.node_name FROM (SELECT count(*) AS count,xc_node_id FROM tablename GROUP BY xc_node_id) a, pgxc_node b WHERE a.xc_node_id=b.node_id ORDER BY a.count DESC;
If the tuple numbers vary greatly (by several times or tenfold) across DNs, a data skew occurs. Change the data distribution key based on the following principle:
- ALTER TABLE cannot be used to change distribution keys. Therefore, to change the distribution keys of a table, you need to recreate the table.
Principles for selecting distribution keys are as follows:
The value of the distribution key should be discrete so that data can be evenly distributed on each DN. You can select the primary key of the table as the distribution key. For example, for a person information table, choose the ID card number column as the distribution key.
With the above principle met, you can select join conditions as distribution keys so that join tasks can be pushed down to DNs, reducing the amount of data transferred between the DNs.
- Select appropriate partition keys.
In range partitioning, a table is partitioned based on ranges defined by one or more columns, with no overlap between the ranges of values assigned to different partitions. Each range has a dedicated partition for data storage.
Select partition keys so that query results are stored in the same partition or in as few partitions as possible (partition pruning). This yields consecutive I/O and improves query performance.
In actual services, time is often used to filter query objects. Therefore, you can use a time column as the partition key and adjust its granularity based on the total data volume and the data volume of a single query.
- RANGE/LIST distribution
If no DN is specified for the shards of a RANGE/LIST distribution table, the database uses the Round Robin algorithm to allocate DNs to the shards. In addition, if RANGE/LIST distribution is used, you are advised to define as many shards as possible when creating a table for future capacity expansion. If the defined number of shards is less than the number of DNs before scale-out, data redistribution cannot be performed on new DNs. Note that the sharding rules are designed by users. In some extreme cases, scale-out may not solve the problem of insufficient storage space.
- TO { GROUP groupname | NODE ( nodename [, ... ] ) }
TO GROUP specifies the node group to which the table to be created belongs. TO NODE is used for internal scale-out tools.
- CONSTRAINT constraint_name
Specifies the name of a column or table constraint. The optional constraint clauses specify constraints that new or updated rows must satisfy for an INSERT or UPDATE operation to succeed.
There are two ways to define constraints:
- A column constraint is defined as part of a column definition, and it is bound to a particular column.
- A table constraint is not bound to a particular column but can apply to more than one column.
- NOT NULL
The column is not allowed to contain null values.
- NULL
The column is allowed to contain null values. This is the default setting.
This clause is only provided for compatibility with non-standard SQL databases. It is not recommended.
- CHECK ( expression )
Specifies an expression producing a Boolean result where the INSERT or UPDATE operation of new or updated rows can succeed only when the expression result is TRUE or UNKNOWN; otherwise, an error is thrown and the database is not altered.
A check constraint specified as a column constraint should reference only the column's values, while an expression appearing in a table constraint can reference multiple columns.
<>NULL and !=NULL are invalid in an expression. Change them to IS NOT NULL.
- DEFAULT default_expr
Assigns a default data value to a column. The value can be any variable-free expression. (Subqueries and cross-references to other columns in the current table are not allowed.) The data type of the default expression must match the data type of the column.
The default expression will be used in any insert operation that does not specify a value for the column. If there is no default value for a column, then the default value is null.
- UNIQUE index_parameters
UNIQUE ( column_name [, ... ] ) index_parameters
Specifies that a group of one or more columns of a table can contain only unique values.
For the purpose of a unique constraint, null values are not considered equal.
If DISTRIBUTE BY REPLICATION is not specified, the columns in the unique constraint must contain the distribution keys.
- PRIMARY KEY index_parameters
PRIMARY KEY ( column_name [, ... ] ) index_parameters
Specifies that a column or columns of a table can contain only unique (non-duplicate) and non-null values.
Only one primary key can be specified for a table.
If DISTRIBUTE BY REPLICATION is not specified, the column set with a primary key constraint must contain distribution keys.
- REFERENCES
The distributed database of the current version does not support the REFERENCES clause.
- DEFERRABLE | NOT DEFERRABLE
Controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction using the SET CONSTRAINTS command. NOT DEFERRABLE is the default value. Currently, only UNIQUE and PRIMARY KEY constraints accept this clause. All the other constraints are not deferrable.
- PARTIAL CLUSTER KEY
Specifies a partial cluster key for storage. When importing data to a column-store table, you can perform local data sorting by specified columns (single or multiple).
- INITIALLY IMMEDIATE | INITIALLY DEFERRED
If a constraint is deferrable, this clause specifies the default time to check the constraint.
- If the constraint is INITIALLY IMMEDIATE (default value), it is checked after each statement.
- If the constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction.
The constraint check time can be altered using the SET CONSTRAINTS statement.
- USING INDEX TABLESPACE tablespace_name
Allows selection of the tablespace in which the index associated with a UNIQUE or PRIMARY KEY constraint will be created. If not specified, default_tablespace is consulted, or the default tablespace in the database if default_tablespace is empty.
- ENCRYPTION_TYPE = encryption_type_value
For the encryption type in the ENCRYPTED WITH constraint, the value of encryption_type_value is DETERMINISTIC or RANDOMIZED.
Examples
-- Create a simple table.
openGauss=# CREATE TABLE tpcds.warehouse_t1
(
    W_WAREHOUSE_SK     INTEGER      NOT NULL,
    W_WAREHOUSE_ID     CHAR(16)     NOT NULL,
    W_WAREHOUSE_NAME   VARCHAR(20),
    W_WAREHOUSE_SQ_FT  INTEGER,
    W_STREET_NUMBER    CHAR(10),
    W_STREET_NAME      VARCHAR(60),
    W_STREET_TYPE      CHAR(15),
    W_SUITE_NUMBER     CHAR(10),
    W_CITY             VARCHAR(60),
    W_COUNTY           VARCHAR(30),
    W_STATE            CHAR(2),
    W_ZIP              CHAR(10),
    W_COUNTRY          VARCHAR(20),
    W_GMT_OFFSET       DECIMAL(5,2)
);
openGauss=# CREATE TABLE tpcds.warehouse_t2
(
    W_WAREHOUSE_SK     INTEGER      NOT NULL,
    W_WAREHOUSE_ID     CHAR(16)     NOT NULL,
    W_WAREHOUSE_NAME   VARCHAR(20),
    W_WAREHOUSE_SQ_FT  INTEGER,
    W_STREET_NUMBER    CHAR(10),
    W_STREET_NAME      VARCHAR(60)  DICTIONARY,
    W_STREET_TYPE      CHAR(15),
    W_SUITE_NUMBER     CHAR(10),
    W_CITY             VARCHAR(60),
    W_COUNTY           VARCHAR(30),
    W_STATE            CHAR(2),
    W_ZIP              CHAR(10),
    W_COUNTRY          VARCHAR(20),
    W_GMT_OFFSET       DECIMAL(5,2)
);
-- Create a table and set the default value of the W_STATE column to GA. openGauss=# CREATE TABLE tpcds.warehouse_t3 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) DEFAULT 'GA', W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); -- Create a table and check whether the W_WAREHOUSE_NAME column is unique at the end of its creation. openGauss=# CREATE TABLE tpcds.warehouse_t4 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) UNIQUE DEFERRABLE, W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); |
-- Create a table with its fill factor set to 70%. openGauss=# CREATE TABLE tpcds.warehouse_t5 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), UNIQUE(W_WAREHOUSE_NAME) WITH(fillfactor=70) ); -- Alternatively, user the following syntax: openGauss=# CREATE TABLE tpcds.warehouse_t6 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) UNIQUE, W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ) WITH(fillfactor=70); -- Create a table and specify that its data is not written to WALs. openGauss=# CREATE UNLOGGED TABLE tpcds.warehouse_t7 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); -- Create a temporary table. openGauss=# CREATE TEMPORARY TABLE warehouse_t24 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); -- Create a temporary table in a transaction and specify that this table is deleted when the transaction is committed. openGauss=# CREATE TEMPORARY TABLE warehouse_t25 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ) ON COMMIT DELETE ROWS; -- Create a table and specify that no error is reported for duplicate tables (if any). openGauss=# CREATE TABLE IF NOT EXISTS tpcds.warehouse_t8 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); -- Create a general tablespace. openGauss=# CREATE TABLESPACE DS_TABLESPACE1 RELATIVE LOCATION 'tablespace/tablespace_1'; -- Specify a tablespace when creating a table. 
openGauss=# CREATE TABLE tpcds.warehouse_t9 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ) TABLESPACE DS_TABLESPACE1; -- Separately specify the index tablespace for W_WAREHOUSE_NAME when creating the table. openGauss=# CREATE TABLE tpcds.warehouse_t10 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) UNIQUE USING INDEX TABLESPACE DS_TABLESPACE1, W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); |
-- Create a table with a primary key constraint. openGauss=# CREATE TABLE tpcds.warehouse_t11 ( W_WAREHOUSE_SK INTEGER PRIMARY KEY, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); -- An alternative for the preceding syntax is as follows: openGauss=# CREATE TABLE tpcds.warehouse_t12 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), PRIMARY KEY(W_WAREHOUSE_SK) ); -- Or use the following statement to specify the name of the constraint: openGauss=# CREATE TABLE tpcds.warehouse_t13 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CSTR_KEY1 PRIMARY KEY(W_WAREHOUSE_SK) ); -- Create a table with a compound primary key constraint. openGauss=# CREATE TABLE tpcds.warehouse_t14 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CSTR_KEY2 PRIMARY KEY(W_WAREHOUSE_SK, W_WAREHOUSE_ID) ); -- Create a column-store table. openGauss=# CREATE TABLE tpcds.warehouse_t15 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ) WITH (ORIENTATION = COLUMN); -- Create a column-store table using partial clustered storage. openGauss=# CREATE TABLE tpcds.warehouse_t16 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), PARTIAL CLUSTER KEY(W_WAREHOUSE_SK, W_WAREHOUSE_ID) ) WITH (ORIENTATION = COLUMN); -- Define a column-store table with compression enabled. 
openGauss=# CREATE TABLE tpcds.warehouse_t17 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ) WITH (ORIENTATION = COLUMN, COMPRESSION=HIGH); -- Define a table with compression enabled. openGauss=# CREATE TABLE tpcds.warehouse_t18 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ) COMPRESS; -- Define a column check constraint. openGauss=# CREATE TABLE tpcds.warehouse_t19 ( W_WAREHOUSE_SK INTEGER PRIMARY KEY CHECK (W_WAREHOUSE_SK > 0), W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) CHECK (W_WAREHOUSE_NAME IS NOT NULL), W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) ); openGauss=# CREATE TABLE tpcds.warehouse_t20 ( W_WAREHOUSE_SK INTEGER PRIMARY KEY, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) CHECK (W_WAREHOUSE_NAME IS NOT NULL), W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CONSTR_KEY2 CHECK(W_WAREHOUSE_SK > 0 AND W_WAREHOUSE_NAME IS NOT NULL) ); -- Define a table with each row stored in all DNs. openGauss=# CREATE TABLE tpcds.warehouse_t21 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) )DISTRIBUTE BY REPLICATION; Enable the primarynode option of the replication table. openGauss=# ALTER TABLE tpcds.warehouse_t21 SET (primarynode=on); Check whether the option is enabled. (The content displayed in Options varies according to the version.) 
openGauss=# \d+ tpcds.warehouse_t21 Table "tpcds.warehouse_t21" Column | Type | Modifiers | Storage | Stats target | Description -------------------+-----------------------+-----------+----------+--------------+------------- w_warehouse_sk | integer | not null | plain | | w_warehouse_id | character(16) | not null | extended | | w_warehouse_name | character varying(20) | | extended | | w_warehouse_sq_ft | integer | | plain | | w_street_number | character(10) | | extended | | w_street_name | character varying(60) | | extended | | w_street_type | character(15) | | extended | | w_suite_number | character(10) | | extended | | w_city | character varying(60) | | extended | | w_county | character varying(30) | | extended | | w_state | character(2) | | extended | | w_zip | character(10) | | extended | | w_country | character varying(20) | | extended | | w_gmt_offset | numeric(5,2) | | main | | Has OIDs: no Distribute By: REPLICATION Location Nodes: ALL DATANODES Options: compression=no, primarynode=on -- Define a hash table. openGauss=# CREATE TABLE tpcds.warehouse_t22 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2), CONSTRAINT W_CONSTR_KEY3 UNIQUE(W_WAREHOUSE_SK) )DISTRIBUTE BY HASH(W_WAREHOUSE_SK); -- View DN information. gaussdb=# select node_name from pgxc_node; node_name -------------- coordinator1 datanode1 datanode2 datanode3 datanode4 datanode5 datanode6 (7 rows) -- Define a table using RANGE distribution. openGauss=# CREATE TABLE tpcds.warehouse_t26 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) )DISTRIBUTE BY RANGE(W_WAREHOUSE_ID) ( SLICE s1 VALUES LESS THAN (10) DATANODE datanode1, SLICE s2 VALUES LESS THAN (20) DATANODE datanode2, SLICE s3 VALUES LESS THAN (30) DATANODE datanode3, SLICE s4 VALUES LESS THAN (MAXVALUE) DATANODE datanode4 ); -- Example of a multi-column range partitioning policy openGauss=# create table t_ran1(c1 int, c2 int, c3 int, c4 int, c5 int) distribute by range(c1,c2) ( SLICE s1 VALUES LESS THAN (10,10) DATANODE datanode1, SLICE s2 VALUES LESS THAN (10,20) DATANODE datanode2, SLICE s3 VALUES LESS THAN (20,10) DATANODE datanode3 ); openGauss=# insert into t_ran1 values(9,5,'a'); openGauss=# insert into t_ran1 values(9,20,'a'); openGauss=# insert into t_ran1 values(9,21,'a'); openGauss=# insert into t_ran1 values(10,5,'a'); openGauss=# insert into t_ran1 values(10,15,'a'); openGauss=# insert into t_ran1 values(10,20,'a'); openGauss=# insert into t_ran1 values(10,21,'a'); openGauss=# insert into t_ran1 values(11,5,'a'); openGauss=# insert into t_ran1 values(11,20,'a'); openGauss=# insert into t_ran1 values(11,21,'a'); openGauss=# select node_name,node_type,node_id from pgxc_node; node_name | node_type | node_id --------------+-----------+------------- coordinator1 | C | 1938253334 datanode1 | D | 888802358 datanode2 | D | -905831925 datanode3 | D | -1894792127 (4 rows) openGauss=# select xc_node_id,* from t_ran1; xc_node_id | c1 | c2 | c3 | c4 | c5 
-------------+----+----+----+----+---- 888802358 | 9 | 5 | 0 | | 888802358 | 9 | 20 | 0 | | 888802358 | 9 | 21 | 0 | | 888802358 | 10 | 5 | 0 | | -905831925 | 10 | 15 | 0 | | -1894792127 | 10 | 20 | 0 | | -1894792127 | 10 | 21 | 0 | | -1894792127 | 11 | 5 | 0 | | -1894792127 | 11 | 20 | 0 | | -1894792127 | 11 | 21 | 0 | | (10 rows) -- Create a table using SLICE REFERENCES. openGauss=# CREATE TABLE tpcds.warehouse_t27 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) )DISTRIBUTE BY RANGE(W_WAREHOUSE_ID) SLICE REFERENCES warehouse_t26; -- Define a table using LIST distribution. openGauss=# CREATE TABLE tpcds.warehouse_t28 ( W_WAREHOUSE_SK INTEGER NOT NULL, W_WAREHOUSE_ID CHAR(16) NOT NULL, W_WAREHOUSE_NAME VARCHAR(20) , W_WAREHOUSE_SQ_FT INTEGER , W_STREET_NUMBER CHAR(10) , W_STREET_NAME VARCHAR(60) , W_STREET_TYPE CHAR(15) , W_SUITE_NUMBER CHAR(10) , W_CITY VARCHAR(60) , W_COUNTY VARCHAR(30) , W_STATE CHAR(2) , W_ZIP CHAR(10) , W_COUNTRY VARCHAR(20) , W_GMT_OFFSET DECIMAL(5,2) )DISTRIBUTE BY LIST(W_COUNTRY) ( SLICE s1 VALUES ('USA') DATANODE datanode1, SLICE s2 VALUES ('CANADA') DATANODE datanode2, SLICE s3 VALUES ('UK') DATANODE datanode3, SLICE s4 VALUES (DEFAULT) DATANODE datanode4 ); -- Add a varchar column to the tpcds.warehouse_t19 table. |
openGauss=# ALTER TABLE tpcds.warehouse_t19 ADD W_GOODS_CATEGORY varchar(30); -- Add a check constraint to the tpcds.warehouse_t19 table. openGauss=# ALTER TABLE tpcds.warehouse_t19 ADD CONSTRAINT W_CONSTR_KEY4 CHECK (W_STATE IS NOT NULL); -- Use one statement to alter the types of two existing columns. openGauss=# ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY TYPE varchar(80), ALTER COLUMN W_STREET_NAME TYPE varchar(100); -- This statement is equivalent to the preceding statement. openGauss=# ALTER TABLE tpcds.warehouse_t19 MODIFY (W_GOODS_CATEGORY varchar(30), W_STREET_NAME varchar(60)); -- Add a not-null constraint to an existing column. openGauss=# ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY SET NOT NULL; -- Remove not-null constraints from an existing column. openGauss=# ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY DROP NOT NULL; -- If no partial cluster is specified in a column-store table, add a partial cluster to the table. openGauss=# ALTER TABLE tpcds.warehouse_t17 ADD PARTIAL CLUSTER KEY(W_WAREHOUSE_SK); -- View the constraint name and delete the partial cluster column of a column-store table. openGauss=# \d+ tpcds.warehouse_t17 Table "tpcds.warehouse_t17" Column | Type | Modifiers | Storage | Stats target | Description -------------------+-----------------------+-----------+----------+--------------+------------- w_warehouse_sk | integer | not null | plain | | w_warehouse_id | character(16) | not null | extended | | w_warehouse_name | character varying(20) | | extended | | w_warehouse_sq_ft | integer | | plain | | w_street_number | character(10) | | extended | | w_street_name | character varying(60) | | extended | | w_street_type | character(15) | | extended | | w_suite_number | character(10) | | extended | | w_city | character varying(60) | | extended | | w_county | character varying(30) | | extended | | w_state | character(2) | | extended | | w_zip | character(10) | | extended | | w_country | character varying(20) | | extended | | w_gmt_offset | numeric(5,2) | | main | | Partial Cluster : "warehouse_t17_cluster" PARTIAL CLUSTER KEY (w_warehouse_sk) Has OIDs: no Distribute By: HASH(w_warehouse_sk) Location Nodes: ALL DATANODES Options: compression=no, version=0.12 openGauss=# ALTER TABLE tpcds.warehouse_t17 DROP CONSTRAINT warehouse_t17_cluster; -- Move a table to another tablespace. openGauss=# ALTER TABLE tpcds.warehouse_t19 SET TABLESPACE PG_DEFAULT; -- Create the joe schema. openGauss=# CREATE SCHEMA joe; -- Move a table to another schema. openGauss=# ALTER TABLE tpcds.warehouse_t19 SET SCHEMA joe; -- Rename an existing table. openGauss=# ALTER TABLE joe.warehouse_t19 RENAME TO warehouse_t23; -- Delete a column from the warehouse_t23 table. openGauss=# ALTER TABLE joe.warehouse_t23 DROP COLUMN W_STREET_NAME; -- Create an encryption table. openGauss=# CREATE TABLE creditcard_info (id_number int, name text encrypted with (column_encryption_key = ImgCEK, encryption_type = DETERMINISTIC), credit_card varchar(19) encrypted with (column_encryption_key = ImgCEK1, encryption_type = DETERMINISTIC)); NOTICE: The 'DISTRIBUTE BY' clause is not specified. Using 'id_number' as the distribution column by default. HINT: Please use 'DISTRIBUTE BY' clause to specify suitable data distribution column. CREATE TABLE -- Delete the tablespace, schema joe, and schema tables warehouse. 
openGauss=# DROP TABLE tpcds.warehouse_t1;
openGauss=# DROP TABLE tpcds.warehouse_t2;
openGauss=# DROP TABLE tpcds.warehouse_t3;
openGauss=# DROP TABLE tpcds.warehouse_t4;
openGauss=# DROP TABLE tpcds.warehouse_t5;
openGauss=# DROP TABLE tpcds.warehouse_t6;
openGauss=# DROP TABLE tpcds.warehouse_t7;
openGauss=# DROP TABLE tpcds.warehouse_t8;
openGauss=# DROP TABLE tpcds.warehouse_t9;
openGauss=# DROP TABLE tpcds.warehouse_t10;
openGauss=# DROP TABLE tpcds.warehouse_t11;
openGauss=# DROP TABLE tpcds.warehouse_t12;
openGauss=# DROP TABLE tpcds.warehouse_t13;
openGauss=# DROP TABLE tpcds.warehouse_t14;
openGauss=# DROP TABLE tpcds.warehouse_t15;
openGauss=# DROP TABLE tpcds.warehouse_t16;
openGauss=# DROP TABLE tpcds.warehouse_t17;
openGauss=# DROP TABLE tpcds.warehouse_t18;
openGauss=# DROP TABLE tpcds.warehouse_t20;
openGauss=# DROP TABLE tpcds.warehouse_t21;
openGauss=# DROP TABLE tpcds.warehouse_t22;
openGauss=# DROP TABLE joe.warehouse_t23;
openGauss=# DROP TABLE tpcds.warehouse_t24;
openGauss=# DROP TABLE tpcds.warehouse_t25;
openGauss=# DROP TABLE tpcds.warehouse_t26;
openGauss=# DROP TABLE tpcds.warehouse_t27;
openGauss=# DROP TABLE tpcds.warehouse_t28;
openGauss=# DROP TABLE creditcard_info;
openGauss=# DROP TABLESPACE DS_TABLESPACE1;
openGauss=# DROP SCHEMA IF EXISTS joe CASCADE;
Suggestions
- UNLOGGED
- The unlogged table and its indexes do not use the WAL log mechanism during data writing. Their write speed is much higher than that of ordinary tables. Therefore, they can be used for storing intermediate result sets of complex queries to improve query performance.
- The unlogged table has no primary/standby mechanism. In case of system faults or abnormal breakpoints, data loss may occur. Therefore, the unlogged table cannot be used to store basic data.
- TEMPORARY | TEMP
- A temporary table is automatically dropped at the end of a session.
- The temporary table is visible only to the current CN.
- LIKE
- The new table automatically inherits all column names, data types, and not-null constraints from this table. The new table is irrelevant to the original table after the creation.
- LIKE INCLUDING DEFAULTS
- The default expressions are copied from the original table to the new table only if INCLUDING DEFAULTS is specified. The default behavior is to exclude default expressions, so the copied columns in the new table have null default values.
- LIKE INCLUDING CONSTRAINTS
- The CHECK constraints are copied from the original table to the new table only when INCLUDING CONSTRAINTS is specified. Other types of constraints are never copied to the new table. Not-null constraints are always copied to the new table. These rules also apply to column constraints and table constraints.
- LIKE INCLUDING INDEXES
- Any indexes on the original table will not be created on the new table, unless the INCLUDING INDEXES clause is specified.
- LIKE INCLUDING STORAGE
- STORAGE settings for the copied column definitions are copied only if INCLUDING STORAGE is specified. The default behavior is to exclude STORAGE settings.
- LIKE INCLUDING COMMENTS
- If INCLUDING COMMENTS is specified, comments for the copied columns, constraints, and indexes are copied. The default behavior is to exclude comments.
- LIKE INCLUDING PARTITION
- If INCLUDING PARTITION is specified, the partition definitions of the source table are copied to the new table, and the new table no longer uses the PARTITION BY clause. The default behavior is to exclude partition definition of the original table.
- LIKE INCLUDING RELOPTIONS
- If INCLUDING RELOPTIONS is specified, the new table copies the storage parameters (that is, the WITH clause) of the source table. The default behavior is to exclude the storage parameters of the original table.
- LIKE INCLUDING DISTRIBUTION
- If INCLUDING DISTRIBUTION is specified, the distribution information of the original table is copied to the new table, including the distribution type and key, and the new table no longer uses the DISTRIBUTE BY clause. The default behavior is to exclude the distribution information of the original table.
- LIKE INCLUDING ALL
- INCLUDING ALL contains the meaning of INCLUDING DEFAULTS, INCLUDING CONSTRAINTS, INCLUDING INDEXES, INCLUDING STORAGE, INCLUDING COMMENTS, INCLUDING PARTITION, INCLUDING RELOPTIONS, and INCLUDING DISTRIBUTION.
- ORIENTATION ROW
- Creates a row-store table. Row-store applies to the OLTP service, which has many interactive transactions. An interaction involves many columns in the table. Using row-store can improve the efficiency.
- ORIENTATION COLUMN
- Creates a column-store table. Column store applies to the data warehouse service, which has a large amount of aggregation computing, and involves a few column operations.
- DISTRIBUTE BY
- It is recommended that a fact table or dimension table containing a large amount of data be created as a distribution table. Each row of the table will be placed into specified DNs based on the hash value of the specified column. The syntax is distribute by hash (column_name).
- It is recommended that a dimension table containing a small amount of data be created as a replication table. Each row in the table exists on all DNs. That is, each DN has complete table data. The syntax is distribute by replication.