Updated on 2025-05-29 GMT+08:00

ALTER TABLE

Description

Modifies tables, including changing table definitions, renaming tables, renaming specified columns in tables, renaming table constraints, setting table schemas, enabling or disabling row-level security policies, and adding or updating multiple columns.

Precautions

  • The owner of a table, users with the ALTER permission on the table, or users with the ALTER ANY TABLE permission can run the ALTER TABLE command. System administrators have the permission to run the command by default. To change the owner or schema of a table, you must be the table owner or system administrator and a member of the new owner role.
  • The tablespace of a partitioned table cannot be modified, but the tablespace of the partition can be modified.
  • The storage parameter ORIENTATION cannot be modified.
  • Currently, SET SCHEMA can move a table only to a user schema. It cannot move a table to a system internal schema.
  • The partition key columns of a partitioned table cannot be changed, and their character set cannot be converted.
  • A column whose DEFAULT value contains the nextval() expression cannot be added.
  • Row-level security cannot be enabled for foreign tables and temporary tables.
  • When you delete a PRIMARY KEY constraint by constraint name, the NOT NULL constraint is not deleted. If necessary, manually delete the NOT NULL constraint.
  • When JDBC is used, the DEFAULT value can be set through PreparedStatement.
  • If you add a column using ADD COLUMN, all existing rows in the table are initialized to the column's default value (NULL if no DEFAULT value is specified).

    If no DEFAULT value is specified for the new column, NULL is used, and no full table update is triggered.

    If the new column has a DEFAULT value, the column must meet all the following requirements. Otherwise, the entire table is updated, leading to additional overhead and affecting online services.

    1. The data type is BOOL, BYTEA, TINYINT, SMALLINT, BIGINT, INTEGER, NUMERIC, FLOAT, DOUBLE PRECISION, CHAR, VARCHAR, TEXT, TIMESTAMPTZ, TIMESTAMP, DATE, TIME, TIMETZ, INTERVAL, SERIAL, BIGSERIAL, SMALLSERIAL, FLOATVECTOR, or BOOLVECTOR.

    2. The length of the DEFAULT value of the added column cannot exceed 128 bytes.

    3. The DEFAULT value of the added column does not contain a volatile function. If you are not sure whether this condition is met, check whether the provolatile attribute of the function in the PG_PROC system catalog is v.

    4. The DEFAULT value is required and cannot be NULL.

    5. If the PG-compatible mode is enabled, the data type of the new column can be SERIAL, BIGSERIAL, or SMALLSERIAL. Only ordinary tables are supported; partitioned tables, temporary tables, and unlogged permanent tables are not supported. Both Astore and Ustore are supported.
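    For example, a minimal sketch of the fast path and of checking condition 3 (t1 and c_new are hypothetical names):
    -- t1 and c_new are hypothetical; a constant DEFAULT of a supported type triggers no full-table update.
    ALTER TABLE t1 ADD COLUMN c_new integer DEFAULT 0;
    -- Check whether a function used in a DEFAULT value is volatile (provolatile = 'v').
    SELECT proname, provolatile FROM pg_proc WHERE proname = 'random';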

  • If FIRST | AFTER column_name is used when adding or changing a column, or if the character set of a column is changed, the entire table is updated, which incurs additional overhead and affects online services.
  • If you add a row-level expression when adding or changing an ILM policy for a data object, note that the row-level expression supports only the functions listed in the whitelist. For details about the whitelist function list, see Row Expression Function Whitelist.

    Compatibility parameters may affect the output of some functions supported by the ON(EXPR) row-level expression of the ILM policy. For example, for the upper function in B-compatible mode, after b_format_version is set to '5.7' and b_format_dev_version is set to 's2', uppercase conversion becomes unavailable.

  • The number of table constraints cannot exceed 32,767.
  • When performing online DDL operations, reserve sufficient disk space: the remaining space must be at least the total size of the tables to be operated on plus their associated objects, such as indexes.
  • Online DDL operations cannot be performed in a transaction or a stored procedure.
  • If online DDL execution is canceled, the residual data clearance process starts first. If the execution is canceled again, the clearance process ends and residual data remains.
  • Resource contention exists between online DDL operations and user services. Therefore, online DDL operations are not suitable for heavy-traffic scenarios. In scenarios where resources are limited, fine-grained resource control is recommended. Parallel acceleration is not recommended.
  • Online DDL operations are not supported in M-compatible mode.
  • In the following cases, online DDL does not take effect. By default, DDL is executed in offline mode. If the ONLINE keyword exists, a notice is displayed, indicating that the keyword ONLINE is ignored.
    • The MODIFY [ COLUMN ] column_name data_type [ CHARACTER SET | CHARSET charset ] [{[ COLLATE collation ] | [ column_constraint ]} [ ... ] ] [FIRST | AFTER column_name] or MODIFY ( { column_name data_type | column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ] | column_name [ CONSTRAINT constraint_name ] NULL } [, ...] ) syntax is used to change the column type, precision, or length online, and the clause contains any keyword such as CHARSET, COLLATE, FIRST, or AFTER, or a column_constraint.
    • The following syntax is used to change the column type, precision, and length online: CHANGE [ COLUMN ] column_name new_column_name data_type [ CHARACTER SET | CHARSET charset ] [{[ COLLATE collation ] | [ column_constraint ]} [ ... ] ] [FIRST | AFTER column_name].
    • A column type is changed or converted to the SET data type online.
    • The ALTER TABLE statement contains operations that are not supported by the current online DDL feature. (The supported online DDL operations include changing column types and precisions, reducing lengths, moving table-level tablespaces, and adding NOT NULL, CHECK, PRIMARY KEY, or UNIQUE constraints.)
  • Online DDL operations do not take effect when they are performed at the database or index level, or on partitioned, secondary partitioned, segment-page, hash bucket, temporary, unlogged, or HTAP tables. (These operations include changing the column type, precision, and length online, moving table-level tablespaces online, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints online.) By default, the DDL operations are executed offline, and a notice is displayed, indicating that the current DDL statement cannot be executed online.
  • Long transactions may block online DDL operations (including changing the column type, precision, and length online, moving table-level tablespaces online, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints online). Therefore, you are advised not to perform online DDL operations when long transactions exist.
  • At the later stage of execution, online DDL operations will block DQL operations and report errors for concurrent DML operations. (The preceding online DDL operations include changing the column type, precision, and length online, moving table-level tablespaces online, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints online.) Ensure that retry upon error reporting is supported. For a service model with continuous retries, the thread pool may be full and cannot provide external services. Therefore, before using this feature, evaluate the tolerance of waiting for locks in the service model and check whether slow SQL statements exist to determine whether this feature can be used.
  • During online DDL execution, do not execute DDL statements concurrently with other DDL statements. Otherwise, deadlocks or errors may occur. In this case, reconnect to the client and try again.
  • If the database is upgraded from a version that does not support online DDL operations to a version that supports them, online DDL operations are not supported during the observation period. By default, DDL operations are executed offline, and a notice is displayed, indicating that online DDL operations are not supported during the upgrade. (The preceding online DDL operations include changing the column type, precision, and length online, moving table-level tablespaces online, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints online.)
  • A schema named online$$ddl$$[Hash value of (Table schema + Table name)] is generated during online DDL operations. (The online DDL operations include changing the column type, precision, and length online, moving table-level tablespaces online, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints online.)
    • By default, the owner of the schema is the initial user. You are advised not to operate the schema during online DDL operations; otherwise, the online DDL operations may be terminated and an unknown error may occur. By default, this schema is not exported when gs_dump is used for data export.
    • Do not create a schema with the prefix online$$. Otherwise, an error message is displayed. If a schema with the same name already exists, online DDL operations do not take effect. By default, offline DDL operations are executed, and a message is displayed indicating that the current DDL statement cannot be online due to a schema name conflict.
  • If online DDL operations fail, the storage space of the table may bloat significantly. (The preceding online DDL operations include changing the column type, precision, and length online, and moving table-level tablespaces online.) The table may also bloat significantly when NOT NULL, CHECK, PRIMARY KEY, or UNIQUE constraints are added online, regardless of whether the operation fails. The bloat degree is proportional to the DDL statement execution period and the number of concurrent write operations. Do not perform online DDL operations when long transactions exist.
  • If online DDL operations include adding NOT NULL/CHECK/PRIMARY KEY/UNIQUE constraints, the error reported when data violates the constraints may differ from that reported for offline DDL operations.
  • In an online DDL operation for adding a PRIMARY KEY or UNIQUE constraint, if the constraint contains an expression, the DDL operation is performed offline by default. If the ONLINE keyword exists, a notice is displayed, indicating that the DDL operation cannot be performed online.
  • Online DDL operations (including changing column types, precisions, and lengths, moving tablespaces at the table level, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints) inherit the usage restrictions from CREATE TABLE.

Syntax

  • Change the definition of a table.
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ]
        { table_name [*] | ONLY table_name | ONLY ( table_name ) }
        action [, ... ];
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ] table_name
        ADD ( { column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ] } [, ... ] );
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ] table_name
        MODIFY ( { column_name data_type | column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ] | column_name [ CONSTRAINT constraint_name ] NULL } [, ... ] );
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ] table_name
        RENAME [ TO | AS | = ] new_table_name;
    RENAME { TABLE | TABLES } { table_name TO new_table_name } [, ... ];
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ]
        { table_name [*] | ONLY table_name | ONLY ( table_name ) }
        RENAME [ COLUMN ] column_name TO new_column_name;
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ]
        { table_name [*] | ONLY table_name | ONLY ( table_name ) }
        RENAME CONSTRAINT constraint_name TO new_constraint_name;
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ] table_name
        SET SCHEMA new_schema;
    ALTER TABLE [ OFFLINE | [ ONLINE [ WITH ( { online_parameter = value } [, ... ] ) ] ] ] [ IF EXISTS ] table_name
        GSIWAITALL;
    The table operation action can be one of the following clauses:
    column_clause
        | ADD table_constraint [ NOT VALID ]
        | ADD table_constraint_using_index
        | VALIDATE CONSTRAINT constraint_name
        | DROP CONSTRAINT [ IF EXISTS ]  constraint_name [ RESTRICT | CASCADE ]
        | DROP PRIMARY KEY
        | DROP FOREIGN KEY fk_symbol
        | CLUSTER ON index_name
        | SET WITHOUT CLUSTER
        | SET ( {storage_parameter = value} [, ... ] )
        | RESET ( storage_parameter [, ... ] )
        | OWNER TO new_owner
        | SET TABLESPACE new_tablespace
        | TO { GROUP groupname | NODE ( nodename [, ... ] ) }
        | ADD NODE ( nodename [, ... ] )
        | DELETE NODE ( nodename [, ... ] )
        | UPDATE SLICE LIKE table_name
        | DISABLE TRIGGER { trigger_name | ALL | USER }
        | ENABLE TRIGGER { trigger_name | ALL | USER }
        | ENABLE REPLICA TRIGGER trigger_name
        | ENABLE ALWAYS TRIGGER trigger_name
        | ENABLE ROW LEVEL SECURITY
        | DISABLE ROW LEVEL SECURITY
        | FORCE ROW LEVEL SECURITY
        | NO FORCE ROW LEVEL SECURITY
        | REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING }
        | AUTO_INCREMENT [ = ] value
        | COMMENT [ = ] 'string'
        | [ [ DEFAULT ] CHARACTER SET | CHARSET [ = ] default_charset ] [ [ DEFAULT ] COLLATE [ = ] default_collation ]
        | CONVERT TO CHARACTER SET | CHARSET charset [ COLLATE collation ]
        | ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER n { day | month | year } OF NO MODIFICATION [ ON ( EXPR )]
        | [ MODIFY { PARTITION partition_name | SUBPARTITION subpartition_name }] ILM { ENABLE | DISABLE | DELETE } POLICY policy_name
        | [ MODIFY { PARTITION partition_name | SUBPARTITION subpartition_name }] ILM { ENABLE_ALL | DISABLE_ALL | DELETE_ALL}
        | htap_action [ ( { column_name [, ... ]} ) ] [ PRIORITY { HIGH | LOW | NONE } ]
    • ADD table_constraint [ NOT VALID ]

      Adds a table constraint.

    • ADD table_constraint_using_index

      Adds a PRIMARY KEY constraint or UNIQUE constraint to a table based on the existing unique index.

    • VALIDATE CONSTRAINT constraint_name

      Validates a check-class constraint created with the NOT VALID option, and scans the entire table to ensure that all rows meet the constraint. Nothing happens if the constraint is already marked valid.

    • DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ]

      Drops a table constraint.

    • DROP PRIMARY KEY

      Drops a primary key constraint from a table.

      This syntax is valid only when sql_compatibility is set to 'B'.

    • DROP FOREIGN KEY fk_symbol

      Drops foreign key constraints from a table. fk_symbol indicates the name of the foreign key constraint to be deleted. This syntax is valid only when sql_compatibility is set to 'B'.

    • CLUSTER ON index_name

      Selects the default index for future CLUSTER operations. Actually, the table is not re-clustered.

    • SET WITHOUT CLUSTER

      Removes the most recently used CLUSTER index specification from the table. This affects future CLUSTER operations that do not specify an index.

    • SET ( {storage_parameter = value} [, ... ] )

      Changes one or more storage parameters for the table. If the value of table_name is an index name, ACTIVE_PAGES specifies the number of index pages, which may be less than the actual number of physical file pages and can be used for optimizer optimization. Currently, this parameter is valid only for the local index of the Ustore partitioned table and will be updated by VACUUM and ANALYZE (including AUTO VACUUM). You are advised not to manually set this parameter.

    • RESET ( storage_parameter [, ... ] )

      Resets one or more storage parameters to their defaults. As with SET, a table rewrite might be needed to update the table entirely.

    • OWNER TO new_owner

      Changes the owner of a table, sequence, or view to the specified user.

    • SET TABLESPACE new_tablespace

      Changes the table's tablespace to the specified tablespace and moves the data files associated with the table to the new tablespace. Indexes on the table, if any, are not moved; but they can be moved separately with additional SET TABLESPACE option in ALTER INDEX.

    • TO { GROUP groupname | NODE ( nodename [, ... ] ) }

      This syntax is available only in extended mode (when the GUC parameter support_extended_features is on). Exercise caution when enabling this mode. It is mainly used by internal tools such as scale-out tools. Common users should not use it.

    • ADD NODE ( nodename [, ... ] )

      It is only available for internal scale-out tools. Common users should not use the syntax.

    • DELETE NODE ( nodename [, ... ] )

      It is only available for internal scale-in tools. Common users should not use the syntax.

    • DISABLE TRIGGER { trigger_name | ALL | USER }

      Disables a single trigger specified by trigger_name, disables all triggers, or disables only user triggers (excluding internally generated constraint triggers, for example, deferrable UNIQUE constraint triggers and exclusion constraint triggers).

      Exercise caution when using this function because data integrity cannot be ensured as expected if the triggers are not executed.

    • ENABLE TRIGGER { trigger_name | ALL | USER }

      Enables a single trigger specified by trigger_name, enables all triggers, or enables only user triggers.

    • ENABLE REPLICA TRIGGER trigger_name

      Determines that the trigger firing mechanism is affected by the configuration variable session_replication_role. When the replication role is origin (default value) or local, a simple trigger is fired.

      When ENABLE REPLICA is configured for a trigger, it is triggered only when the session is in replica mode.

    • ENABLE ALWAYS TRIGGER trigger_name

      Determines that all triggers are fired regardless of the current replica mode.

    • { DISABLE | ENABLE } ROW LEVEL SECURITY

      Enables or disables row-level security for a table.

      If row-level security is enabled for a data table but no row-level security policy is defined, the row-level access to the data table is not affected. If row-level security for a table is disabled, the row-level access to the table is not affected even if a row-level security policy has been defined. For details, see CREATE ROW LEVEL SECURITY POLICY.

    • { NO FORCE | FORCE } ROW LEVEL SECURITY

      Forcibly enables or disables row-level security for a table.

      By default, the table owner is not affected by the row-level security feature. However, if row-level security is forcibly enabled, the table owner (excluding system administrators) will be affected. System administrators are not affected by any row-level security policies.

    • REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING | UNIQUE}

      Specifies the record level of old tuples in UPDATE and DELETE statements on a table in logical replication scenarios.

      • DEFAULT records the old value of the primary key column. If there is no primary key, DEFAULT does not record the old value.
      • USING INDEX records the old values of the columns covered by the named index, which must be unique, non-local, and non-deferrable, and must include only columns marked NOT NULL.
      • FULL records the old values of all columns in the row.
      • NOTHING does not record information in old rows.
      • UNIQUE records the old values of columns of the primary key or unique key.

      In logical replication scenarios, when the UPDATE and DELETE statements of a table are parsed, the parsed old tuples consist of the information recorded in this method. For tables with primary keys, this option can be set to DEFAULT or FULL. For a table without a primary key, set this parameter to FULL. Otherwise, the old tuple will be parsed as empty during decoding. You are advised not to set this parameter to NOTHING in common scenarios because old tuples are always parsed as empty.

      For Ustore tables, the NOTHING option is invalid and behaves the same as FULL. If DEFAULT is set and the table has no primary key, all columns in the row are recorded.
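      For example, a minimal sketch (t1 is a hypothetical table without a primary key):
      -- t1 is hypothetical; record full old rows so UPDATE/DELETE old tuples can be decoded in logical replication.
      ALTER TABLE t1 REPLICA IDENTITY FULL;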

    • AUTO_INCREMENT [ = ] value

      Sets the next auto-increment value of the auto-increment column. The configured value takes effect only when it is greater than the current auto-increment counter.

      The value must be a non-negative integer and cannot be greater than 2^127 – 1.

      This clause takes effect only when sql_compatibility is set to B.
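      For example, a minimal sketch (t1 is a hypothetical table with an auto-increment column; requires sql_compatibility = 'B'):
      -- Takes effect only if 10000 is greater than the current auto-increment counter.
      ALTER TABLE t1 AUTO_INCREMENT = 10000;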

    • [ [ DEFAULT ] CHARACTER SET | CHARSET [ = ] default_charset ] [ [ DEFAULT ] COLLATE [ = ] default_collation ]

      Modifies the default character set and default collation of a table to the specified values. The modification does not affect the existing columns in the table.

    • CONVERT TO CHARACTER SET | CHARSET charset [ COLLATE collation ]

      Modifies the default character set and default collation of a table to the specified values, sets the character set and collation of all character-type columns to the specified values, and converts the data in those columns to the new character set encoding.
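      For example, a minimal sketch (t1 is a hypothetical table; requires a B-compatible database):
      -- Assumes the database encoding is UTF8; existing character data is re-encoded.
      ALTER TABLE t1 CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;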

    • ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER n { day | month | year } OF NO MODIFICATION [ ON ( EXPR )]

      Adds an ILM policy to a table. A complete ILM policy consists of two parts: the ILM action and the ILM condition. The ILM action defines a specific data compression or movement behavior. The ILM condition defines when the ILM action is triggered. The ILM condition is evaluated at the row level: if a row in the heap table has not been modified within the specified period, the condition is met for that row and the ILM action is triggered. EXPR supports only basic operation functions (such as to_date and substr) on table columns and types.
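      For example, a minimal sketch (t1 is a hypothetical table):
      -- Compress rows that have not been modified for 30 days.
      ALTER TABLE t1 ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER 30 day OF NO MODIFICATION;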

    • [ MODIFY { PARTITION partition_name | SUBPARTITION subpartition_name }] ILM { ENABLE | DISABLE | DELETE } POLICY policy_name

      Modifies a single ILM policy of a table (partition or subpartition). policy_name is the value of POLICY_NAME queried from the system view GS_ADM_ILMOBJECTS or GS_MY_ILMOBJECTS.

    • [ MODIFY { PARTITION partition_name | SUBPARTITION subpartition_name }] ILM { ENABLE_ALL | DISABLE_ALL | DELETE_ALL}

      Modifies all ILM policies of a table (partition or subpartition).

    • The column_clause can be one of the following clauses:
      ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ CHARACTER SET | CHARSET charset ] [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ] [ FIRST | AFTER column_name ]    
      | MODIFY column_name data_type
      | MODIFY column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ]
      | MODIFY column_name [ CONSTRAINT constraint_name ] NULL
      | MODIFY [ COLUMN ] column_name data_type [ CHARACTER SET | CHARSET charset ] [{[ COLLATE collation ] | [ column_constraint ]} [ ... ] ] [ FIRST | AFTER column_name ]
      | CHANGE [ COLUMN ] column_name new_column_name data_type [ CHARACTER SET | CHARSET charset ] [{[ COLLATE collation ] | [ column_constraint ]} [ ... ] ] [ FIRST | AFTER column_name ]
      | DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
      | ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]
      | ALTER [ COLUMN ] column_name { SET DEFAULT expression | DROP DEFAULT }
      | ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
      | ALTER [ COLUMN ] column_name SET STATISTICS [ PERCENT ] integer
      | ADD STATISTICS (( column_1_name, column_2_name [, ...] ))
      | DELETE STATISTICS (( column_1_name, column_2_name [, ...] ))
      | DISABLE STATISTICS (( column_1_name, column_2_name [, ...] ))
      | ENABLE STATISTICS (( column_1_name, column_2_name [, ...] ))
      | ALTER [ COLUMN ] column_name SET ( {attribute_option = value} [, ... ] )
      | ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )
      | ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
      | MODIFY column_name DROP IDENTITY
      | MODIFY column_name GENERATED [ ALWAYS | BY DEFAULT [ ON NULL ] ] AS IDENTITY [ ( identity_options ) ]
      | ALTER [ COLUMN ] column_name ADD GENERATED [ ALWAYS | BY DEFAULT [ON NULL] ] AS IDENTITY [ [ SEQUENCE NAME sequence_name] | ( identity_options ) ]
      • ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ CHARACTER SET | CHARSET charset ] [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ] [ FIRST | AFTER column_name]

        Adds a column to a table. If you add a column using ADD COLUMN, all existing rows in the table are initialized to the column's default value (NULL if no DEFAULT value is specified). FIRST | AFTER column_name indicates that a column is added to a certain position. When IF NOT EXISTS is specified and columns with the same name exist, a notice is returned, indicating that the column already exists. When IF NOT EXISTS is not specified and columns with the same name exist, an error is returned.
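        For example, a minimal sketch (t1, c_new, and c1 are hypothetical names):
        -- FIRST | AFTER causes a full-table update (see Precautions).
        ALTER TABLE t1 ADD COLUMN IF NOT EXISTS c_new varchar(32) DEFAULT 'n/a' AFTER c1;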

      • ADD ( { column_name data_type [ compress_mode ] } [, ...] )

        Adds columns in the table.

      • MODIFY ( { column_name data_type | column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ] | column_name [ CONSTRAINT constraint_name ] NULL } [, ...] )

        Modifies the data type of an existing column in the table. Running this command will clear the statistics of this column. You are advised to collect the statistics of this column again after the modification.

      • MODIFY [ COLUMN ] column_name data_type [ CHARACTER SET | CHARSET charset ] [{[ COLLATE collation ] | [ column_constraint ]} [ ... ] ] [FIRST | AFTER column_name]

        Replaces the definition of an existing column in a table with a new definition. Indexes and independent object constraints on the previous column, such as primary key, unique key, and CHECK constraints, are not deleted. The [FIRST | AFTER column_name] syntax indicates that the position of the column in the table is changed while the column definition is modified.

        This syntax can be used only when sql_compatibility is set to 'B'. Foreign tables are not supported. Encrypted columns cannot be modified. The data type and collation rule of partition key columns cannot be modified. The data type and collation rule of columns referenced by rules and materialized views cannot be modified.

        If a column whose data type or collation rule is modified is referenced by a generated column, the data in the generated column is regenerated.

        If the IDENTITY attribute is specified for a column that does not contain this attribute, the IDENTITY attribute is successfully added. If the IDENTITY attribute is specified for a column that already contains this attribute, an error is reported.

        If some objects (such as indexes, independent object constraints, views, triggers, and row-level security policies) depend on a modified column, these objects are rebuilt during column modification. If the definition of the modified column violates the constraints of this type of object, the modification fails. For example, the data type of a column that is used as a view result column cannot be modified. Evaluate the impact of a possible failure before modification.

        If a modified column is referenced by objects such as user-defined functions or stored procedures, the modification does not update these objects. After the column is modified, these objects may become unavailable. Evaluate the impact before modification.

        Changing the character set or collation of a column converts the data in the column to the new character set for encoding.

        The syntax of this clause is the same as that of MODIFY column_name data_type in the previous clause, but the semantics are different. When the GUC parameter b_format_behavior_compat_options contains the enable_modify_column option, the behavior of this clause is used.

        Running this command will clear the statistics of this column. You are advised to collect the statistics of this column again after the modification.

        This command refreshes statistics and historical statistics. The more statistics are recorded, the higher the cost of this command. Therefore, exercise caution when running this command in scenarios with many partitions or a large amount of historical statistics.
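        For example, a minimal sketch (t1, c1, and c2 are hypothetical names; requires sql_compatibility = 'B' and b_format_behavior_compat_options containing enable_modify_column):
        -- Redefine c1 and move it after c2.
        ALTER TABLE t1 MODIFY COLUMN c1 varchar(64) NOT NULL AFTER c2;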

      • CHANGE [ COLUMN ] column_name new_column_name data_type [ CHARACTER SET | CHARSET charset ] [{[ COLLATE collation ] | [ column_constraint ]} [ ... ] ] [FIRST | AFTER column_name]

        Replaces the definition and name of an existing column in a table with a new definition and name. The new column name must be different from the previous one. Indexes and independent object constraints on the previous column, such as primary key, unique key, and CHECK constraints, are not deleted. The [FIRST | AFTER column_name] syntax indicates that the position of the column in the table is changed while the name and definition of the column are modified.

        This syntax can be used only when sql_compatibility is set to 'B'. Foreign tables are not supported. Encrypted columns cannot be modified. The data type and collation rule of partition key columns cannot be modified. The data type and collation rule of columns referenced by rules and materialized views cannot be modified.

        If a column whose data type or collation rule is modified is referenced by a generated column, the data in the generated column is regenerated.

        If some objects (such as indexes, independent object constraints, views, triggers, and row-level security policies) depend on a modified column, these objects are rebuilt during column modification. If the definition of the modified column violates the constraints of this type of object, the modification fails. For example, the data type of a column that is used as a view result column cannot be modified. Evaluate the impact of a possible failure before modification.

        If a modified column is referenced by objects such as user-defined functions or stored procedures, the modification does not update these objects. After the column name is modified, these objects may become unavailable. Evaluate the impact before modification.

        Changing the character set or collation of a column converts the data in the column to the new character set for encoding.
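        For example, a minimal sketch (t1, c_old, and c_new are hypothetical names; requires sql_compatibility = 'B'):
        -- Rename c_old to c_new and redefine its type in one statement.
        ALTER TABLE t1 CHANGE COLUMN c_old c_new varchar(64);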

      • DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]

        Drops a column from a table. Indexes and constraints related to the column are automatically dropped. If an object not belonging to the table depends on the column, CASCADE must be specified, such as a view.

        The DROP COLUMN statement does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent INSERT and UPDATE operations on the table will store a NULL value for the column. Therefore, column deletion takes a short period of time but does not immediately release the tablespace on the disks, because the space occupied by the deleted column is not recycled. The space will be recycled when VACUUM is executed.
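        For example, a minimal sketch (t1 and c1 are hypothetical names):
        -- CASCADE also drops dependent objects outside the table, such as views that reference c1.
        ALTER TABLE t1 DROP COLUMN IF EXISTS c1 CASCADE;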

      • ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]

        Modifies the type of a column in a table. Indexes and simple table constraints on the column will automatically use the new data type by reparsing the originally supplied expression.

        If the original data type of a column and the modified data type are binary compatible, you do not need to rewrite the entire table when running this statement. In other scenarios, the entire table is rewritten. You can check whether the source type and target type are binary compatible in the PG_CAST system catalog. If castmethod is b, they are binary compatible. For example, if the data type of the source table is text and is converted to int, table rewriting is triggered. If it is converted to clob, table rewriting is not triggered. If table rewriting is triggered, the deleted space on the table is recycled immediately.

        Running this command will clear the statistics of this column. You are advised to collect the statistics of this column again after the modification.
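        For example, a minimal sketch (t1 and c1 are hypothetical names):
        -- Check whether text and clob are binary compatible (castmethod = 'b' means no table rewrite).
        SELECT castmethod FROM pg_cast WHERE castsource = 'text'::regtype AND casttarget = 'clob'::regtype;
        ALTER TABLE t1 ALTER COLUMN c1 TYPE clob;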

      • ALTER [ COLUMN ] column_name { SET DEFAULT expression | DROP DEFAULT }

        Sets or removes the default value for a column. The default values only apply to subsequent INSERT operations; they do not cause rows already in the table to change. Defaults can also be created for views, in which case they are inserted into INSERT statements on the view before the view's ON INSERT rule is applied.

      • ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL

        Alters whether a column is marked to allow NULL values or to reject NULL values. You can only use SET NOT NULL when the column contains no null values.

      • ALTER [ COLUMN ] column_name SET STATISTICS [PERCENT] integer

        Specifies the per-column statistics-collection target for subsequent ANALYZE operations. The target can be set in the range from 0 to 10000. Set it to –1 to revert to using the default system statistics target.

      • { ADD | DELETE } STATISTICS ((column_1_name, column_2_name [, ...]))

        Adds or deletes the declaration of collecting multi-column statistics to collect multi-column statistics as needed when ANALYZE is performed for a table or a database. If the GUC parameter enable_functional_dependency is disabled, the statistics about a maximum of 32 columns can be collected at a time. If the GUC parameter enable_functional_dependency is enabled, the statistics about a maximum of 4 columns can be collected at a time. You are not allowed to add or delete such declaration for system catalogs or foreign tables.
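        For example, a minimal sketch (t1, c1, and c2 are hypothetical names):
        -- Declare multi-column statistics, then collect them with ANALYZE.
        ALTER TABLE t1 ADD STATISTICS ((c1, c2));
        ANALYZE t1;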

      • {DISABLE | ENABLE} STATISTICS ((column_1_name, column_2_name [, ...]))

        Sets the status of multi-column statistics. If multi-column statistics are disabled, related statistics are not collected, and the optimizer does not use the multi-column statistics during plan generation.

      • ALTER [ COLUMN ] column_name SET ( {attribute_option = value} [, ... ] )

        ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )

        Sets or resets per-attribute options.

        Currently, the only defined attribute options are n_distinct and n_distinct_inherited. n_distinct affects statistics of a table, while n_distinct_inherited affects the statistics of the table and its subtables. Currently, only SET/RESET n_distinct is supported, and SET/RESET n_distinct_inherited is forbidden.

      • ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }

        Sets the storage mode for a column. It specifies whether this column is held inline or in an attached table, and whether the data should be compressed. SET STORAGE itself does not change anything in the table. It sets the strategy to be pursued during future table updates.

      • MODIFY column_name DROP IDENTITY

        Cancels an IDENTITY column. This operation deletes only the identity attribute and related implicit sequence of the column.

      • MODIFY column_name GENERATED [ ALWAYS | BY DEFAULT [ ON NULL ] ] AS IDENTITY [ ( identity_options ) ]

        Modifies the attributes and sequence parameters of the IDENTITY column. If the attributes of the IDENTITY column are not specified, the original attributes of the IDENTITY column are retained. The optional identity_options clause can be used to override sequence options.

      • ALTER [ COLUMN ] column_name ADD GENERATED [ ALWAYS | BY DEFAULT [ON NULL] ] AS IDENTITY [ [ SEQUENCE NAME sequence_name ] | ( identity_options ) ]

        Sets an existing column to an IDENTITY column. The column must be of the smallint, integer, bigint, decimal, numeric, float, double precision, or real type (decimal, numeric, float, double precision, or real in A-compatible mode) and must not already have a DEFAULT value. sequence_name specifies the name of the implicit sequence. By default, a NOT NULL constraint is added to the IDENTITY column. If the column contains NULL values, the modification fails.
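        For example, a minimal sketch (t1 and c1 are hypothetical names; c1 must contain no NULL values and have no DEFAULT value):
        ALTER TABLE t1 ALTER COLUMN c1 ADD GENERATED BY DEFAULT AS IDENTITY (INCREMENT BY 1 START WITH 1);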

      • column_constraint is as follows:
        [ CONSTRAINT constraint_name ]
            { NOT NULL |
              NULL |
              CHECK ( expression ) |
              DEFAULT default_expr |
              ON UPDATE update_expr |
              GENERATED ALWAYS AS ( generation_expr ) [STORED] |
              GENERATED [ ALWAYS | BY DEFAULT [ ON NULL ] ] AS IDENTITY [ ( identity_options ) ] |
              AUTO_INCREMENT |
              COMMENT 'string' |
              COLVIEW |
              NOCOLVIEW |
              UNIQUE [KEY] index_parameters |
              PRIMARY KEY index_parameters |
              ENCRYPTED WITH ( COLUMN_ENCRYPTION_KEY = column_encryption_key, ENCRYPTION_TYPE = encryption_type_value ) |
              REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
                  [ ON DELETE action ] [ ON UPDATE action ] }
            [ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
        • htap_action is as follows:
          COLVIEW
              | COLVIEW NOCOLVIEW
              | NOCOLVIEW
              | NOCOLVIEW COLVIEW
        • identity_options is as follows:
          [ INCREMENT [ BY ] increment ] [ MINVALUE minvalue | NO MINVALUE | NOMINVALUE] [ MAXVALUE maxvalue | NO MAXVALUE | NOMAXVALUE ] [ START { [ WITH ] start | WITH LIMIT VALUE } ]
          [ CACHE cache | NOCACHE ] [ [ NO ] CYCLE | NOCYCLE] [ SCALE [ EXTEND | NOEXTEND] | NOSCALE ]
        • update_expr is as follows:
          { CURRENT_TIMESTAMP | LOCALTIMESTAMP | NOW() }
      • compress_mode of a column is as follows:
        { DELTA | PREFIX | DICTIONARY | NUMSTR | NOCOMPRESS }
    • table_constraint_using_index used to add the PRIMARY KEY constraint or UNIQUE constraint based on the unique index is as follows:
      [ CONSTRAINT constraint_name ]
          { UNIQUE | PRIMARY KEY } USING INDEX index_name
          [ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
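      For example, a minimal sketch (t1 and idx_t1_c1 are hypothetical names; idx_t1_c1 must be an existing unique index on t1):
      ALTER TABLE t1 ADD CONSTRAINT t1_pkey PRIMARY KEY USING INDEX idx_t1_c1;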
    • table_constraint is as follows:
      [ CONSTRAINT [ constraint_name ] ]
          { CHECK ( expression ) |
            UNIQUE [ idx_name ] [ USING method ] ( { { column_name [ ( length ) ] | ( expression ) } [ ASC | DESC ] } [, ... ] ) index_parameters |
            PRIMARY KEY [ USING method ] ( { column_name [ ASC | DESC ] }[, ... ] ) index_parameters |
            FOREIGN KEY [ idx_name ] ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ]
              [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] }
          [ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ] [ NOT ENFORCED ]
          { [ COMMENT 'string' ] [ ... ] }

      The NOT ENFORCED syntax specifies that the created constraint is an informational constraint. That is, the database only records the constraint information and does not impose mandatory constraints on data. This syntax is mainly used to provide information required for optimization for the optimizer. Currently, only UNIQUE KEY, PRIMARY KEY, and FOREIGN KEY are supported.

      index_parameters is as follows:
      [ WITH ( {storage_parameter = value} [, ... ] ) ]
          [ USING INDEX TABLESPACE tablespace_name ]
  • Rename a table. The renaming does not affect stored data.
    ALTER TABLE [ IF EXISTS ] table_name
        RENAME [ TO | AS | = ] new_table_name;

    If you run this statement in a version 5.7 B-compatible database (sql_compatibility set to 'B', b_format_version set to '5.7', and b_format_dev_version set to 's2'), the following situations may occur:

    • If the character string corresponding to the new table name starts with "#mysql50#" and is followed by other characters, "#mysql50#" will be ignored.
    • If the old and new table names are the same, no error is reported.
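    For example, a minimal sketch (t1 and t1_new are hypothetical names):
    ALTER TABLE IF EXISTS t1 RENAME TO t1_new;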
  • Rename the specified column in the table.
    ALTER TABLE [ IF EXISTS ] { table_name [*] | ONLY table_name | ONLY ( table_name ) }
        RENAME [ COLUMN ] column_name TO new_column_name;
  • Rename the constraint of the table.
    ALTER TABLE [ IF EXISTS ] { table_name [*] | ONLY table_name | ONLY ( table_name ) }
        RENAME CONSTRAINT constraint_name TO new_constraint_name;
  • Set the schema of the table.
    ALTER TABLE [ IF EXISTS ] table_name
        SET SCHEMA new_schema;
    • The schema setting moves the table into another schema. Associated indexes and constraints owned by table columns are migrated as well. Currently, the schema of sequences cannot be changed. If the table owns sequences, delete the sequences and re-create them, or remove the ownership link between the table and the sequences; the table schema can then be changed.
    • To change the schema of a table, you must also have the CREATE permission on the new schema. To add the table as a new child of a parent table, you must own the parent table as well. To change the owner, you must also be a direct or indirect member of the new owning role, and that role must have the CREATE permission on the table's schema. These restrictions ensure that changing the owner does not do anything that could not be done by dropping and re-creating the table. However, a system administrator can alter the ownership of any table.
    • All the actions except for RENAME and SET SCHEMA can be combined into a list of multiple alterations to apply in parallel. For example, it is possible to add several columns or alter the type of several columns in a single command. This is useful with large tables, since only one pass over the tables need be made.
    • Adding a CHECK or NOT NULL constraint will scan the table to validate that existing rows meet the constraint.
    • Adding a column with a non-NULL default or changing the type of an existing column will rewrite the entire table. Rewriting a large table may take much time and temporarily needs doubled disk space.
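    For example, a minimal sketch of combining multiple alterations in one statement (t1, c1, and c2 are hypothetical names):
    -- Multiple actions in one ALTER TABLE command; only one pass over the table is needed.
    ALTER TABLE t1 ADD COLUMN c1 integer DEFAULT 0, ALTER COLUMN c2 SET NOT NULL;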
  • Add columns.
    ALTER TABLE [ IF EXISTS ] table_name
        ADD ( { column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ]} [, ...] );
  • Update columns.
    ALTER TABLE [ IF EXISTS ] table_name 
        MODIFY ( { column_name data_type | column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ] | column_name [ CONSTRAINT constraint_name ] NULL } [, ...] );
  • Commit all DML transactions in the table when the SQL statement is executed.
    ALTER TABLE [ IF EXISTS ] table_name GSIWAITALL;

    This syntax is internally called by the CREATE GLOBAL INDEX CONCURRENTLY function to synchronize lock wait during online creation of global secondary indexes. The centralized system does not support distributed global secondary indexes. Therefore, this syntax is not supported.

Parameters

  • ONLINE | OFFLINE

    Controls whether to enable online DDL operations.

    The keywords ONLINE and OFFLINE are used.
    • ONLINE: DDL operations are executed online. In this case, if the ALTER TABLE statement contains a scenario that is not supported by the current DDL feature, the DDL operations are executed offline and a notice is displayed, indicating that the ONLINE keyword is ignored.
    • OFFLINE: DDL operations are executed offline.
      • Whether the data meets the data type conversion, length modification, or constraint requirements is checked during online DDL operations (including changing the column type, precision, and length, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints). If the data does not meet the requirements, an error is reported during the DDL operation. Online DDL operations support concurrent DML operations. Concurrent DML operations may delete data that does not meet conversion requirements or violates constraints, but an error may still be reported for the DDL operation.

        For example, when online DDL scans a table to change the column type, if the data of tuple A in the table does not meet data conversion requirements, an error is reported immediately when the online DDL transaction scans tuple A. At the same time, if tuple A is deleted by a parallel transaction DELETE after tuple A has been scanned by an online DDL operation, tuple A is not considered as deleted.

      • Online DDL error troubleshooting: During online DDL operations (including changing the column type, precision, and length, and adding NOT NULL, CHECK, PRIMARY KEY, and UNIQUE constraints), the system automatically creates a temporary schema as an implementation aid and sets the table to the append_mode state. If the online DDL execution completes, the temporary schema and the table state are automatically cleared and restored. Automatic cleanup may fail due to reasons such as lock timeout; in this case, manual cleanup is required. If a severe fault occurs, such as FATAL, PANIC, a database fault, or node disconnection, the created temporary schema and table appending status may remain, and you need to manually clear and restore them. The scenarios are as follows:
        • Canceling online DDL operations: If you cancel online DDL operations during the execution, the system automatically clears the data.
        • Operation error: If an error needs to be reported during the execution of a DDL command, for example, type conversion fails or length modification damages data, the system reports an error and automatically clears the data.
        • Major fault: After the cluster becomes normal, manually clear the data through online_ddl_cleanup().
        • Adding PRIMARY KEY and UNIQUE constraints online: If an exception (for example, manual cancellation, duplicate unique index key values, insufficient resources, thread startup failure, or lock timeout) occurs during online index creation, the index fails to be created online. In this case, residual indexes and temporary tables may occupy system resources. For details about how to handle the exception, see CONCURRENTLY.
      • You are advised not to call online_ddl_cleanup() during online DDL execution. Otherwise, the online DDL operations may be terminated or an unknown error may occur.
      • When the length of a char-type column is extended online, the underlying data is not modified or rewritten. After the online DDL operation, existing data retains the old length in the underlying storage, while new data and data modified by DML operations follow the extended length.
  • online_parameter

    Specifies the online DDL option when the ONLINE keyword is specified. If the ONLINE keyword is not specified or online DDL conditions are not met, this parameter does not take effect.

    Value range:
    • parallel_threads: specifies the number of threads when parallelism is enabled. Value range: an integer ranging from 1 to 32. The default value is 1.
      • When parallel_threads is set to a value greater than 1, multiple threads are enabled and the number of threads is specified.
      • If parallel_threads is not set or is set to 1, multi-thread acceleration is disabled.

        When a user enables resource and workload management, for example, I/O resource management (for example, io_limits > 0 or io_priority = high/medium/low), parallel_threads ignores resource and workload management under the user or session. In addition, a notice is displayed, indicating that the current resource and workload management is invalid and multi-thread acceleration is enabled. The multi-thread execution process of online DDL operations actually follows the resource and workload management of the initial user.

      • When parallel_workers and parallel_threads are used together for acceleration, parallel_workers controls the maximum number of parallel threads in an index, and parallel_threads controls the maximum number of parallel threads between indexes.
    • max_catchup_times: specifies the maximum number of online DDL catchups to prevent long online DDL execution time. Value range: an integer ranging from 1 to 50. The default value is 50.
      • Online DDL execution records the DML incremental data generated during data reconstruction to a temporary table and performs multiple cyclic catchups to reconstruct the incremental data. If the amount of concurrent DML data is too large, online DDL catchup fails. To prevent this problem, you can set this parameter to limit the number of catchups.
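    For example, a minimal sketch (t1 and c1 are hypothetical names):
    -- Change a column type online with 4 parallel threads and at most 20 catchup rounds.
    ALTER TABLE ONLINE WITH (parallel_threads = 4, max_catchup_times = 20) t1 MODIFY (c1 bigint);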
  • IF EXISTS

    Sends a notice instead of an error if the specified table does not exist. The notice indicates that the table you are querying does not exist.

  • table_name [*] | ONLY table_name | ONLY ( table_name )

    table_name is the name of the table that you need to change.

    If ONLY is specified, only the table is modified. If ONLY is not specified, the table and all subtables are modified. You can explicitly add the asterisk (*) option following the table name to specify that all subtables are scanned, which is the default operation.

  • constraint_name
    • Specifies the name of an existing constraint to drop in the DROP CONSTRAINT operation.
    • Specifies the name of a new constraint in the ADD CONSTRAINT operation.

      For a new constraint, constraint_name is optional in B-compatible mode (that is, sql_compatibility set to 'B'). For other modes, constraint_name is required.

  • index_name

    Specifies the index name.

    In the ADD CONSTRAINT operation:

    • The index_name is supported only in B-compatible databases (that is, sql_compatibility set to 'B').
    • For foreign key constraints, if constraint_name and index_name are specified at the same time, constraint_name is used as the index name.
    • For a unique key constraint, if both constraint_name and index_name are specified, index_name is used as the index name.
  • USING method

    Specifies the name of the index method to be used.

    For details about the value range, see USING method in Parameters.

    In the ADD CONSTRAINT operation:

    • The USING method is supported only in B-compatible databases (that is, sql_compatibility set to 'B').
    • In B-compatible mode, if USING method is not specified, the default index method is B-tree for Astore or UB-tree for Ustore.
    • If the storage mode of a table is Ustore and the constraint in the SQL statement is specified as USING BTREE, the underlying layer automatically creates the constraint as USING UBTREE.
  • ASC | DESC

    ASC specifies an ascending (default) sort order. DESC specifies a descending sort order.

    ADD CONSTRAINT supports ASC | DESC only in B-compatible databases (that is, sql_compatibility set to 'B') and M-compatible databases (that is, sql_compatibility set to 'M'). For details about the behavior of the M-compatible databases, see "SQL Reference > SQL Syntax > SQL Statements > A > ALTER TABLE" in M Compatibility Developer Guide.

  • expression

    Specifies an expression index constraint based on one or more columns of the table. It must be written in parentheses.

    Expression indexes in the UNIQUE constraint are supported only in B-compatible databases (that is, sql_compatibility set to 'B').

  • storage_parameter

    Specifies the name of a storage parameter.

    The following option is added for creating an index:

    • parallel_workers (int type)

      Number of bgworker threads started when an index is created. For example, value 2 indicates that two bgworker threads are started to create indexes concurrently.

      Value range: [0,32]. The value 0 indicates that parallel index creation is disabled.

      Default value: If this parameter is not set, the parallel index creation function is disabled.

    • hasuids (Boolean type)

      Default value: off

      If this parameter is set to on, a unique table-level ID is allocated to a tuple when the tuple is updated.

    • statistic_granularity

      Records the default partition_mode used when statistics are collected for the table (ANALYZE). For details about partition_mode, see Parameters. This parameter is invalid for non-partitioned tables.

      Value range: See the value range of partition_mode.

      Default value: AUTO

    The following options are added to online DDL operations:

    • online_ddl_status (enumerated type)

      This parameter is used to update the online DDL execution status of a target table.

      It is called in online DDL operations. You are advised not to use it.

      • on: enables the online DDL status.
      • off: disables the online DDL status.
      • init: initializes online DDL temporary objects.
      • catchup: performs catchup when column types, precisions, or lengths are changed online.
      • scan_catchup: performs catchup when NOT NULL or CHECK constraints are added online.
      • vacuum_catchup: performs catchup when VACUUM FULL is performed online.
      • vacuum_verbose_catchup: performs catchup when VACUUM FULL VERBOSE is performed online.
      • last_catchup: reports errors for the write service in the last online DDL catchup.
      • switch: exchanges temporary tables in the last online DDL catchup.
      • copy_switch: exchanges temporary tables in the last online tablespace movement catchup.
      • record_toast_oid: records the OID of a TOAST table attached to a target table of online DDL operations.
      • abort: specifies that online DDL operations enter the abort state.
      • flush_buffer: forcibly flushes buffers to disks when tablespaces are moved online.
      • tmp_rel: indicates a temporary table created during an online DDL process.
    • online_ddl_lock_tag (int type)

      Unique ID of a target table of online DDL operations, which is used to generate temporary objects. The ID is the hash value of (Table schema + Table name).

      It is called in online DDL operations. You are advised not to use it.

  • new_owner

    Specifies the name of the new table owner.

  • new_tablespace

    Specifies the new name of the tablespace to which the table belongs.

  • column_name, column_1_name, column_2_name

    Specifies the name of a new or existing column.

  • data_type

    Specifies the type of a new column or a new type of an existing column.

  • compress_mode

    Specifies whether to compress a table column. The clause specifies the compression algorithm preferentially used by the column. This parameter is available only to column-store compression.

  • charset

    Specifies the character set of a table column. If this parameter is specified separately, the collation of the table column is set to the default collation of the specified character set.

    This syntax is supported only in B-compatible databases (that is, sql_compatibility is set to 'B').

  • collation

    Specifies the collation rule name of a column. The optional COLLATE clause specifies a collation for the new column; if omitted, the collation is the default for the new column. You can run the select * from pg_collation; command to query collation rules from the pg_collation system catalog. The default collation rule is the row starting with default in the query result.

    In a B-compatible database (that is, sql_compatibility set to 'B'), the utf8mb4_bin, utf8mb4_general_ci, utf8mb4_unicode_ci, binary, gbk_chinese_ci, gbk_bin, gb18030_chinese_ci, and gb18030_bin collations are also supported. For details, see Table 1.

    • Only the character type supports the specified character set. If the binary character set or collation is specified, the character type is converted to the corresponding binary type. If the type mapping does not exist, an error is reported. Currently, only the mapping from the TEXT type to the BLOB type is available.
    • Except the binary character set and collation, only the character set that is the same as the database encoding can be specified.
    • If the character set or collation of a column is not explicitly specified and the default character set or collation of the table is specified, the character set or collation of the column is inherited from the table. If the default character set or collation of a table does not exist, the character set and collation of table columns inherit the character set and collation of the current database when b_format_behavior_compat_options contains 'default_collation'.
    • If the character set corresponding to the modified character set or collation is different from the character set of current column, the data in the column is converted to the specified character set for encoding.
  • USING expression

    Specifies how to compute the new column value from the old; if omitted, the default conversion is an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from the old to new type.

    USING in ALTER TYPE can specify any expression involving the old values of the row; that is, it can refer to any columns other than the one being cast. This allows general casting to be done with the ALTER TYPE syntax. Because of this flexibility, the USING expression is not applied to the column's default value (if any); the result might not be a constant expression as required for a default. This means that when there is no implicit or assignment cast from old to new type, ALTER TYPE might fail to convert the default even though a USING clause is supplied. In such cases, drop the default with DROP DEFAULT, perform ALTER TYPE, and then use SET DEFAULT to add a suitable new default. Similar considerations apply to indexes and constraints involving the column.
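
    For example, a minimal sketch of an explicit conversion with USING (the table and column names are illustrative):

    gaussdb=# CREATE TABLE t_conv(c1 VARCHAR(10));
    -- Convert c1 to INTEGER, computing the new values explicitly from the old text values.
    gaussdb=# ALTER TABLE t_conv ALTER COLUMN c1 TYPE INTEGER USING c1::INTEGER;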

  • NOT NULL | NULL

    Sets whether the column allows null values.

  • ENABLE

    Specifies that the constraint is enabled. By default, the constraint is enabled.

  • integer

    Specifies the constant value of a signed integer. When using PERCENT, the range of integer is from 0 to 100.

  • attribute_option

    Specifies an attribute option.

  • PLAIN | EXTERNAL | EXTENDED | MAIN

    Specifies the storage mode of a column.

    • PLAIN must be used for fixed-length values (such as integers); the value is stored inline and uncompressed.
    • MAIN is for inline, compressible data.
    • EXTERNAL is for external, uncompressed data. Use of EXTERNAL will make substring operations on text and bytea values run faster, at the penalty of increased storage space.
    • EXTENDED is for external, compressed data. EXTENDED is the default for most data types that support non-PLAIN storage.
  • CHECK ( expression )

    New rows or rows to be updated must make the expression evaluate to true. If any row produces a false result, an error is raised and the database is not modified.

    A CHECK constraint specified as a column constraint should reference only the column's value, while an expression in a table constraint can reference multiple columns.

    Currently, CHECK ( expression ) does not include subqueries and cannot use variables apart from the current column.
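
    For example, a minimal sketch of adding a table-level CHECK constraint (the table and column names are illustrative):

    gaussdb=# CREATE TABLE t_check(price NUMERIC, discount NUMERIC);
    -- The expression may reference several columns because it is a table constraint.
    gaussdb=# ALTER TABLE t_check ADD CONSTRAINT ck_discount CHECK (discount >= 0 AND discount <= price);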

  • DEFAULT default_expr

    Assigns a default data value to a column.

    The data type of the default expression must match the data type of the column.

    The default expression will be used in any insert operation that does not specify a value for the column. If there is no default value for a column, then the default value is NULL.

  • ON UPDATE update_expr

    The ON UPDATE clause is an attribute constraint of a column.

    When an UPDATE operation on a row changes the value of at least one updated column, every column that has this attribute but is not among the updated columns is automatically set to the current timestamp. If the new values of the updated columns are the same as the old values, columns with this attribute that are not among the updated columns keep their original values. If a column with this attribute is itself among the updated columns, it is set to the explicitly specified value.

    • This attribute can be specified in a B-compatible database (that is, sql_compatibility is set to 'B', b_format_version is set to '5.7', and b_format_dev_version is set to 's1') or a PG-compatible database (this syntax is not supported during upgrade observation).
    • In terms of syntax, update_expr supports three keywords: CURRENT_TIMESTAMP, LOCALTIMESTAMP, and NOW(). You can also specify or not specify the precision of a keyword with parentheses. The three types of keywords are synonyms of each other and have the same attribute effect. For example, ON UPDATE CURRENT_TIMESTAMP(), ON UPDATE CURRENT_TIMESTAMP(5), ON UPDATE LOCALTIMESTAMP(), and ON UPDATE LOCALTIMESTAMP(6). If the keyword does not contain parentheses or contains empty parentheses, the precision is 0. The NOW keyword cannot contain parentheses. In a PG-compatible database, ON UPDATE CURRENT_TIMESTAMP(), ON UPDATE LOCALTIMESTAMP(), and ON UPDATE NOW(5) are not supported.
    • This attribute can be specified only on columns of the following types: timestamp, datetime, date, time without time zone, smalldatetime, and abstime. A PG-compatible database does not support the datetime type.
    • The CREATE TABLE AS syntax does not inherit the column attributes.
    • The CREATE TABLE LIKE syntax can use INCLUDING UPDATE or EXCLUDING UPDATE to inherit or exclude a constraint. In a PG-compatible database, only EXCLUDING UPDATE can be used to exclude a constraint. The INCLUDING ILM option is added to copy the ILM policy information of a source table. This option is used together with the INCLUDING PARTITION option to copy the policy information of partition objects in the source table.
    • The precision specified by this attribute can be different from the precision specified by the type in the corresponding column. After the column value is updated through this attribute, the minimum precision is displayed. For example, ALTER TABLE t1 ADD col1 timestamp(6) ON UPDATE CURRENT_TIMESTAMP(3);. If the UPDATE syntax triggers the attribute to take effect, three decimal places in the value of col1 are displayed after the update.
    • The same column cannot be specified for this attribute and the generated column constraint at the same time.
    • This attribute cannot be specified for the partition key in a partitioned table.
  • COLUMN_ENCRYPTION_KEY = column_encryption_key

    Specifies the name of the CEK in the ENCRYPTED WITH constraint.

    Value range: a string that complies with the Identifier Naming Conventions.

  • ENCRYPTION_TYPE = encryption_type_value

    Specifies the encryption type in the ENCRYPTED WITH constraint. The value of encryption_type_value can be DETERMINISTIC or RANDOMIZED.

  • GENERATED ALWAYS AS ( generation_expr ) [STORED]

    This clause creates a column as a generated column. The value of the generated column is calculated by generation_expr when data is written (inserted or updated). STORED indicates that the value of the generated column is stored like an ordinary column.

    • The STORED keyword can be omitted; the semantics are the same whether or not it is specified.
    • The generation expression cannot refer to data other than the current row in any way. The generation expression cannot reference other generation columns or system columns. The generation expression cannot return a result set. No subquery, aggregate function, or window function can be used. The function called by the generation expression can only be an immutable function.
    • Default values cannot be specified for generated columns.
    • The generated column cannot be used as a part of the partition key.
    • Do not specify the generated column and the CASCADE, SET NULL, and SET DEFAULT actions of the ON UPDATE constraint clause at the same time. Do not specify the generated column and the SET NULL, and SET DEFAULT actions of the ON DELETE constraint clause at the same time.
    • The method of modifying and deleting generated columns is the same as that of ordinary columns. If you delete an ordinary column that a generated column depends on, the generated column is automatically deleted. The type of the column on which the generated column depends cannot be changed.
    • The generated column cannot be directly written. In the INSERT or UPDATE statement, values cannot be specified for generated columns, but the keyword DEFAULT can be specified.
    • The permission control for generated columns is the same as that for ordinary columns.
    • Foreign tables generally do not support generated columns; among foreign tables, only postgres_fdw supports generated columns.
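
    For example, a minimal sketch of adding a stored generated column (the table and column names are illustrative):

    gaussdb=# CREATE TABLE t_gen(price NUMERIC, tax NUMERIC);
    -- full_price is recomputed from price and tax whenever a row is inserted or updated.
    gaussdb=# ALTER TABLE t_gen ADD COLUMN full_price NUMERIC GENERATED ALWAYS AS (price + tax) STORED;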
  • GENERATED [ ALWAYS | BY DEFAULT [ ON NULL ] ] AS IDENTITY [ ( identity_options ) ]

    Creates a column as an IDENTITY column. An implicit sequence is automatically created based on identity_options and attached to a specified column. When data is inserted, the value obtained from the sequence is automatically assigned to the column.

    • GENERATED [ ALWAYS ] AS IDENTITY: Only identity values provided by the sequence generator can be inserted into this column. User-specified values cannot be inserted into this column.
    • GENERATED BY DEFAULT AS IDENTITY: User-specified values are preferentially inserted into this column. If no value is specified, the identity values provided by the sequence generator are inserted.
    • GENERATED BY DEFAULT ON NULL AS IDENTITY: User-specified values are preferentially inserted into this column. If a null value is specified or no value is specified, the identity values provided by the sequence generator are inserted.
    The optional identity_options clause can be used to override sequence options.
    • increment: specifies the implicit sequence step. If the value is a positive number, an ascending sequence is generated. If the value is a negative number, a descending sequence is generated. The default value is 1.
    • MINVALUE minvalue | NO MINVALUE | NOMINVALUE: specifies the minimum value of the implicit sequence. If minvalue is not specified or NO MINVALUE is specified, the default value is 1 for an ascending sequence and -10^27 + 1 for a descending sequence. NOMINVALUE is equivalent to NO MINVALUE.
    • MAXVALUE maxvalue | NO MAXVALUE | NOMAXVALUE: specifies the maximum value of the implicit sequence. If maxvalue is not specified or NO MAXVALUE is specified, the default value is 10^28 - 1 for an ascending sequence and -1 for a descending sequence. NOMAXVALUE is equivalent to NO MAXVALUE.
    • [ START { [ WITH ] start | WITH LIMIT VALUE } ]: start specifies the start value of the specified implicit sequence. Default value: minvalue for ascending sequences and maxvalue for descending sequences. If START WITH LIMIT VALUE is declared, the maximum (for ascending sequences) or minimum (for descending sequences) value in the table is specified as the start value of an implicit sequence.
    • cache: specifies the number of sequence values preallocated in memory for faster access. The default value is 1, meaning only one value is generated at a time (no caching).
    • NOCACHE: no sequence values are preallocated.
    • CYCLE: allows the sequence to wrap around after it reaches maxvalue (ascending) or minvalue (descending). If NO CYCLE is declared, any call to nextval after the sequence reaches its maximum or minimum value returns an error. NOCYCLE is equivalent to NO CYCLE. The default value is NO CYCLE.
    • SCALE: enables sequence scalability. If specified, a numeric offset is added at the beginning of the generated sequence value to prevent duplicates among generated values. If NOSCALE is declared, sequence scalability is disabled. The default value is NOSCALE.
    • EXTEND: extends the numeric offset length (default value: 6), aligns the sequence generation value to x (default value: 6) + y (maximum number of digits). SCALE must be specified when EXTEND is specified. If NOEXTEND is declared, the numeric offset length is not extended. The default value is NOEXTEND.
    • The IDENTITY column can only be of the smallint, integer, bigint, decimal, numeric, float, double precision, or real type.
    • In A-compatible mode, when an IDENTITY column of the integer type is created, the IDENTITY column is of the numeric type by default.
    • The method of changing an IDENTITY column is the same as that of changing an ordinary column, but the type can be changed only to smallint, integer, bigint, decimal, numeric, float, double precision, or real.
    • By default, the IDENTITY column has the NOT NULL constraint.
    • A table can contain only one IDENTITY column.
    • The method of deleting an IDENTITY column is the same as that of deleting an ordinary column. When a column is deleted, the implicit sequence of the IDENTITY column is automatically deleted.
    • The IDENTITY column cannot be specified together with the SET DEFAULT action.
    • The type of the implicit sequence that is automatically created is LARGE SEQUENCE.
    • You cannot use DROP LARGE SEQUENCE or ALTER LARGE SEQUENCE to modify the implicit sequence of an identity.
    • The [ SCALE [ EXTEND | NOEXTEND ] | NOSCALE ] clause is available only when an IDENTITY column is created in a centralized system in A-compatible mode.
    • In a fully-encrypted database, the encrypted IDENTITY column cannot be specified during table creation.
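
    For example, a minimal sketch of adding an IDENTITY column, assuming a compatibility mode that supports IDENTITY columns (the table name and identity options are illustrative):

    gaussdb=# CREATE TABLE t_ident(val INT);
    -- Values come from the implicit sequence unless an explicit value is supplied.
    gaussdb=# ALTER TABLE t_ident ADD COLUMN id INT GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1);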
  • AUTO_INCREMENT

    Specifies an auto-increment column.

    For details, see AUTO_INCREMENT.

  • COMMENT [ = ] 'string'
    • The COMMENT [ = ] 'string' clause is used to add comments to a table.
    • The COMMENT 'string' in column_constraint indicates that comments are added to a column.
    • The COMMENT 'string' in table_constraint indicates that comments are added to the indexes corresponding to the primary key and unique key.

    For details, see COMMENT[=]'string'.

  • UNIQUE [KEY] index_parameters

    UNIQUE KEY can be used only when sql_compatibility is set to 'B'; it has the same semantics as UNIQUE.

    Specifies that a group of one or more columns of a table can contain only unique values.
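
    For example, a minimal sketch of adding a unique constraint (the table and column names are illustrative):

    gaussdb=# CREATE TABLE t_uni(c1 INT, c2 INT);
    gaussdb=# ALTER TABLE t_uni ADD CONSTRAINT uq_t_uni_c2 UNIQUE (c2);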

  • UNIQUE [ index_name ][ USING method ]( {{ column_name [ ( length ) ] | ( expression ) } [ ASC | DESC ] }[, ... ] ) index_parameters

    Specifies that a group of one or more columns of a table can contain only unique values.

    column_name (length) is the prefix key. For details, see column_name(length).

    index_name indicates the index name.

    • The index_name is supported only in B-compatible databases (that is, sql_compatibility set to 'B').
    • For a unique key constraint, if both constraint_name and index_name are specified, index_name is used as the index name.
  • PRIMARY KEY index_parameters

    PRIMARY KEY [ USING method ] ( { column_name [ ASC | DESC ] } [, ... ] ) index_parameters [NOT ENFORCED]

    Specifies that a column or columns of a table can contain only unique (non-duplicate) and non-NULL values.

  • REFERENCES reftable [ ( refcolumn ) ] [ MATCH matchtype ] [ ON DELETE action ] [ ON UPDATE action ] (column constraint)

    FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] [ MATCH matchtype ] [ ON DELETE action ] [ ON UPDATE action ] (table constraint) [NOT ENFORCED]

    The foreign key constraint requires that the values in the specified column or group of columns of the new table match values in the referenced columns of the referenced table. If refcolumn is omitted, the primary key of reftable is used. The referenced columns must be the columns of a unique constraint or of the primary key in the referenced table. A foreign key constraint cannot be defined between a temporary table and a permanent table.

    There are three types of matching between a reference column and a referenced column:

    • MATCH FULL: does not allow a multicolumn foreign key to mix NULL and non-NULL values; either all foreign key columns are NULL or none of them are.
    • MATCH SIMPLE (default): allows any foreign key column to be NULL; if any of them is NULL, the row is not required to match the referenced table.
    • MATCH PARTIAL: This option is not supported currently.

    In addition, when certain operations are performed on data in the referenced table, corresponding actions are taken on the referencing rows in the new table. ON DELETE specifies the action performed after a referenced row in the referenced table is deleted. ON UPDATE specifies the action performed when referenced column data in the referenced table is updated. The possible actions for the ON DELETE and ON UPDATE clauses are as follows (a sketch follows the list):

    • NO ACTION (default): When a referenced row is deleted or updated, an error indicating that the foreign key constraint is violated is raised. If the constraint is deferrable and referencing rows still exist, the error is raised when the constraint is checked.
    • RESTRICT: When a referenced row is deleted or updated, an error indicating that the foreign key constraint is violated is raised. It is the same as NO ACTION except that the check is not deferrable.
    • CASCADE: deletes any rows in the new table that reference the deleted row, or updates the referencing column values in the new table to the new values of the referenced columns.
    • SET NULL: sets the referencing columns in the new table to NULL.
    • SET DEFAULT: sets the referencing columns in the new table to their default values.
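
    A minimal sketch of adding a foreign key with a cascading delete (the table names are illustrative):

    gaussdb=# CREATE TABLE t_dept(id INT PRIMARY KEY);
    gaussdb=# CREATE TABLE t_emp(id INT, dept_id INT);
    -- Deleting a department also deletes its employees; updating a referenced t_dept.id value is rejected.
    gaussdb=# ALTER TABLE t_emp ADD CONSTRAINT fk_emp_dept FOREIGN KEY (dept_id) REFERENCES t_dept(id) ON DELETE CASCADE ON UPDATE RESTRICT;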
  • DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE

    Sets whether the constraint can be deferrable.

    • DEFERRABLE: the constraint check can be deferred to the end of the transaction and controlled with SET CONSTRAINTS.
    • NOT DEFERRABLE: the constraint is checked immediately after each command.
    • INITIALLY IMMEDIATE: the constraint is checked after each statement.
    • INITIALLY DEFERRED: the constraint is checked only when the transaction ends.
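
    For example, a minimal sketch of a deferred constraint check using a foreign key (the table names are illustrative):

    gaussdb=# CREATE TABLE t_parent(id INT PRIMARY KEY);
    gaussdb=# CREATE TABLE t_child(pid INT);
    -- Within a transaction, this foreign key is checked only at COMMIT, so rows can be inserted in any order.
    gaussdb=# ALTER TABLE t_child ADD CONSTRAINT fk_child_parent FOREIGN KEY (pid) REFERENCES t_parent(id) DEFERRABLE INITIALLY DEFERRED;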
  • WITH ( {storage_parameter = value} [, ... ] )

    Specifies an optional storage parameter for a table or an index.

  • tablespace_name

    Specifies the name of the tablespace where the index is located.

  • COMPRESS|NOCOMPRESS
    • NOCOMPRESS: If the NOCOMPRESS keyword is specified, the existing compression feature of the table will not be changed.
    • COMPRESS: If the COMPRESS keyword is specified, table compression is triggered by batch tuple insertion. This parameter takes effect only for column-store compression.
  • new_table_name

    Specifies the new table name.

  • new_column_name

    Specifies the new name of a specific column in a table.

  • new_constraint_name

    Specifies the new name of a table constraint.

  • new_schema

    Specifies the new schema name.

  • CASCADE

    Automatically drops objects that depend on the dropped column or constraint (for example, views referencing the column).

  • RESTRICT
    Refuses to drop the column if any other columns or constraints reference it. RESTRICT is the default; it is used when CASCADE is not specified. An example of the statement is as follows:
    ALTER TABLE table_name [DROP [column] col_name [CASCADE | RESTRICT]];
    table_name indicates the table name, and col_name indicates the column name.
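
    For example, a minimal sketch of dropping a column that a view depends on (the object names are illustrative):

    gaussdb=# CREATE TABLE t_dep(c1 INT, c2 INT);
    gaussdb=# CREATE VIEW v_dep AS SELECT c2 FROM t_dep;
    -- RESTRICT (the default) would fail here because v_dep depends on c2; CASCADE also drops the view.
    gaussdb=# ALTER TABLE t_dep DROP COLUMN c2 CASCADE;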
  • FIRST

    Adds or moves the column to the first position in the table.

  • AFTER column_name

    Adds or moves the column to the position immediately after column_name.

    • This syntax is supported only in B-compatible databases (that is, sql_compatibility is set to 'B').
    • FIRST | AFTER column_name cannot be used on encrypted columns.
    • The position of a column cannot be changed (including adding a column at a specified position or moving an existing column) in a table that rules depend on.
    • FIRST | AFTER column_name cannot be used in foreign tables.
    • Columns of the SET type cannot be moved to a specified position.
  • schema_name

    Specifies the schema name of a table.

  • IF NOT EXISTS

    When IF NOT EXISTS is specified and a column with the same name already exists, a notice indicating that the column already exists is returned. When IF NOT EXISTS is not specified and a column with the same name exists, an error is returned.
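
    For example, a minimal sketch, assuming the ADD COLUMN ... IF NOT EXISTS form (the table and column names are illustrative):

    gaussdb=# CREATE TABLE t_ine(c1 INT);
    -- c1 already exists, so only a notice is returned and the table is unchanged.
    gaussdb=# ALTER TABLE t_ine ADD COLUMN IF NOT EXISTS c1 INT;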

  • [DEFAULT] CHARACTER SET | CHARSET [ = ] default_charset

    Changes the default character set of the table. If this parameter is specified separately, the default collation of the table is set to the default collation of the specified character set.

    This syntax is supported only when sql_compatibility is set to 'B'.

  • [DEFAULT] COLLATE [ = ] default_collation

    Changes the default collation of the table. If this parameter is specified separately, the default character set of the table is set to the character set corresponding to the specified collation.

    This syntax is supported only when sql_compatibility is set to 'B'. For details about the collation, see Table 1.

    If the character set or collation of a table is not explicitly specified and the default character set or collation of the schema is specified, the character set or collation of the table is inherited from the schema. If the default character set or collation of a schema does not exist, the character set and collation of the table inherit those of the current database when b_format_behavior_compat_options contains 'default_collation'.
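
    For example, a minimal sketch, assuming a B-compatible database whose encoding is UTF8 (the table name is illustrative):

    gaussdb=# CREATE TABLE t_btab(c1 TEXT);
    -- Change the default character set of the table; the default collation is reset to the default collation of utf8mb4.
    gaussdb=# ALTER TABLE t_btab DEFAULT CHARSET = utf8mb4;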

  • COLVIEW | NOCOLVIEW

    Specifies whether to declare the table as an IMCV table or a non-IMCV table. For details, see section "Hybrid Transactional/Analytical Processing" in Feature Guide.

Examples of Changing a Table

  • Rename a table.
    gaussdb=# CREATE TABLE aa(c1 int, c2 int);
    gaussdb=# ALTER TABLE IF EXISTS aa RENAME TO test_alt1;
  • Change the schema of a table.
    -- Create the test_schema schema.
    gaussdb=# CREATE SCHEMA test_schema;
    
    -- Change the schema of the test_alt1 table to test_schema.
    gaussdb=# ALTER TABLE test_alt1 SET SCHEMA test_schema;
    
    -- Query table information.
    gaussdb=# SELECT schemaname,tablename FROM pg_tables WHERE tablename = 'test_alt1';
     schemaname  | tablename 
    -------------+-----------
     test_schema | test_alt1
    (1 row)
  • Change the owner of a table.
    -- Create user test_user.
    gaussdb=# CREATE USER test_user PASSWORD '********';
    
    -- Change the owner of the test_alt1 table to test_user.
    gaussdb=# ALTER TABLE IF EXISTS test_schema.test_alt1 OWNER TO test_user;
    
    -- Query.
    gaussdb=# SELECT tablename, schemaname, tableowner FROM pg_tables WHERE tablename = 'test_alt1';
     tablename | schemaname  | tableowner 
    -----------+-------------+------------
     test_alt1 | test_schema | test_user
    (1 row)
  • Change the tablespace of a table.
    -- Create the tbs_data1 tablespace.
    gaussdb=# CREATE TABLESPACE tbs_data1 RELATIVE LOCATION 'tablespace1/tbs_data1';
    
    -- Change the tablespace of the test_alt1 table to tbs_data1.
    gaussdb=# ALTER TABLE test_schema.test_alt1 SET TABLESPACE tbs_data1;
    
    -- Query.
    gaussdb=# SELECT tablename, tablespace FROM pg_tables WHERE tablename = 'test_alt1';
     tablename | tablespace 
    -----------+------------
     test_alt1 | tbs_data1
    (1 row)
    
    -- Create a table.
    gaussdb=# CREATE TABLE test(c1 int, c2 int);
    -- Change the space of the test table to tbs_data1 online.
    gaussdb=# ALTER TABLE ONLINE test SET TABLESPACE tbs_data1;
    
    -- Query.
    gaussdb=# SELECT tablename, tablespace FROM pg_tables WHERE tablename = 'test';
     tablename | tablespace 
    -----------+------------
      test      | tbs_data1
    (1 row)
    
    -- Drop.
    gaussdb=# DROP TABLE test;
    gaussdb=# DROP TABLE test_schema.test_alt1;
    gaussdb=# DROP TABLESPACE tbs_data1;
    gaussdb=# DROP SCHEMA test_schema;
    gaussdb=# DROP USER test_user;

Examples of Changing a Column

  • Change column names.
    -- Create a table.
    gaussdb=# CREATE TABLE test_alt2(c1 INT,c2 INT);
    
    -- Change column names.
    gaussdb=# ALTER TABLE test_alt2 RENAME c1 TO id;
    gaussdb=# ALTER TABLE test_alt2 RENAME COLUMN c2 to areaid; 
    
    -- Query.
    gaussdb=# \d test_alt2
       Table "public.test_alt2"
     Column |  Type   | Modifiers 
    --------+---------+-----------
     id     | integer | 
     areaid | integer | 
  • Add columns.
    -- Add a column to the test_alt2 table.
    gaussdb=# ALTER TABLE IF EXISTS test_alt2 ADD COLUMN name VARCHAR(20);
    
    -- Query.
    gaussdb=# \d test_alt2
               Table "public.test_alt2"
      Column  |         Type          | Modifiers 
    ----------+-----------------------+-----------
     id       | integer               | 
     areaid   | integer               | 
     name     | character varying(20) |
  • Add the AUTO_INCREMENT column.
    -- Create a B-compatible database, connect to it, and create a table.
    gaussdb=# CREATE DATABASE test DBCOMPATIBILITY = 'B';
    gaussdb=# \c test
    test=# CREATE TABLE test_autoinc(col1 int);
    
    -- Insert a data record.
    test=# INSERT INTO test_autoinc(col1) VALUES(1);
    
    -- Add a local auto-increment column, which starts from 1.
    test=# ALTER TABLE test_autoinc ADD COLUMN col int primary key AUTO_INCREMENT;
    test=# SELECT col,col1 FROM test_autoinc ORDER BY 2,1;
     col | col1
    -----+------
       1 |    1
    (1 row)
    
    -- Set the next auto-increment value to 10.
    test=# ALTER TABLE test_autoinc AUTO_INCREMENT = 10;
    
    -- Enter NULL to trigger auto-increment. The auto-increment value is 10.
    test=# INSERT INTO test_autoinc(col, col1) VALUES(NULL,2);
    
    -- Enter 0 to trigger auto-increment. The auto-increment value is 11.
    test=# INSERT INTO test_autoinc(col, col1) VALUES(0,3);
    
    test=# SELECT col,col1 FROM test_autoinc ORDER BY 2,1;
     col | col1
    -----+------
       1 |    1
      10 |    2
      11 |    3
    (3 rows)
    
    -- Drop.
    test=# DROP TABLE test_autoinc;
    
    -- Switch to the default database. Change the database name based on actual situation.
    test=# \c postgres;
    gaussdb=# DROP DATABASE test;
  • Change the data type of a column.
    -- Change the type of the name column in the test_alt2 table.
    gaussdb=# ALTER TABLE test_alt2 MODIFY name VARCHAR(50);
    
    -- Query.
    gaussdb=# \d test_alt2
              Table "public.test_alt2"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | 
     areaid | integer               | 
     name   | character varying(50) | 
    
    -- Change the type of the name column in the test_alt2 table online.
    gaussdb=# ALTER TABLE ONLINE test_alt2 MODIFY name VARCHAR(60);
    
    -- Query.
    gaussdb=# \d test_alt2
              Table "public.test_alt2"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | 
     areaid | integer               | 
     name   | character varying(60) | 
    -- Change the type of the name column in the test_alt2 table.
    gaussdb=# ALTER TABLE test_alt2 ALTER COLUMN name TYPE VARCHAR(25);
    
    -- Query.
    gaussdb=# \d test_alt2
              Table "public.test_alt2"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | 
     areaid | integer               | 
     name   | character varying(25) | 
    
    -- Change the type of the name column in the test_alt2 table online.
    gaussdb=# ALTER TABLE ONLINE test_alt2 ALTER COLUMN name TYPE VARCHAR(35);
    
    -- Query.
    gaussdb=# \d test_alt2
              Table "public.test_alt2"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | 
     areaid | integer               | 
     name   | character varying(35) | 
  • Drop a column.
    -- Drop the areaid column from test_alt2.
    gaussdb=# ALTER TABLE test_alt2 DROP COLUMN areaid;
    
    -- Query.
    gaussdb=# \d test_alt2
              Table "public.test_alt2"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | 
     name   | character varying(25) | 
  • Change the column storage mode.
    -- View table details.
    gaussdb=# \d+ test_alt2
                                  Table "public.test_alt2"
     Column |         Type          | Modifiers | Storage  | Stats target | Description 
    --------+-----------------------+-----------+----------+--------------+-------------
     id     | integer               |           | plain    |              | 
     name   | character varying(25) |           | extended |              | 
    Has OIDs: no
    Options: orientation=row, compression=no, storage_type=USTORE, segment=off
    
    -- Change the storage mode of the name column in the test_alt2 table.
    gaussdb=# ALTER TABLE test_alt2 ALTER COLUMN name SET STORAGE PLAIN;
    
    -- Query.
    gaussdb=# \d+ test_alt2
                                 Table "public.test_alt2"
     Column |         Type          | Modifiers | Storage | Stats target | Description 
    --------+-----------------------+-----------+---------+--------------+-------------
     id     | integer               |           | plain   |              | 
     name   | character varying(25) |           | plain   |              | 
    Has OIDs: no
    Options: orientation=row, compression=no, storage_type=USTORE, segment=off
    
    -- Drop.
    gaussdb=# DROP TABLE test_alt2;
  • Move a column to a specified position.
    -- Create a B-compatible database.
    gaussdb=# CREATE DATABASE test DBCOMPATIBILITY 'B';
    
    -- Connect to the test database and create the tbl_test table.
    gaussdb=# \c test
    test=# CREATE TABLE tbl_test(id int, name varchar(20));
    
    -- Change the type of the name column in the tbl_test table and move it to the beginning.
    test=# ALTER TABLE tbl_test MODIFY COLUMN name varchar(25) FIRST;
    
    -- Query.
    test=# \d tbl_test;
              Table "public.tbl_test"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     name   | character varying(25) | 
     id     | integer               | 
    
    -- Change the type of the name column in the tbl_test table and move it after the id column.
    test=# ALTER TABLE tbl_test MODIFY COLUMN name varchar(10) AFTER id;
    
    -- Query.
    test=# \d tbl_test;
              Table "public.tbl_test"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | 
     name   | character varying(10) | 
    
    -- Drop the tbl_test table.
    test=# DROP TABLE tbl_test;
    
    -- Switch to the default database and delete the test database. (Switch to the corresponding database as required.)
    test=# \c postgres
    gaussdb=# DROP DATABASE test;

Examples of Changing a Constraint

  • Add a NOT NULL constraint to a column.
    -- Create a table.
    gaussdb=# CREATE TABLE test_alt3(pid INT, areaid CHAR(5), name VARCHAR(20));
    
    -- Add a NOT NULL constraint to pid.
    gaussdb=# ALTER TABLE test_alt3 MODIFY pid NOT NULL;
    
    -- Query.
    gaussdb=# \d test_alt3
              Table "public.test_alt3"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     pid    | integer               | not null
     areaid | character(5)          | 
     name   | character varying(20) |
    
    -- Cancel the NOT NULL constraint on a column.
    gaussdb=# ALTER TABLE test_alt3 MODIFY pid NULL;
    
    -- Add a NOT NULL constraint to pid online.
    gaussdb=# ALTER TABLE ONLINE test_alt3 MODIFY pid NOT NULL;
    
    -- Query.
    gaussdb=# \d test_alt3
              Table "public.test_alt3"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     pid    | integer               | not null
     areaid | character(5)          | 
     name   | character varying(20) |
  • Cancel the NOT NULL constraint on a column.
    gaussdb=# ALTER TABLE test_alt3 MODIFY pid NULL;
    -- Query.
    gaussdb=# \d test_alt3
              Table "public.test_alt3"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     pid    | integer               | 
     areaid | character(5)          | 
     name   | character varying(20) | 
  • Change the default value of a column.
    -- Change the default value of areaid in the test_alt3 table.
    gaussdb=# ALTER TABLE test_alt3 ALTER COLUMN areaid SET DEFAULT '00000';
    
    -- Query.
    gaussdb=# \d test_alt3
                     Table "public.test_alt3"
     Column |         Type          |        Modifiers        
    --------+-----------------------+-------------------------
     pid    | integer               | 
     areaid | character(5)          | default '00000'::bpchar
     name   | character varying(20) |
    -- Drop the default value of areaid.
    gaussdb=# ALTER TABLE test_alt3 ALTER COLUMN areaid DROP DEFAULT;
    
    -- Query.
    gaussdb=# \d test_alt3
              Table "public.test_alt3"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     pid    | integer               | 
     areaid | character(5)          | 
     name   | character varying(20) |
  • Add a table-level constraint.
    • Directly add a constraint.
      -- Add a primary key constraint to the table.
      gaussdb=# ALTER TABLE test_alt3 ADD CONSTRAINT pk_test3_pid PRIMARY KEY (pid);
      
      -- Query.
      gaussdb=# \d test_alt3
                Table "public.test_alt3"
       Column |         Type          | Modifiers 
      --------+-----------------------+-----------
       pid    | integer               | not null
       areaid | character(5)          | 
       name   | character varying(20) | 
      Indexes:
          "pk_test3_pid" PRIMARY KEY, ubtree (pid) WITH (storage_type=USTORE) TABLESPACE pg_default
      
      -- Drop the table.
      gaussdb=# DROP TABLE IF EXISTS test_alt3;
      
      -- Create a table.
      gaussdb=# CREATE TABLE test_alt3(pid INT, areaid CHAR(5), name VARCHAR(20));
      
      -- Add a primary key constraint to the table online.
      gaussdb=# ALTER TABLE ONLINE test_alt3 ADD CONSTRAINT pk_test3_pid PRIMARY KEY (pid);
      
      -- Query.
      gaussdb=# \d test_alt3
                Table "public.test_alt3"
       Column |         Type          | Modifiers 
      --------+-----------------------+-----------
       pid    | integer               | not null
       areaid | character(5)          | 
       name   | character varying(20) | 
      Indexes:
          "pk_test3_pid" PRIMARY KEY, ubtree (pid) WITH (storage_type=USTORE) TABLESPACE pg_default
    • Create an index and then add constraints.
      -- Create a table.
      gaussdb=# CREATE TABLE test_alt4(c1 INT, c2 INT);
      
      -- Create an index.
      gaussdb=# CREATE UNIQUE INDEX pk_test4_c1 ON test_alt4(c1);
      
      -- Associate the created index when adding a constraint.
      gaussdb=# ALTER TABLE test_alt4 ADD CONSTRAINT pk_test4_c1 PRIMARY KEY USING INDEX pk_test4_c1;
      
      -- Query.
      gaussdb=# \d test_alt4
         Table "public.test_alt4"
       Column |  Type   | Modifiers 
      --------+---------+-----------
       c1     | integer | not null
       c2     | integer | 
      Indexes:
          "pk_test4_c1" PRIMARY KEY, ubtree (c1) WITH (storage_type=USTORE) TABLESPACE pg_default
      
      -- Drop.
      gaussdb=# DROP TABLE test_alt4;
  • Delete a table-level constraint.
    -- Drop a constraint.
    gaussdb=# ALTER TABLE test_alt3 DROP CONSTRAINT IF EXISTS pk_test3_pid;
    
    -- Query.
    gaussdb=# \d test_alt3
              Table "public.test_alt3"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     pid    | integer               | not null
     areaid | character(5)          | 
     name   | character varying(20) | 
    
    -- Drop.
    gaussdb=# DROP TABLE test_alt3;
  • Rename a constraint.
    -- Create the tbl_test1 table and specify the primary key constraint.
    gaussdb=# CREATE TABLE tbl_test1(
        id int,
        name varchar(50),
        CONSTRAINT aaa PRIMARY KEY (id)
    );
    
    -- Rename the constraint.
    gaussdb=# ALTER TABLE tbl_test1 RENAME CONSTRAINT aaa TO pk_test1_id;
    
    -- View the information.
    gaussdb=# \d tbl_test1;
              Table "public.tbl_test1"
     Column |         Type          | Modifiers 
    --------+-----------------------+-----------
     id     | integer               | not null
     name   | character varying(50) | 
    Indexes:
        "pk_test1_id" PRIMARY KEY, ubtree (id) WITH (storage_type=USTORE) TABLESPACE pg_default
    
    -- Drop the table.
    gaussdb=# DROP TABLE tbl_test1;

Helpful Links

CREATE TABLE and DROP TABLE