Updated on 2025-07-22 GMT+08:00

New Features in 9.1.0.x

The beta features discussed below are not available for commercial use. Contact technical support before using these features.

Patch 9.1.0.218 (May 2025)

This is a patch version that fixes known issues.

Table 1 New features/Resolved issues in patch 9.1.0.218

New features: none.

Resolved issues (handling method: upgrade to 9.1.0.218 or later):

  1. Write locks occur in high-concurrency scenarios once LLVM is enabled.
     Cause: After LLVM is enabled, the operating system (OS) reclaims memory when an application repeatedly allocates and releases memory under high concurrency. While memory is being reclaimed, the mmap write lock is held, which blocks other threads from allocating new memory and degrades memory access performance.
     Affected versions: versions earlier than 9.1.0.218
  2. Hstore and time series tables are no longer used.
     Description: Time series tables, column-store delta tables, and historical HStore tables are no longer used in clusters of version 9.1.0.218 or later. They are replaced by the HStore Opt table.
     Affected versions: -
  3. The error "invalid memory alloc request size" is reported during presort execution.
     Cause: During runtime filter presorting, the ID of another column is obtained by mistake, so the obtained attlen is incorrect and the wrong function is used to read the data length. When attlen is -1, the system incorrectly uses the single-byte processing function to read GBK-encoded Chinese characters, calculates a negative character length, and fails to allocate memory.
     Affected versions: 9.1.0.210
  4. After enable_hstore_binlog_table is enabled and services run for a long time, pg_csnlog files accumulate on the standby node. Even after the files are cleared, the space is not reclaimed.
     Cause: To prevent unwanted recycling of CSN logs, at least one VACUUM operation is required. During this process, pg_binlog_slots on the primary node collects oldestxmin, but the standby node does not, so the oldestxmin used for reclaiming CSN logs stays 0 and CKP does not reclaim CSN logs.
     Affected versions: 9.1.0.201
  5. The memory estimated by ANALYZE deviates significantly from the actual memory usage, causing abnormal queuing on the CCN.
     Cause: If the defined column width exceeds 1024 but the stored data is short or empty, ANALYZE estimates an excessively large memory width.
     Affected versions: 8.1.3
  6. When UPSERT operations are executed in batches, the binlogs of the source and target tables are not synchronized.
     Cause: When an UPSERT updates all columns, the old binlog record is deleted but no new one is written, so the binlogs of the source and target tables fall out of sync.
     Affected versions: versions earlier than 9.1.0.218
  7. Fixed the issue where autovacuum and autoanalyze of the original primary cluster are disabled after a switchover.
     Cause: The autovacuum parameter of the primary cluster is originally on. After a switchover, it is changed to off; after another switchover, the values of autovacuum and autoanalyze are not restored as expected.
     Affected versions: 9.1.0
  8. Fixed the issue where the value of enable_orc_cache is automatically modified during upgrades.
     Cause: After a cluster is upgraded to 9.1.0.1 or later, the value of enable_orc_cache is changed from on to off.
     Affected versions: 9.1.0.210
  9. The error "stream plan check failed" is reported after an upgrade.
     Cause: After the value partition plan changes, the value partition plan generated by WindowAgg is inconsistent between the upper and lower nodes.
     Affected versions: 8.2.1.225
  10. After a DR switchover, the error message "xlog flush request" is displayed in the primary cluster.
      Cause: Only the primary CN in the DR cluster is backed up, and the full or incremental backup type is recorded in the metadata file of the backup set. When the DR cluster is restored, this metadata determines whether the CN directory is cleared (it is cleared during full restoration). Because CN IDs in the primary and DR clusters may differ, querying the backup type by ID can return incorrect information. As a result, necessary files are not cleared during full restoration, and the residual files affect services.
      Affected versions: 9.1.0
  11. Turbo engine hardening.
      Cause: Some unconventional scenarios (for example, multiple UNION ALL branches with inconsistent data types) were not considered, causing exceptions during queries.
      Affected versions: 9.1.0

Patch 9.1.0.215 (March 2025)

This is a patch version that fixes known issues.

Table 2 New features/Resolved issues in patch 9.1.0.215

New features: none.

Resolved issues (handling method: upgrade to 9.1.0.215 or later):

  1. Fixed the problem where the intelligent O&M scheduler triggered data flushes to disks.
     Cause: The intelligent O&M scheduler sends the pgxc_parallel_query function to each DN to check table sizes. The DN query results include auxiliary tables such as CUDesc and Delta for column-store tables, and each partition also has its own CUDesc and Delta tables, so the final result set is large. When the DNs send the summarized results to the CN, an oversized result set triggers a disk flush, which can slow down the query or fill up the disk.
     Affected versions: 8.3.0.100
  2. Fixed the issue where the storage-compute decoupled V3 table occasionally crashed when DiskCache was disabled.
     Cause: If a storage-compute decoupled V3 table is used with DiskCache disabled and the service thread fails to obtain space while reading OBS objects, the transaction rolls back, which can cause errors.
     Affected versions: 9.1.0
  3. Fixed the issue where the CM component could not restart the cluster when a node was subhealthy (hung).
     Cause: cm_ctl starts a cluster in two phases: it first checks each node's status over PSSH and then deletes each node's start and stop files over PSSH. The SSH command misreads its parameters, so checking a single faulty node takes more than 300 seconds. With multiple faulty nodes the time grows quickly, and eventually the command fails to be delivered.
     Affected versions: 8.1.3
  4. Fixed the issue where, in data lake scenarios, more than 3,000 concurrent requests slowed connections between CNs and DNs, degrading overall performance.
     Cause: Each file is allocated to only one DN, but in the execution plan the CN still establishes connections with all DNs. In data lake scenarios, a single cluster must support more than 3,000 concurrent requests, which increases the overhead.
     Affected versions: 9.1.0
  5. Fixed the issue where writing an overlong field to a data lake occasionally caused the DN process to fail.
     Cause: If a field written to the data lake is too long, an error is reported. During rollback, the cleanup logic accesses memory that has already been freed, causing a function error that brings down the DN process.
     Affected versions: 9.1.0
  6. Fixed the SQL injection vulnerability (CVE-2025-1094).
     Cause: The product may be affected by the PostgreSQL SQL injection vulnerability (CVE-2025-1094).
     Affected versions: 9.1.0
  7. Fixed the issue where query errors occasionally occurred in mixed row-store and column-store scenarios when the NestLoop and Stream operators were used.
     Cause: The NestLoop operator contains a Stream operator, so the Materialize operator must run first on the inner side. In a mixed row-store and column-store plan, a Row Adapter operator appears above the Materialize operator and prevents it from running, leading to a hang.
     Affected versions: 8.3.0.108
  8. Fixed the issue where DN connections were checked during the communication thread's idle time, occasionally slowing database responses.
     Cause: The communication thread checks all connections to other DNs during idle periods, and each check takes 1 ms. In large clusters with more than 100 connections, this can take more than 100 ms and delay the next packet, causing occasional slowdowns for services that require fast responses.
     Affected versions: 9.1.0

Patch 9.1.0.213 (February 2025)

This is a patch version that fixes known issues.

Table 3 New features/Resolved issues in patch 9.1.0.213

New features:

  1. After migrating Hive data to GaussDB(DWS), you can choose whether to automatically convert empty strings to 0 in MySQL compatibility mode.

Resolved issues (handling method: upgrade to 9.1.0.213 or later):

  1. Temporary tables are not cleared when external tables are involved in INSERT OVERWRITE INTO over a JDBC connection.
     Cause: A JDBC connection goes through multiple phases (parser, bind, and exec). In the parser phase, the SQL statement is rewritten and a temporary table is created for INSERT OVERWRITE. If an external foreign table is used, the rewrite is repeated in the exec phase, so the temporary table from the parser phase is not deleted. Two temporary tables are created and only the one from the later phase is deleted, leaving a residual temporary table.
     Affected versions: 9.1.0.211
  2. The service receives a 100% skew alarm, and the skew content is OVERWRITE temporary table skew.
     Cause: When INSERT OVERWRITE is executed or the distribution key is changed, a temporary common table is generated to store the data of the source table. If the data is skewed, an alarm is raised for the temporary table name, which causes the analysis process to fail.
     Affected versions: 9.1.0.211
  3. Resolved the result set problem caused by data precision alignment in the Turbo engine.
     Cause: If some DNs scan zero rows in the base table, the result set they return to the CN is NULL with a default of 0 decimal places. When this is combined with the sum results returned by DNs that contain data, the precision of the aligned data is calculated incorrectly, the final sum is stored as int64, and the result set is unexpected.
     Affected versions: 9.1.0.212
  4. Fixed the issue where the system did not check whether the partition path was empty when an external foreign table validated partition paths.
     Cause: When an external table on an HDFS server checks whether the partition path matches the partition field definition, it does not check whether the partition path is empty.
     Affected versions: 9.1.0.211
  5. Fixed the issue where a null return value was not handled when an external foreign table retrieved partition information from Hive MetaStore.
     Cause: The partition values obtained from Hive MetaStore are null.
     Affected versions: 9.1.0.212
  6. The time zone conversion result of the convert_tz function does not meet expectations in certain scenarios.
     Cause: MySQL compatibility is not considered when the convert_tz function is used, so the result is not as expected.
     Affected versions: 9.1.0.210
  7. Fixed the issue where the substr result set was incorrect when LLVM was enabled.
     Cause: GBK data can be imported into a database that uses ASCII encoding. With LLVM enabled, the LLVM lower layer does not validate calls to substr(a, start_index, len) on GBK data columns and reuses the UTF-8 character-width logic, so the width of a GBK character is miscalculated as 4 instead of 2.
     Affected versions: 8.1.3.x
  8. Resolved the result set problem caused by incorrect string processing when the inlist-to-hash feature is used in the Turbo engine.
     Cause: When a character string changes from attlen 16 to attlen -1 in the uniq hash table of the Turbo engine, the strlen interface is incorrectly used to determine the string length. In the hstore_opt delta table, if attlen is changed from -1 to a fixed length, the batch must be converted first.
     Affected versions: 9.1.0.212

Patch 9.1.0.212 (January 2025)

This is a patch version that fixes known issues.

Real-time data warehouse

  1. Resolved the result set issue when date-type queries are pushed down in MySQL-compatible mode.
  2. Fixed the issue that the result set is incorrect when limit is set to null or all.
  3. Fixed the issue of incorrect statistics resetting, inability to trigger auto vacuum, and delayed space reclamation.
  4. Resolved the deadlock problem related to refreshing materialized views and concurrent DDL operations.
  5. Resolved the issue where data in the original table was mistakenly deleted when a temporary table was manually removed after an error occurred during the redistribution of a cold or hot table during scale-out.
  6. Resolved the problem of local disk space increase during cold and hot table scale-out.

Lakehouse

  1. Supported the special character ';' in the path for a foreign table to access OBS.
  2. Optimized the task allocation for querying Parquet foreign tables to enhance the disk cache hit ratio.

Backup and restoration

  1. Fixed the problem of intermediate status files remaining during backup and restoration, which would occupy disk space.
  2. Resolved the backup failure issue when the elastic VW is present.
  3. Supported backup and restoration of cold and hot tables, which prolongs the backup and restoration time.

Ecosystem compatibility

  1. Fixed the issue that the PostGIS plug-in may fail to be created.

O&M improvement

  1. Resolved the issue that SQL monitoring metrics are incompletely collected.
  2. Resolved the memory leak problem of the secondary node.
  3. Fixed the issue that intelligent O&M is not started on time.
  4. Fixed the problem where the scheduler could not be properly scheduled due to residual data caused by a failed database drop.
  5. Fixed the issue of high communication memory usage during high concurrency.
  6. Resolved the performance problem caused by residual sequences on the GTM when an exception occurs.

Behavior changes

  1. To prevent errors during complex SQL execution, the predicate-column ANALYZE feature is disabled during upgrades and new installations.
  2. In the previous version, if the network connection to the GTM was abnormal during a drop table operation on a table whose definition contains a sequence column, only a warning was reported; the drop table operation could still succeed, but the sequences might remain on the GTM. In the new version, an error is reported when the table is dropped and the operation must be retried. GaussDB(DWS) now supports dropping tables within transaction blocks: if a drop table statement succeeds but the transaction is rolled back, the sequences are deleted from the GTM while the table still exists on the CN. In this case, drop the table again to avoid errors indicating that the sequences do not exist (see the example after this list).
  3. The truncate operation can proactively terminate SELECT operations when there is a lock conflict; this capability is disabled by default. In the previous version, terminating the session that was executing the SELECT statement caused an error but the connection remained open. In the new version, the session executing the SELECT statement is closed automatically and the service must reconnect.
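The following minimal sketch illustrates the drop-table behavior described in item 2. The table name t_serial and its serial column are hypothetical, and the exact messages depend on the cluster.

     CREATE TABLE t_serial (id serial, val int);
     BEGIN;
     DROP TABLE t_serial;   -- succeeds inside the transaction block; the sequence is removed from the GTM
     ROLLBACK;              -- the table definition is restored on the CN, but the sequence is not
     DROP TABLE t_serial;   -- drop the table again to avoid later errors about missing sequences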

Patch 9.1.0.211 (December 13, 2024)

This is a patch version that fixes known issues.

Version 9.1.0.210 (November 25, 2024)

Storage-compute decoupling

  1. You can use the explain warmup command to preload data into the local disk cache, either at the cold or hot end.
  2. The enhanced elastic VW function offers more flexible ways to distribute services. Services can be distributed to either the primary VW or the elastic VW by CN.
  3. Storage-compute decoupled tables support parallel insert operations, improving data loading performance.
  4. The storage-compute decoupled table has a recycle bin feature. This allows you to quickly recover from misoperations such as dropping or truncating a table or partition.
  5. Both hot and cold tables can utilize disk cache and asynchronous I/Os to improve performance.

Real-time data warehouse

  1. The performance for limit...offset page turning and inlist operations has been significantly improved.
  2. The Binlog feature is now available for commercial use.
  3. Automatic partitioning now supports time columns of both integer and variable-length types.

Lakehouse

  1. Parquet/ORC read and write now support the zstd compression format.
  2. The create table like command now allows using a table from an external schema as the source table (see the sketch after this list).
  3. Foreign tables can be exported in parallel.
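A minimal sketch of the create table like usage in item 2 above; hive_ext_schema and orders are hypothetical names for an external schema and a table visible through it.

     CREATE TABLE local_orders (LIKE hive_ext_schema.orders);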

High availability

  1. Storage-compute decoupled tables and hot and cold tables support incremental backup and restoration.
  2. In storage-compute decoupling scenarios, parallel copy is used to increase backup speed.

Ecosystem compatibility

  1. The system is compatible with the MySQL replace into syntax and the interval time type (see the example after this list).
  2. The pg_get_tabledef export function now displays comments.
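A brief illustration of the MySQL-compatible replace into syntax from item 1; t1 is a hypothetical table with primary key id. The row is inserted if the key does not exist and replaced if it does.

     REPLACE INTO t1 (id, name) VALUES (1, 'alpha');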

O&M and stability improvement

  1. When disk usage is high, data can be dumped from the standby node to OBS.
  2. When the database is about to become read-only, certain statements that write to disks and generate new tables and physical files are intercepted to quickly reclaim disk space and ensure the execution of other statements.
  3. Audit logs can be dumped to OBS.
  4. The lightweight lock view pgxc_lwlocks is added (see the query example after this list).
  5. The common lock view now includes lock acquisition and wait time stamps.
  6. The global deadlock detection function is now enabled by default.
  7. A lock function is added between VACUUM FULL and SELECT.
  8. The expiration time has been added to gs_view_invalid to assist O&M personnel in clearing invalid objects.
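A simple usage sketch for the new lightweight lock view from item 4, assuming it can be queried like other system views; the exact column set depends on the version.

     SELECT * FROM pgxc_lwlocks;   -- inspect current lightweight lock holders and waiters across the cluster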

Constraints

  1. The maximum number of VWs supported is 256, with each VW supporting a maximum of 1,024 DNs. It is best to have no more than 32 VWs, with each VW containing no more than 128 DNs.
  2. OBS storage-compute decoupled tables do not support DR or fine-grained backup and restoration.

Behavior changes

  1. Enabling the max_process_memory adaptation during the upgrade and using the active/standby mode will increase the available memory of DNs.
  2. By default, data consistency check is enabled for data redistribution during scale-out, which increases the scale-out time by 10%.
  3. An hstore_opt table is now created with the Turbo engine enabled, and the compression level keeps the default value middle.
  4. By default, the OBS path of a storage-compute decoupled table is displayed as a relative path.
  5. To use the disk cache, enable the asynchronous I/O parameter.
  6. The interval for clearing indexes of column-store tables has been changed from 1 hour to 10 minutes to quickly clear the occupied index space.
  7. CREATE TABLE and ALTER TABLE do not support columns with the ON UPDATE expression as distribution columns.
  8. During Parquet data query, the timestamp data saved in INT96 format is not adjusted for 8 hours.
  9. max_stream_pool is used to control the number of threads cached in the stream thread pool. The default value is changed from 65525 to 1024 to prevent idle threads from using too much memory.
  10. The track_activity_query_size parameter takes effect upon restart instead of dynamically.
  11. The logical replication function is no longer supported, and an error will be reported when related APIs are called.

Patch 9.1.0.105 (October 23, 2024)

This is a patch version that fixes known issues.

Patch 9.1.0.102 (September 25, 2024)

This is a patch version that fixes known issues.

Upgrade

  1. Upgrade from 9.0.3 to 9.1.0 is supported.

Fixed known issues

  1. Supported alter database xxx rename to yyy in the storage-compute decoupled version (see the example after this list).
  2. Fixed the incorrect space size displayed by \d+ for storage-compute decoupled tables.
  3. Fixed the problem of asynchronous sorting not running after backup and restoration.
  4. Fixed the problem of inability to use Create Table Like syntax after deleting the bitmap index column.
  5. Fixed the performance regression in the Turbo engine's group by scenario caused by hash algorithm conflicts.
  6. The scheduler now handles failed tasks in the same manner as version 8.3.0.
  7. Fixed the problem of pg_stat_object space expansion in fault scenarios.
  8. Fixed the problem of DataArts Studio reporting an error when delivering a Vacuum Full job after upgrading from 8.3.0 to 9.1.0.
  9. Fixed the problem of high CPU and memory usage during JSON field calculation.
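A short example of the rename capability from item 1; mydb and mydb_new are hypothetical database names.

     ALTER DATABASE mydb RENAME TO mydb_new;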

Enhanced functions

  1. ORC foreign tables support the ZSTD compression format.
  2. GIS supports the st_asmvtgeom, st_asmvt, and st_squaregrid functions.

Version 9.1.0.100 (August 12, 2024)

Elastic architecture

  1. Architecture upgrade: The storage-compute decoupling architecture 3.0, based on OBS, introduces layered and elastic computing and storage, with on-demand storage charging to reduce costs and improve efficiency. Multiple virtual warehouses (VWs) can be deployed to enhance service isolation and resolve resource contention.
  2. The elastic VW feature, which is stateless and supports read/write acceleration, addresses issues like insufficient concurrent processing, unbalanced peak and off-peak hours, and resource contention for data loading and analytics. For details, see Elastically Adding or Deleting a Logical Cluster.
  3. Both auto scale-out and classic scale-out are supported when adding or deleting DNs. Auto scale-out does not redistribute data on OBS, while classic scale-out redistributes all data. The system automatically selects the scale-out mode based on the total number of buckets and DNs.
  4. The storage-compute decoupling architecture (DWS 3.0) enhances performance with disk cache and asynchronous I/O read/write. When the disk cache is fully utilized, performance matches that of the storage-compute integration architecture (DWS 2.0).
Figure 1 Decoupled storage and compute

Real-time processing

  1. Launched the vectorized Turbo acceleration engine, doubling TPC-H 1000X performance.
  2. The upgraded version of hstore, hstore_opt, offers a higher compression ratio and, combined with the Turbo engine, reduces storage space by 40% compared with column storage.
  3. With Flink, you can connect directly to DNs to import data into the database. This results in linear performance improvement in batch data import scenarios. For details, see Real-Time Binlog Consumption by Flink.
  4. GaussDB(DWS) supports Binlog (currently in beta) and can be used in conjunction with Flink to enable incremental computing. For details, see Subscribing to Hybrid Data Warehouse Binlog.
  5. This update significantly improves full-column performance while reducing resource consumption.
  6. GaussDB(DWS) supports materialized views (currently in beta). For details, see CREATE MATERIALIZED VIEW.
  7. To improve coarse filtering, the Varchar/text column now supports bitmap index and bloom filter. When creating a table, you must specify them explicitly. For details, see CREATE TABLE.
  8. To enhance performance in topK and join scenarios, the runtime filter feature is now supported. You can learn more about the GUC parameters runtime_filter_type and runtime_filter_ratio in Other Optimizer Options.
  9. GaussDB(DWS) supports asynchronous sorting to enhance the min-max coarse filtering effect of PCK columns.
  10. The performance in the IN scenario is greatly improved.
  11. ANALYZE supports incremental merging of partition statistics, collecting statistics only on changed partitions and reusing historical data, which improves execution efficiency. It collects statistics only on predicate columns (see the sketch after this list).
    • The CREATE TABLE syntax now includes the incremental_analyze parameter to control whether incremental ANALYZE mode is enabled for partitioned tables. For details, see CREATE TABLE.
    • The enable_analyze_partition GUC parameter determines whether to collect statistics on a partition of a table. For details, see Other Optimizer Options.
    • The enable_expr_skew_optimization GUC parameter controls whether expression statistics are used in the skew optimization policy. For details, see Optimizer Method Configuration.
    • For syntax details, see ANALYZE | ANALYSE.
  12. GaussDB(DWS) supports large and wide tables, with a maximum of 5,000 columns.
  13. Create index/reindex supports parallel processing.
  14. The pgxc_get_cstore_dirty_ratio function is added to obtain the dirty page rate of CU, Delta, and CUDesc in the target table (only hstore_opt is supported).
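The following sketch illustrates the incremental partition statistics described in item 11. The table definition is hypothetical, and placing incremental_analyze as a storage parameter in the WITH clause is an assumption; see CREATE TABLE for the exact syntax.

     CREATE TABLE sales_part (
         id int,
         dt date
     )
     WITH (orientation = column, incremental_analyze = on)   -- assumption: enabled as a table storage parameter
     DISTRIBUTE BY HASH (id)
     PARTITION BY RANGE (dt)
     (
         PARTITION p202401 VALUES LESS THAN ('2024-02-01'),
         PARTITION p202402 VALUES LESS THAN ('2024-03-01')
     );

     ANALYZE sales_part;   -- statistics are re-collected only for changed partitions; historical statistics are reused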

Convergence and unification

  1. One-click lakehouse: You can use create external schema to connect to HiveMetaStore metadata, avoiding complex create foreign table operations and reducing maintenance costs (see the sketch after this list). For details, see Accessing HiveMetaStore Across Clusters.
  2. GaussDB(DWS) allows for reading and writing in Parquet/ORC format, as well as overwriting, appending, and multi-level partition read and write.
  3. GaussDB(DWS) allows for reading in Hudi format.
  4. Foreign tables support concurrent execution of ANALYZE, significantly improving the precision and speed of statistics collection. However, foreign tables do not support AutoAnalyze capabilities, so it is recommended to manually perform ANALYZE after data import.
  5. Foreign tables can use the local disk cache for read acceleration.
  6. Predicates such as IN and NOT IN can be pushed down for foreign tables to enhance partition pruning.
  7. Foreign tables now support complex types such as map, struct, and array, as well as bytea and blob types.
  8. Foreign tables support data masking and row-level access control.
  9. GDS now supports the fault tolerance parameter compatible_illegal_char for exporting foreign tables.
  10. The read_foreign_table_file function is added to parse ORC and Parquet files, facilitating fault demarcation.
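A rough sketch of the one-click lakehouse connection from item 1. The server name, addresses, and option values are placeholders, and the exact option list is an assumption; refer to Accessing HiveMetaStore Across Clusters and the CREATE EXTERNAL SCHEMA reference for the authoritative syntax.

     -- Assumed form: map a local external schema onto a Hive database registered in HiveMetaStore.
     CREATE EXTERNAL SCHEMA hive_ext_schema
         WITH SOURCE hive
              DATABASE 'default'                         -- Hive database name (placeholder)
              SERVER obs_server                          -- data source / server name (placeholder)
              METAADDRESS 'thrift://192.168.0.1:9083'    -- HiveMetaStore address (placeholder)
              CONFIGURATION '/opt/hive/conf';            -- configuration path (placeholder)

     -- Tables in the Hive database can then be queried without per-table create foreign table statements.
     SELECT count(*) FROM hive_ext_schema.orders;        -- orders is a hypothetical Hive table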

High availability

  1. The fault recovery speed of the unlogged table is greatly improved.
  2. Backup sets support cross-version restoration. Fine-grained table-level restoration supports restoration of backup sets generated by clusters of earlier versions (8.1.3 and later versions).
  3. Fine-grained table-level restoration supports restoration to a heterogeneous cluster (the number of nodes, DNs, and CNs can be different).
  4. Fine-grained restoration supports permissions and comments. Cluster-level and schema-level physical fine-grained backups support backup permissions and comments, as do table-level restorations and schema-level DR.

Space saving

  1. Column storage now supports JSONB and JSON types, allowing JSON tables to be created as column-store tables, unlike earlier versions which only supported row-store tables.
  2. Hot and cold tables support partition-level index unusable, saving local index space for cold partitions.
  3. The upgraded hstore_opt provides a higher compression ratio and, when used with the Turbo engine, saves 40% more space compared to column storage.

O&M and stability improvement

  1. The query filter is enhanced to support interception by SQL feature, type, source, and processed data volume. For details, see CREATE BLOCK RULE.
  2. GaussDB(DWS) now automatically frees up memory resources by reclaiming idle connections in a timely manner. You can specify the syscache_clean_policy parameter to set the policy for clearing the memory and number of idle DN connections. For details, see Connection Pool Parameters.
  3. The gs_switch_respool function is added for dynamic switching of the resource pool used by queryid and threadid. This enables dynamic adjustment of the resources used by SQL. For details, see Resource Management Functions.
  4. The pg_sequences view is added to display the attributes of sequences accessible to the current user.
  5. Functions are added to allow you to query information about all chunks requested in a specified shared memory context.
  6. The pgxc_query_resource_info function is added to display the resource usage of the SQL statement corresponding to a specified query ID on all DNs. For details, see pgxc_query_resource_info.
  7. The pgxc_stat_get_last_data_access_timestamp function is added to return the last access time of a table. This helps the service to identify and clear tables that have not been accessed for a long time. For details, see pgxc_stat_get_last_data_access_timestamp.
  8. SQL hints support more hints that provide better control over the generation of execution plans. For details, see Configuration Parameter Hints.
  9. Performance fields are added to top SQL statements that are related to syntax parsing and disk cache. This makes it easier to identify performance issues. For details, see Real-time Top SQL.
  10. The preset data masking administrator has the authority to create, modify, and delete data masking policies.
  11. Audit logs can record objects that are deleted in cascading mode.
  12. Audit logs can be dumped to OBS.

Ecosystem compatibility

  1. if not exists can be included in the create schema, create index, and create sequence statements (see the examples after this list).
  2. The merge into statement now allows for specified partitions to be merged. For details, see MERGE INTO.
  3. In Teradata-compatible mode, trailing spaces in strings can be ignored when comparing them.
  4. GUC parameters can be used to determine if the n in varchar(n) will be automatically converted to nvarchar2.
  5. PostGIS has been upgraded to version 3.2.2.
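Brief examples of the if not exists support from item 1; the object names are hypothetical.

     CREATE SCHEMA IF NOT EXISTS analytics;
     CREATE INDEX IF NOT EXISTS idx_t1_c1 ON t1 (c1);
     CREATE SEQUENCE IF NOT EXISTS seq_order_id;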

Restrictions

  1. A maximum of 256 VWs are supported, each with up to 1,024 DNs. It is recommended to have no more than 32 VWs and no more than 128 DNs per VW.
  2. DR is not supported by OBS tables that have decoupled storage and compute. Only full backup and restoration are available.

Behavior changes

  1. The keyword status is added. Avoid using status as a database object name. If using status as a column alias without AS causes a service error, add AS to the service statement. Example:

     SELECT c1, min(c2) status, c3 FROM t1;      -- Error SQL
     ERROR: syntax error at or near "status"

     SELECT c1, min(c2) AS status, c3 FROM t1;   -- Add AS to avoid the problem

  2. VACUUM FULL, ANALYZE, and CLUSTER are only supported for individual tables, not the entire database. Even though there are no syntax errors, the commands will not be executed.
  3. OBS tables with decoupled storage and compute do not support delta tables. If enable_delta is set to on, no error is reported, but delta tables do not take effect. If a delta table is required, use an hstore_opt table instead.
  4. By default, NUMA core binding is enabled and can be turned off dynamically using the enable_numa_bind parameter.
  5. Upgrading from version 8.3.0 Turbo to version 9.1.0 changes the numeric(38) data type in Turbo tables to numeric(39), without affecting the display width. Rolling back to the previous version does not reverse this change.
  6. Due to the decoupling of storage and compute, the EVS storage space in DWS 3.0 is half that of DWS 2.0 by default. For example, purchasing 1 TB of EVS storage provides 500 GB in DWS 3.0 for active/standby mode, compared to 1 TB in DWS 2.0. When migrating data from DWS 2.0 to DWS 3.0, the EVS storage space required in DWS 3.0 is twice that of DWS 2.0.