8.x Versions
2024-04
**For O compatibility, aggregation-related syntax, table updates based on views and subqueries, and comparison operators containing spaces are supported.**

**For O compatibility, system functions and system views are supported.**
Building on the existing O compatibility, additional system functions and system views are supported, including:
**For O compatibility, encoding exceptions and hybrid encoding of special characters are supported.**

**For O compatibility, stored procedures support synonyms, subtypes, dynamic anonymous blocks, and triggers, enhancing commercial capabilities.**
For O compatibility, the following capabilities are added:

**For O compatibility, cross-type integer comparison, bpchar fuzzy matching, and optimized system function matching policies are supported.**
In O-compatible mode:
**For O compatibility, the XMLGEN, STATS, and DESCRIBE advanced packages are supported.**
In O-compatible mode, some APIs of the DBMS_XMLGEN, DBMS_STATS, and DBMS_DESCRIBE advanced packages are supported.

**For M compatibility, commercial requirements such as data types and syntax functions are supported.**
**For M compatibility, a new framework and protocol are supported for commercial use.**
The new M-compatible framework is designed for full compatibility with MySQL syntax, avoiding the syntax isolation and forward-compatibility problems (such as syntax and keyword conflicts) of the old framework. Function and operator behavior is the same as in MySQL databases, and the MySQL protocol is supported.

**For M compatibility, existing syntax is adapted to the new framework and supported for commercial use.**
The new M-compatible framework uses a hook mechanism to implement compatibility functions in an independent extension, isolating them from the GaussDB main process and avoiding forward-compatibility issues caused by intrusive modifications. This feature migrates the existing 107 SQL commands to the new framework.
**The JDBC driver supports streaming reads.**
The GaussDB JDBC driver supports streaming reads. In streaming read mode, the driver does not run out of memory (OOM) when fetching large result sets.
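As a rough sketch of how streaming reads are typically enabled in PostgreSQL-style JDBC drivers (the connection URL, credentials, table name, and fetch size below are illustrative assumptions, not GaussDB-specific API guarantees):

```java
import java.sql.*;

public class StreamingReadSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials: replace with real connection details.
        String url = "jdbc:postgresql://host:8000/postgres";
        try (Connection conn = DriverManager.getConnection(url, "user", "pwd")) {
            // Typical prerequisites for cursor/streaming mode in
            // PostgreSQL-compatible drivers: autocommit off plus a fetch size,
            // so rows are pulled in batches instead of buffered all at once.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(1000); // fetch in batches of 1000 rows
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process each row without holding the full result set
                    }
                }
            }
        }
    }
}
```

With a bounded fetch size, only one batch of rows is held in driver memory at a time, which is what prevents OOM on very large result sets.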
**The JDBC driver supports JDK 1.7 and enhanced O&M capabilities.**

**With default configuration parameters, commercial performance is no lower than 1 million tpmC.**
The performance of the default GaussDB configuration parameters is optimized: measured with the standard TPC-C benchmark, it reaches no less than 1 million tpmC. The ability to locate performance issues is also improved.
**Based on ADIO, performance is improved by 20% in typical large-capacity scenarios.**
In large-capacity scenarios, AIO-DIO and doublewrite removal are used to fully utilize I/O resources, improving database performance by more than 20%. Online switching from BIO mode to ADIO mode is also supported.

**The performance of highly concurrent write transactions in centralized mode is improved by 50%.**

**Performance in typical batch processing scenarios based on stored procedures is improved by 15%.**
Stored procedure overhead is reduced, and SQLBYPASS is supported.
**Concurrent cursor queries are supported, improving performance by more than 30% in typical scenarios.**
Cursor queries can run concurrently, improving cursor efficiency and the parallel performance of INSERT ... SELECT on Ustore.

**Based on window functions, performance in typical page-turning scenarios is improved six-fold.**
When a subquery's projection list contains a window function and the parent query filters on that window function's result, the outer filter condition can be pushed down into the subquery.
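A minimal sketch of the page-turning pattern this pushdown targets (table and column names are illustrative):

```sql
-- The outer filter on rn can be pushed down into the subquery,
-- so only the requested page needs to be materialized rather than
-- numbering and returning every row.
SELECT *
FROM (
    SELECT o.*,
           ROW_NUMBER() OVER (ORDER BY o.created_at DESC) AS rn
    FROM orders o
) t
WHERE t.rn BETWEEN 21 AND 40;  -- page 2, at 20 rows per page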
**With Codegen in commercial use, typical query performance improves by 20% for expression-heavy TPC-H computation.**
The commercial capability of Codegen is improved. Codegen is enabled by default to address the computation performance of complex query expressions.

**Parallel scans of predicate indexes are supported. In typical scenarios, performance is 10% higher than PG16.**
Parallel index scans with predicates (IndexScan and IndexOnlyScan) are supported to improve performance in typical scenarios.
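As a rough illustration of the kind of query that benefits (the table, index, and the `query_dop` parallelism parameter below are assumptions for the sketch, not confirmed syntax for this release):

```sql
CREATE INDEX idx_orders_amount ON orders (amount);

-- Allow a degree of parallelism for this session (parameter name assumed).
SET query_dop = 4;

-- A selective range predicate that a parallel IndexScan or IndexOnlyScan
-- can serve, with index ranges divided among parallel workers.
EXPLAIN SELECT count(*) FROM orders WHERE amount > 1000;
```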
**Local partitioned indexes can be created offline with parallelism across Astore partitions.**
Inter-partition parallelism is supported: during local partitioned index creation, steps such as scanning, sorting, and B-tree insertion run in parallel across partitions. When partition data is evenly distributed, overall performance is better than the existing intra-partition parallel creation approach.

**SPM supports restoration of complex SQL statements.**
Based on the plan management capability of SPM, the following enhancements are made:
**DR switchover stability reaches 99% in typical scenarios, ensuring service recovery within 5 minutes.**
The internal implementation and performance are optimized for typical DR scenarios, effectively improving DR switchover performance and stability.

**The arterial detection model is put into commercial use for the first time and supports slow disk detection.**
The arterial detection model identifies arterial subhealth problems and provides corresponding measures to improve database high availability.
**Client service statements can be terminated based on socketTimeout.**
When a client connection is dropped due to a timeout, the GaussDB server detects the disconnection promptly and terminates the running statements associated with that connection. This prevents session resources from accumulating and avoids service loss caused by client-side retries after a socket timeout.
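For reference, socketTimeout is typically set as a JDBC connection URL parameter (the host, port, and database name below are placeholders, and the URL prefix may differ by driver version):

```
jdbc:postgresql://host:8000/postgres?socketTimeout=30
```

If no response arrives within 30 seconds, the driver closes the socket on its side; with this feature, the server then detects the broken connection and cancels the orphaned statement instead of letting it keep running.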
**Automatic repair of physical bad blocks: pages on the standby node can be repaired from the primary node in seconds.**

**PITR is modularly decoupled, and fault locating and demarcation in key scenarios are improved.**

**Automatic list/range partitioning is supported for commercial use.**
**Row-store compression supports page-level LMT (last modification time).**
After advanced compression is enabled and an ILM policy is specified for a table, background scheduling periodically scans all rows. Once data is frozen, the current timestamp is recorded as the last modification time of the frozen tuples for distinguishing hot and cold data; however, this timestamp differs from the tuples' actual last modification time. To represent the LMT accurately, the timestamp corresponding to the LSN of the page containing the tuples is used as the tuples' LMT and serves as the basis for deciding whether tuples are hot or cold.

**Based on stored procedures, global compilation memory usage is reduced by 30% in typical high-concurrency scenarios.**
Under a large number of concurrent requests, stored procedures occupy a large amount of memory, so improper memory usage is optimized, mainly structure arrays whose size depends on the number of parameters and memory that can be shared (chiefly the type descriptions of variables in stored procedures). This reduces memory usage and improves concurrent scalability.

**In typical scenarios with 4 vCPUs and 16 GB of memory, the CPU noise floor of the CM component decreases by 2.75% and memory usage decreases by 46%.**
CPU and memory usage of the CM component are optimized for small-scale deployments in typical 4 vCPU/16 GB scenarios.
**Ustore supports efficient storage of flexible fields.**
Enhanced TOAST is a technology for processing oversized fields. It reduces redundant information in TOAST pointers so that a single table can hold more oversized columns, and it optimizes the mapping between the main table and out-of-line storage tables: pg_toast_index is no longer needed to store the relationship between main table data and out-of-line data, reducing storage space. Enhanced TOAST also makes split data self-linking, eliminating the dependency on the original chunk ID allocation process and greatly improving write performance.

**Ustore supports large-scale commercial use of TOAST.**

**TDE supports index encryption, and RLS supports expression indexes.**
**Sensitive data discovery is put into commercial use for the first time, enhancing privacy protection and providing high security capabilities.**
Sensitive data discovery is implemented through function calls. By calling different functions, you can specify the scan object and the sensitive data classifier to obtain sensitive data of different levels for that object.

**Tamper-proof Ustore is put into commercial use for the first time.**
Ustore can use the tamper-proof ledger database function.

**ABO supports feedback and multi-table cardinality estimation, improving performance five-fold in typical slow query scenarios. Cost adaptation is supported, doubling performance in scenarios where operator selection is inaccurate.**
Adaptive cost estimation provides cost estimation based on the usual mixed model (UMM) and a cost parameter model. Load monitoring tracks model accuracy, enables fast and efficient load management and incremental model updates, and ensures estimation accuracy. Real-time, efficient predicate queries help identify the optimal cardinality estimation policy. This feature addresses distorted cost estimates and suboptimal plans when data and the execution environment change in production.
**An exact line number is reported on compilation errors.**
The line number calculation logic is adjusted to fix cases where the function header's line numbering was separated from the function body's and line numbers were computed incorrectly. Error line numbers can now be obtained accurately.

**Hot patches can be installed for advanced packages.**
This feature provides the capability to install hot patches for advanced packages.

**Built-in flame graphs support quick performance analysis and fault locating.**
**The time to locate underlying storage exceptions is shortened from weeks to days, addressing lost dirty pages.**
Verification and DFX capabilities are added to detect lost dirty pages, improving fault locating and demarcation efficiency when the underlying storage returns an incorrect version. Locating time is shortened from weeks to days.

**In typical service scenarios, reads on the standby node succeed 100% of the time, and the time to locate standby-node read problems is shortened from weeks to days.**

**The troubleshooting and demarcation time for typical communication module problems is shortened from weeks or days to hours or minutes.**
**Printing of memory-overcommitted sessions is supported.**
A threshold is provided. When the memory usage of a single session or SQL statement exceeds it, detailed memory information (DFX information indicating that the statement's memory usage exceeded the threshold) is printed.

**The DFX performance view supports refined db_time and wait event duration statistics.**
The wait event types of GaussDB modules are refined to provide more comprehensive fault locating methods.

**The storage space used by WDR snapshots is reduced by 40% in typical scenarios.**
WDR snapshot storage usage and snapshot space control methods are optimized.
**Astore supports commercial use of hash bucket-based online scale-out.**
Online scale-out based on hash bucket tables (Astore) is supported. Segment-page database-level data sharding and dynamic log multi-stream technologies implement an online cluster scale-out solution with physical file migration. Large-scale commercial use is supported.

**For segment-page storage, Astore supports commercial use of hash bucket-based online scale-out.**
**Vector databases support efficient retrieval among hundreds of millions of vectors.**
The vector database integrates optimal disk-based vector retrieval algorithms into GaussDB, enabling users to create and import vector data, build indexes, generate query plans, and retrieve vectors efficiently with native SQL. Vector types (FloatVector and BoolVector) are available as native types.

GsIVFFLAT and GsDiskANN indexes can be created to accelerate top-k approximate nearest neighbor (ANN) queries. The GsIVFFLAT index uses a clustering algorithm to divide the high-dimensional vector space into buckets by distance; during retrieval, a candidate set is selected based on the distance between the query vector and each bucket's center, significantly reducing retrieval cost compared with a full scan. The GsDiskANN index finds nearest neighbors for all vector points to build a sparse graph structure and uses a relaxation coefficient to generate "short-circuit" edges that accelerate queries.

GaussDB optimizes the data deletion logic to support real-time deletion and sustain performance without accuracy loss during long-term operation. Vector indexes support high-concurrency retrieval and modification, both the Astore and Ustore storage engines, and MVCC. New log types give the vector database complete HA capabilities, such as primary/standby synchronization in a centralized deployment, parallel replay, and ultimate RTO.
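A hedged sketch of the SQL workflow described above, using the type and index names from this release (table and column names, the vector dimension, and the `<->` distance operator are illustrative assumptions; the exact DDL may differ):

```sql
-- Create a table with a native vector column (dimension assumed).
CREATE TABLE doc_embedding (
    id  int,
    emb floatvector(768)
);

-- Build a GsIVFFLAT index to accelerate top-k ANN queries
-- (GsDiskANN would be the graph-based alternative).
CREATE INDEX ON doc_embedding USING gsivfflat (emb);

-- Top-10 nearest neighbors to a query vector, ranked by distance.
SELECT id
FROM doc_embedding
ORDER BY emb <-> '[0.12, 0.08, 0.33]'
LIMIT 10;
```

The index lets the planner probe only the buckets nearest the query vector instead of scanning every stored vector, which is where the claimed efficiency at hundreds of millions of vectors comes from.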
**openGauss and PG keyword rectification.**
Medium- and low-risk items related to openGauss and PG keywords have been rectified, and the related keyword descriptions have been deleted.

**Resolved issues.**
The following issues are resolved: