Updated on 2024-12-18 GMT+08:00

New Features and Resolved Issues in 8.2.1.x

8.2.1.230

Table 1 New features/Resolved issues in version 8.2.1.230

| Category | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New features | Fine-grained backup and restoration support online DDL operations, so tables can be modified while a backup is in progress. | - | - | - |
| | Fine-grained table-level restoration can restore tables to a heterogeneous cluster; the topologies of the backup cluster and the target cluster do not need to match. | - | - | - |
| | Fine-grained backup and restoration support cross-version restoration. | - | - | - |
| | Cluster-level and schema-level physical fine-grained backup can back up permissions and comments. | - | - | - |
| | The redistribution priority can be adjusted dynamically during the redistribution phase of online scale-out. | - | - | - |
| Resolved issues | Catchup conflicts with the DDL service lock and can last a long time. | If DDL operations are performed in a stored procedure, the catchup operation may not end before the transaction is committed, resulting in a lock timeout error. | 8.0.x | Upgrade to 8.2.1.230. |
| | The WITH RECURSIVE statement runs indefinitely. | In the Arm environment, if thread information synchronization is disrupted, variables may not be updated at the same time. | 8.1.3.322 | Upgrade to 8.2.1.230. |
| | The memory usage of gs_wlm_readjust_relfilenode_size_table is high. | The pg_relfilenode_size table is loaded into memory in full, occupying too much memory. | 8.1.3.323 | Upgrade to 8.2.1.230. |
| | The memory usage of TopMemoryContext rises as the cluster runs for an extended period. | After a stream thread returns to the thread pool, its memory is released with a delay, so TopMemoryContext memory usage gradually increases. | 8.2.1.22 | Upgrade to 8.2.1.230. |
| | SQL statement execution stops unexpectedly with the error message "canceling statement due to coordinator request". | When a statement with a stream operator fails, a cancel message is sent to a substream thread that has already returned to the stream thread pool, and the next query reuses that thread. Because the query ID is not strictly verified, the residual signal from the previous statement is acted on, terminating the statement abnormally. | 8.1.3.110 | Upgrade to 8.2.1.230. |
| | During the schema space query, the value of usedspace exceeds that of permspace. | When checking the schema space limit, the system compares only the space already used with the limit, so the actual usage can surpass the limit. | Versions earlier than 8.2.1.230 | Upgrade to 8.2.1.230. |
| | The service fails to be executed when the max_files_per_node parameter is set to -1. | When a stream thread is created during SQL execution, the system reads the default value of max_files_per_node (50,000) rather than the configured value and reports that the number of handles exceeds the limit. Setting the GUC to -1 therefore does not take effect. | 8.1.3.321 | Upgrade to 8.2.1.230. |
| | The error message "Stream plan check failed. Execution datanodes list of stream node mismatch in parent node" is displayed during SQL statement execution. | While a plan is generated, a change to one plan node affects all nodes, because the lower-layer plan node relies on the upper-layer plan node. | Versions earlier than 8.2.1.230 | Upgrade to 8.2.1.230. |
| | Service statements cannot be terminated when xc_maintenance_mode is disabled during redistribution after scale-out. | During the scale-out redistribution phase, functions such as pg_cancel_query and pg_cancel_backend can be used only when xc_maintenance_mode is enabled, so user service statements cannot be terminated during redistribution. | Versions earlier than 8.2.1.230 | Upgrade to 8.2.1.230. |
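As a sketch of the last fix above: once a cluster runs 8.2.1.230, a long-running service statement can be cancelled during redistribution without first enabling xc_maintenance_mode. The session filter below is illustrative (user name and time threshold are assumptions), and it relies on the standard pg_stat_activity columns:

```sql
-- Cancel active service statements that have run longer than 10 minutes.
-- Before 8.2.1.230 this required SET xc_maintenance_mode = on during
-- scale-out redistribution; after the fix it can be issued directly.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE usename = 'service_user'          -- illustrative user name
  AND state = 'active'
  AND query_start < now() - interval '10 minutes';
```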

8.2.1.225

Table 2 New features/Resolved issues in version 8.2.1.225

| Category | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New features | None | - | - | - |
| Resolved issues | Replacement of invalid GDS characters fails. | When invalid GDS characters are replaced with the special character (�), the replacement changes the string length, but the original length is still used in subsequent processing. As a result, some characters are truncated and cannot be replaced correctly. | Versions earlier than 8.2.1.225 | Upgrade to 8.2.1.225. |
| | Gather performance occasionally deteriorates during concurrent pressure tests. | If a statement includes the stream operator, multiple stream threads are generated on DNs. The topConsumer thread, which integrates substream thread data and sends it to the CN, can clear the stream thread group only after all substream threads exit. | Versions earlier than 8.2.1.225 | Upgrade to 8.2.1.225. |
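The invalid-character replacement fixed above is exercised through GDS fault tolerance on a foreign table. A minimal sketch, assuming an illustrative GDS endpoint and table definition (the server name gsmpp_server and the compatible_illegal_chars option follow the usual GDS foreign-table syntax):

```sql
-- Foreign table reading text files through GDS with invalid-character
-- tolerance enabled; bad characters are replaced rather than raising errors.
CREATE FOREIGN TABLE ext_orders (
    order_id   integer,
    order_note text
)
SERVER gsmpp_server
OPTIONS (
    location 'gsfs://192.168.0.10:5000/orders*',  -- illustrative GDS location
    format 'text',
    delimiter ',',
    encoding 'utf8',
    compatible_illegal_chars 'true'
);
```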

8.2.1.223

Table 3 New features/Resolved issues in version 8.2.1.223

| Type | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New features | None | - | - | - |
| Resolved issues | Cluster hang detection triggers a switchover. | Before signal reconstruction, the unreliable SIGUSR2 was used for IPC. After reconstruction, reliable signals 34 and 35 are used, but sending too many signals increases the likelihood of timer creation failures. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| | A core dump (GsCgroupIsClass) occurs when pgxc_cgroup_reload_conf messages are sent concurrently. | The pointer is accessed without a lock. When the reload function modifies the pointer, a wild pointer access causes a core dump. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| | The table size reported by the gs_table_distribution function differs significantly from the actual size. | When data in the pg_relfilenode_size system catalog is read and calculated in batches, the table size of the current batch is accumulated repeatedly. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| | Executing an SQL statement may result in the error "Could not open file 'pg_clog/000000000075'". | After VACUUM FULL on a column-store table, clogs may be reclaimed prematurely, making them inaccessible during ANALYZE after an active/standby switchover. | 8.2.1.119 | Upgrade to 8.2.1.223. |
| | freememory shows a large negative value. | A temporary variable was declared without being initialized, leading to unexpected parameter values and an excessively negative memory usage reading; this caused test case failures when the network adapter was faulty. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| | After VACUUM FULL is configured for intelligent O&M, the actual execution time can exceed the configured time window. | When the scheduler kills the VACUUM FULL task, a new task is inserted, preventing the kill task from completing. | 8.1.3.x | Upgrade to 8.2.1.223. |

Table 4 New features/Resolved issues in version 8.2.1.220

| Type | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New features | MERGE INTO allows partitions to be specified. | - | - | - |
| | Plan management is supported. | - | - | - |
| | GDS supports the fault tolerance parameter compatible_illegal_chars for exporting foreign tables. | - | - | - |
| | The window function last_value supports the ignore nulls feature. | - | - | - |
| Resolved issues | SQL statement execution is unstable and slow, and pgxc_thread_wait_status shows HashJoin - nestloop for extended periods. | Each partition group has about 10,000 rows, and data variations cause prolonged nestloop execution. | 8.1.3.300 | Upgrade to 8.2.1.220. |
| | A large number of database objects leads to slow performance and high memory usage during queries. | Numerous column-store tables with the internal_mask option make permission verification inefficient. | Versions earlier than 8.2.1.119 | Upgrade to 8.2.1.220. |
| | Excessive expressions during LLVM compilation result in high CPU usage. | With LLVM enabled and more than 1,000 expressions, execution can take several hours; disabling LLVM reduces the execution time to just over 10 minutes. | 8.1.3.320 | Upgrade to 8.2.1.220. |
| | A cursor fetches 2,000 records at a time, and the memory estimate exceeds actual usage by 24 MB per fetch; when the total reaches 20,000,000 records, the query fails. | In PBE scenarios, the previously generated plan is reused and the estimated memory increases by a fixed value on each fetch, which can lead to memory overestimation and CCN queuing. | 8.1.3.323 | Upgrade to 8.2.1.220. |
| | Memory leakage during JSON-type queries causes high memory usage. | Memory is not released in the jsonb out function. | 8.1.3.x | Upgrade to 8.2.1.220. |
| | Executing SELECT * FROM a WITH clause in customer service SQL statements causes a CN core dump. | ProjectionPushdown updates the rte but fails to update the var based on the new rte, leading to a core dump during quals processing. | 8.1.3.323 | Upgrade to 8.2.1.220. |
| | Overflow occurs when the number of WHERE conditions in a DELETE statement exceeds the upper limit. | The number of WHERE conditions exceeds the int16 limit of 32,767, causing an overflow and a core dump. | 8.1.2.x | Upgrade to 8.2.1.220. |
| | During scale-out, the redistribution process restarts, and generating a table list hangs for over an hour. | To generate the table list, the system catalog is queried on the CN and INSERT INTO is executed for each record inserted into the distributed table pgxc_redistb. With numerous tables, the VALUES statements are time-consuming. | 8.1.3.110 | Upgrade to 8.2.1.220. |
| | CN memory leakage occurs during transaction rollback caused by primary key conflicts. | 1. In JDBC, the PBE protocol inserts data through CN lightweight, and primary key conflicts cause errors. 2. A transaction with multiple cross-CN lightweight queries or unnamed statements sent by JDBC saves the global LightProxy object to the portal before execution; if it is not released after the transaction, memory accumulates. 3. Numerous CachedPlanQuery memory contexts appear in the pv_session_memory_detail view of the CN. | 8.2.0.103 | Upgrade to 8.2.1.220. |
| | Actual memory usage exceeds estimates in COUNT DISTINCT and UNION ALL scenarios. | In multi-branch serial execution, only the memory used when the lower-layer operator returns data is considered, not the memory used during its execution, so memory usage is underestimated. | 8.1.3.321 | Upgrade to 8.2.1.220. |
| | After a cluster restart, only the first session can use SQL debugging; subsequent connections fail, and breakpoints do not work when debugging SQL statements in DataStudio. | The scheduling entry variable is released after database disconnection, resulting in a null pointer and failed debugging logic. | 8.1.1.x, 8.1.3.x | Upgrade to 8.2.1.220. |
| | The WITH RECURSIVE statement runs indefinitely in Arm environments. | Abnormal thread information synchronization in the Arm environment can leave variables not updated synchronously. | 8.1.3.323 | Upgrade to 8.2.1.220. |
| | Executing INSERT OVERWRITE for a specific partition overwrites the entire table. | In PBE logic, INSERT OVERWRITE does not copy partition information, causing a FILENODE exchange on the entire table. | Versions earlier than 8.2.1.220 | Upgrade to 8.2.1.220. |
| | The result set of a subquery containing WindowAgg is abnormal. | WindowAgg is not considered when a bloom filter is generated. If the join column is not a WindowAgg grouping column, grouping data is reduced, affecting the window function's grouping result. | 8.1.3.x | Upgrade to 8.2.1.220. |
| | Insufficient-memory errors occur, and the view shows that the SQL statements with high memory usage are VACUUM FULL operations. | Performing VACUUM FULL on every partition of a partitioned table prevents memory from being released, so memory usage keeps growing until an error is triggered. | 8.1.3.x | Upgrade to 8.2.1.220. |
| | Restarting the logical cluster times out. | The CM uses 10 IP addresses by default; the number needs to adapt dynamically. | 8.2.1.200 | Upgrade to 8.2.1.220. |
| | After a version update, numerous "Wait poll time out" errors occur. | The LibcommCheckWaitPoll function behaves unexpectedly when -1 is passed to it. | 8.2.1.200 | Upgrade to 8.2.1.220. |
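The ignore nulls feature added to last_value in this version follows the Redshift form, where the modifier appears inside the function call. A sketch assuming an illustrative sensor_log table; the frame clause is what makes the window end at the current row:

```sql
-- Carry the most recent non-null reading forward across rows.
SELECT ts,
       reading,
       last_value(reading IGNORE NULLS) OVER (
           ORDER BY ts
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS last_known_reading
FROM sensor_log;   -- illustrative table: (ts timestamp, reading numeric)
```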

8.2.1.119

Table 5 New features/Resolved issues in version 8.2.1.119

| Type | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New features | The last_value window function supports ignore nulls and is compatible with Redshift. | - | - | - |
| Resolved issues | An error occurs when the try_cast function is used on column-store tables. | The try_cast function is incompatible with the vectorized executor, causing errors during execution. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | Insert operations are slow after a cluster restart. | After a restart, indexes must scan the full data during insert operations, impacting performance. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | CCN count abnormalities do not trigger calibration, leading to queuing issues. | A code processing bug prevents the calibration mechanism from activating when CCN counts are abnormal. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | A primary key conflict error and CN memory leak occur when data is inserted via JDBC using the PBE protocol. | The CN lightweight process does not release the lightweight object after the transaction, leading to memory accumulation. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | An incorrect plan is generated when the enable_stream_ctescan GUC hint is set. | The rollback to a non-ShareScan plan is incomplete when CTE memory usage estimates exceed thresholds, resulting in execution failure. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | Metadata restoration fails when the backup size exceeds 64 MB on OBS. | A code defect discards the last buffer segment during segmented download, corrupting the metadata. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | Memory usage is high during ANALYZE sampling of hstore delta tables. | Combining I records from the delta table consumes excessive memory; toast data and delta data deserialization space must be released promptly. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | Volatile functions cannot be pushed down in single-reference CTE queries. | Version 8.2.1 added constraints against pushing down volatile functions in CTEs; these should be lifted for single-reference scenarios to enable pushdown. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| | Temporary files occupy excessive space in the XFS file system, causing the cluster to become read-only. | Each temporary file spilled to disk occupies 16 MB in XFS; too many spilled files make the cluster read-only, necessitating a reduction in the disk space used by these files. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
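The enable_stream_ctescan setting involved in the plan-generation fix above can be toggled per session with standard GUC syntax; a statement-level hint form, if available, depends on the cluster version. A minimal session sketch:

```sql
-- Allow ShareScan-based CTE plans for the current session.
SET enable_stream_ctescan = on;
-- ... run the affected CTE query and inspect its plan with EXPLAIN ...
-- Fall back to non-ShareScan CTE plans.
SET enable_stream_ctescan = off;
```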