New Features and Resolved Issues in 8.2.1.x
8.2.1.225
| Category | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New feature | None | - | - | - |
| Resolved issue | Replacement of invalid GDS characters fails. | When invalid GDS characters are replaced with the replacement character (�), the replacement changes the string length, but subsequent processing still uses the original length. As a result, the tail of the string is truncated, some characters cannot be replaced correctly, and an exception occurs. (See the sketch after this table.) | Versions earlier than 8.2.1.225 | Upgrade to 8.2.1.225. |
| Resolved issue | Gather performance occasionally deteriorates during concurrent pressure tests. | If a statement contains the stream operator, multiple stream threads are spawned on DNs. The topConsumer thread, which merges substream thread data and sends it to the CN, cannot clear the stream thread group until all substream threads have exited, so gather stalls while it waits. | Versions earlier than 8.2.1.225 | Upgrade to 8.2.1.225. |
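
For context, the GDS failure surfaced in error-tolerant imports. Below is a minimal sketch of that load pattern, assuming a GDS foreign table with the compatible_illegal_chars option; the server address, table, and column names are placeholders, not taken from the original report.

```sql
-- Hypothetical GDS foreign table for an error-tolerant text import.
CREATE FOREIGN TABLE ft_orders_load (
    order_id bigint,
    note     text
)
SERVER gsmpp_server
OPTIONS (
    location  'gsfs://192.168.0.90:5000/orders*',  -- placeholder GDS endpoint
    format    'text',
    delimiter ',',
    encoding  'utf8',
    compatible_illegal_chars 'true'  -- replace invalid characters instead of aborting
);

-- The defect appeared during loads like this: replacing an invalid byte with
-- the multi-byte replacement character changed the string length, but later
-- processing still used the original length, truncating the tail of the field.
INSERT INTO orders SELECT * FROM ft_orders_load;
```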
8.2.1.223
| Category | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New feature | None | - | - | - |
| Resolved issue | Cluster hang detection triggers a switchover. | Before the signal reconstruction, the unreliable SIGUSR2 signal was used for IPC. After the reconstruction, reliable signals 34 and 35 are used, but sending too many of them increases the likelihood of timer creation failures. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| Resolved issue | A core dump (GsCgroupIsClass) occurs when pgxc_cgroup_reload_conf messages are sent concurrently. | The pointer is accessed without a lock. When the reload function modifies the pointer concurrently, the access dereferences a wild pointer and the process dumps core. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| Resolved issue | The table size reported by the gs_table_distribution function differs significantly from the actual size. | When data in the pg_refilenode_size system catalog is read and calculated in batches, the current batch's table size is accumulated repeatedly. (See the query after this table.) | 8.2.1.220 | Upgrade to 8.2.1.223. |
| Resolved issue | Executing an SQL statement may report the error "Could not open file 'pg_clog/000000000075'". | After VACUUM FULL on a column-store table, clog files may be reclaimed prematurely, making them inaccessible during ANALYZE after a primary/standby switchover. (See the sequence after this table.) | 8.2.1.119 | Upgrade to 8.2.1.223. |
| Resolved issue | freememory shows a large negative value because of an uninitialized temporary variable. | A temporary variable was declared without being assigned a value, producing unexpected parameter values and an excessively negative memory reading; test cases failed when the network adapter was faulty. | 8.2.1.220 | Upgrade to 8.2.1.223. |
| Resolved issue | After VACUUM FULL is configured for intelligent O&M, the actual execution time can exceed the configured window. | When the scheduler kills the VACUUM FULL task, a new task is inserted, preventing the kill task from completing. | 8.1.3.x | Upgrade to 8.2.1.223. |
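
After upgrading, the gs_table_distribution fix can be sanity-checked by comparing the function's output with a direct size query. A hedged sketch, assuming the two-argument form of the function; the table name is a placeholder.

```sql
-- Per-DN size reported by the function (output columns may vary by version):
SELECT * FROM gs_table_distribution('public', 'orders');

-- Cross-check against the table's physical size:
SELECT pg_size_pretty(pg_table_size('public.orders'));
```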
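
The clog error reproduced only under a specific sequence of operations. The sketch below restates that sequence from the cause column; the table name is a placeholder.

```sql
-- 1. VACUUM FULL on a column-store table could let clog files be reclaimed
--    while old transaction status information was still needed:
VACUUM FULL col_orders;

-- 2. A primary/standby switchover occurs here.

-- 3. ANALYZE then tries to read the reclaimed clog and fails with
--    "Could not open file 'pg_clog/000000000075'":
ANALYZE col_orders;
```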
8.2.1.220
| Category | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New feature | None | - | - | - |
| Resolved issue | SQL execution is unstable and slow, with pgxc_thread_wait_status showing HashJoin - nestloop for extended periods. | Each partition group contains about 10,000 rows, and variations in the data cause the nestloop to run for a long time. | 8.1.3.300 | Upgrade to 8.2.1.220. |
| Resolved issue | A large number of database objects leads to slow queries and high memory usage. | This is mainly caused by numerous column-store tables with the internal_mask option, which makes permission verification inefficient. | Versions earlier than 8.2.1.119 | Upgrade to 8.2.1.220. |
| Resolved issue | Excessive expressions during LLVM compilation result in high CPU usage. | With LLVM enabled and more than about 1,000 expressions, execution took several hours; disabling LLVM reduced it to just over 10 minutes. | 8.1.3.320 | Upgrade to 8.2.1.220. |
| Resolved issue | A cursor fetching 2,000 records at a time exceeds its memory estimate by 24 MB per fetch; once the result set reaches 20,000,000 records, the query fails. | In PBE scenarios, the previously generated plan is reused and the memory estimate grows by a fixed amount on each fetch. Over 20,000,000 rows that is 10,000 fetches, about 234 GB of estimated memory, so usage is grossly overestimated and the statement queues on the CCN. (See the sketch after this table.) | 8.1.3.323 | Upgrade to 8.2.1.220. |
| Resolved issue | Memory leakage during JSON-type queries causes high memory usage. | Memory allocated in the jsonb out function is not released. | 8.1.3.x | Upgrade to 8.2.1.220. |
| Resolved issue | Executing SELECT * FROM a WITH clause in customer SQL statements causes a CN core dump. | ProjectionPushdown updates the rte but fails to update the var based on the new rte, leading to a core dump during quals processing. | 8.1.3.323 | Upgrade to 8.2.1.220. |
| Resolved issue | An overflow occurs when the number of WHERE conditions in a DELETE statement exceeds the upper limit. | The number of WHERE conditions exceeds the int16 limit of 32,767, causing an overflow and a core dump. (See the sketch after this table.) | 8.1.2.x | Upgrade to 8.2.1.220. |
| Resolved issue | During scale-out, the redistribution process restarts and hangs for over an hour while generating a table list. | To generate the table list, the system catalog is queried on the CN and each record is inserted into the distributed table pgxc_redistb with a separate INSERT INTO ... VALUES statement, which is time-consuming when there are many tables. | 8.1.3.110 | Upgrade to 8.2.1.220. |
| Resolved issue | CN memory leakage occurs during transaction rollback caused by primary key conflicts. | - | 8.2.0.103 | Upgrade to 8.2.1.220. |
| Resolved issue | Actual memory usage exceeds estimates in COUNT DISTINCT and UNION ALL scenarios. | In multi-branch serial execution, only the memory used when the lower-layer operator returns data is counted, not the memory used during its execution, so memory usage is underestimated. | 8.1.3.321 | Upgrade to 8.2.1.220. |
| Resolved issue | After a cluster restart, only the first session can use SQL debugging; subsequent connections fail, and breakpoints do not work when debugging SQL statements in Data Studio. | The scheduling entry variable is released when the database connection closes, leaving a null pointer that breaks the debugging logic. | - | Upgrade to 8.2.1.220. |
| Resolved issue | The WITH RECURSIVE statement runs indefinitely in Arm environments. | Abnormal thread information synchronization in the Arm environment can leave variables not updated synchronously. | 8.1.3.323 | Upgrade to 8.2.1.220. |
| Resolved issue | Executing INSERT OVERWRITE for a specific partition overwrites the entire table. | In PBE logic, INSERT OVERWRITE does not copy partition information, causing the FILENODE exchange to be applied to the entire table. | Versions earlier than 8.2.1.220 | Upgrade to 8.2.1.220. |
| Resolved issue | The result set of a subquery containing WindowAgg is abnormal. | WindowAgg is not considered when the bloom filter is generated. If the join column is not a WindowAgg grouping column, grouping data is reduced, affecting the window function's grouping result. | 8.1.3.x | Upgrade to 8.2.1.220. |
| Resolved issue | Memory-insufficiency errors occur, and the view shows that the SQL statements with high memory usage are VACUUM FULL operations. | Performing VACUUM FULL on every partition of a partitioned table prevents memory from being released, so memory usage keeps growing until an error is triggered. | 8.1.3.x | Upgrade to 8.2.1.220. |
| Resolved issue | Restarting the logical cluster times out. | The CM uses 10 IP addresses by default; the number needs to adapt dynamically. | 8.2.1.200 | Upgrade to 8.2.1.220. |
| Resolved issue | After a version update, numerous "Wait poll time out" errors occur. | The LibcommCheckWaitPoll function behaves unexpectedly when passed -1. | 8.2.1.200 | Upgrade to 8.2.1.220. |
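
To see how the cursor overestimate compounded, the sketch below uses a SQL-level cursor to stand in for the JDBC fetch-size behavior described in the table; the table name is a placeholder.

```sql
BEGIN;
DECLARE cur CURSOR FOR SELECT * FROM big_tab;

-- Each 2,000-row fetch reused the cached plan and added a fixed ~24 MB to the
-- statement's memory estimate. Over 20,000,000 rows that is 10,000 fetches,
-- i.e. about 234 GB of estimated memory, so the CCN queued or rejected the query.
FETCH 2000 FROM cur;
FETCH 2000 FROM cur;
-- ... repeated until the result set is exhausted ...

CLOSE cur;
COMMIT;
```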
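
The DELETE overflow was hit by generated statements with an extreme number of predicates. Below is an illustrative shape and one possible batched rewrite; the table and column names are hypothetical, and the rewrite is a general workaround rather than official guidance.

```sql
-- Statements of this shape, with more than 32,767 OR branches, overflowed the
-- int16 counter and dumped core on affected versions:
DELETE FROM t
WHERE (k = 1     AND v = 'a')
   OR (k = 2     AND v = 'b')
   -- ... tens of thousands of further branches ...
   OR (k = 40000 AND v = 'z');

-- Staging the keys and joining keeps the predicate count constant:
CREATE TEMP TABLE del_keys (k int, v text);
-- (populate del_keys in bulk, e.g. via COPY)
DELETE FROM t USING del_keys d WHERE t.k = d.k AND t.v = d.v;
```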
8.2.1.119
| Category | Feature or Resolved Issue | Cause | Version | Handling Method |
| --- | --- | --- | --- | --- |
| New feature | The last_value window function now supports IGNORE NULLS and is compatible with Redshift. (See the example after this table.) | - | - | - |
| Resolved issue | An error occurs when the try_cast function is used on column-store tables. | The try_cast function is incompatible with the vectorized executor, causing errors during execution. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | Insert operations are slow after a cluster restart. | After a restart, indexes must scan the full data during insert operations, which hurts performance. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | CCN count abnormalities do not trigger calibration, leading to queuing issues. | A code defect prevents the calibration mechanism from activating when CCN counts are abnormal. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | A primary key conflict error and a CN memory leak occur when data is inserted over JDBC using the PBE protocol. | The CN lightweight process does not release the lightweight object after the transaction, so memory accumulates. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | An incorrect plan is generated when the enable_stream_ctescan GUC hint is set. | The rollback to a non-ShareScan plan is incomplete when CTE memory usage estimates exceed the threshold, resulting in execution failure. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | Metadata restoration fails when the backup size exceeds 64 MB on OBS. | A code defect discards the last buffer segment during segmented download, corrupting the metadata. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | Memory usage is high during ANALYZE sampling of hstore delta tables. | Combining I records from the delta table consumes excessive memory; toast data and delta-data deserialization space must be released promptly. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | Volatile functions cannot be pushed down in single-reference CTE queries. | Version 8.2.1 added constraints against pushing down volatile functions in CTEs; they are now removed for single-reference scenarios so pushdown can occur. (See the example after this table.) | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
| Resolved issue | Temporary files occupy excessive space on XFS, putting the cluster into a read-only state. | On XFS, each temporary file spilled to disk occupies 16 MB; too many spilled files drive the cluster read-only, so the disk space used by these files had to be reduced. | 8.2.1.100 and earlier versions | Upgrade to 8.2.1.119. |
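
For the new last_value capability, the example below shows the Redshift-compatible IGNORE NULLS form carrying the most recent non-null value forward; the table and column names are illustrative.

```sql
SELECT ts,
       reading,
       last_value(reading IGNORE NULLS) OVER (
           ORDER BY ts
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS last_known_reading
FROM sensor_log;
```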
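
The CTE pushdown fix applies to queries of the following shape: the CTE is referenced exactly once and contains a volatile function (random() here), which versions before the fix refused to push down. Names are illustrative.

```sql
WITH sampled AS (
    SELECT id, random() AS r   -- random() is volatile
    FROM src
)
SELECT id
FROM sampled                   -- single reference to the CTE
WHERE r > 0.5;
```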