Updated on 2025-04-30 GMT+08:00

DWS-Connector Version Description

The latest release of DWS-Connector includes three main components: dws-client, dws-connector-flink, and dws-flink-ingestion. The version number tracks dws-client, the component responsible for importing data into the database.

Two version lines are available: 1.x and 2.x. The 1.x line focuses on stabilizing existing functions without introducing new ones, while the 2.x line is a significant evolution with redesigned features. Because of this complete design overhaul, some 2.x functions are not compatible with 1.x.

As a result, the 1.x version will receive maintenance and bug fixes in the short term. Once the 2.x version is stable and widely adopted, further development of the 1.x version will cease.

Table 1 Change History

Version

Change Description

Remarks

1.0

This is the first official release.

dws-connector-flink is released only for Scala 2.11 with Flink 1.12.

1.0.2

Optimized the exception retry logic of dws-client. Instead of retrying on all exceptions, the client now retries only on five types of exceptions: connection exceptions, database read-only, timeouts, too many connections, and lock exceptions.

Compatible versions of dws-connector-flink:

Scala 2.11: Flink 1.12 and 1.13

Scala 2.12: Flink 1.12, 1.13, and 1.15
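The five retryable categories above can be sketched as a SQLSTATE-based classifier. This is an illustrative Python sketch, not the dws-client implementation; the SQLSTATE values follow PostgreSQL conventions (GaussDB(DWS) is PostgreSQL-compatible), and the exact codes the client checks may differ.

```python
class DatabaseError(Exception):
    """Minimal stand-in for a driver error that carries a SQLSTATE code."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

# PostgreSQL-style SQLSTATEs for the retryable categories (illustrative).
RETRYABLE_STATES = {
    "25006",  # read_only_sql_transaction (database read-only)
    "57014",  # query_canceled (timeout)
    "53300",  # too_many_connections
    "55P03",  # lock_not_available (lock exception)
}

def is_retryable(sqlstate):
    # Class 08 covers all connection exceptions (08000, 08006, ...).
    return sqlstate.startswith("08") or sqlstate in RETRYABLE_STATES

def write_with_retry(write, record, max_retries=3):
    """Retry a write only for the recoverable categories; rethrow anything else."""
    for attempt in range(max_retries + 1):
        try:
            return write(record)
        except DatabaseError as err:
            if attempt == max_retries or not is_retryable(err.sqlstate):
                raise
```

A unique-constraint violation (SQLSTATE 23505), for example, is not retried, because repeating the statement cannot succeed.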

1.0.3

  1. Resolved known issues and optimized performance.
  2. Supports the update write mode.
  3. Supports unique indexes.
  4. Since update mode is now supported, write operations use the write interface to avoid the ambiguity of the upsert interface in dwsClient. The write interface is recommended for both types of writes.

-
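The distinction drawn in item 4 can be shown with a toy model: a single write() entry point whose behavior is decided by the configured write mode rather than by separate upsert/update methods. Class and parameter names here are hypothetical, not the dwsClient API.

```python
class TinyClient:
    """Toy model: a dict stands in for the target table (key = primary key)."""
    def __init__(self, write_mode="upsert"):
        self.write_mode = write_mode
        self.rows = {}

    def write(self, key, row):
        # One entry point; the configured mode decides the semantics.
        if self.write_mode == "upsert":
            self.rows[key] = row              # insert new or overwrite existing
        elif self.write_mode == "update":
            if key in self.rows:              # only touch rows that already exist
                self.rows[key] = row
        else:
            raise ValueError(f"unknown write mode: {self.write_mode}")
```

In update mode a write for an unknown key is a no-op, whereas in upsert mode it inserts the row; the caller uses the same method either way.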

1.0.4

Increased the SQL execution timeout to avoid prolonged blocking.

-

1.0.5

Fixed an issue where duplicate records were lost when written to a table without a primary key.

-

1.0.6

  1. Optimized the cache saving logic to improve throughput when CPU resources are constrained.
  2. Temporary tables are reused to avoid their frequent creation in COPY MERGE/UPSERT scenarios.
  3. A CSV format option is added for COPY so that complex data containing special characters can still be imported to the database.

-
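The motivation for item 3 can be seen with Python's csv module: CSV framing preserves fields that contain the delimiter, quote characters, or newlines, which a naive delimiter-joined payload would corrupt. This illustrates the format's property only; it is not connector code.

```python
import csv
import io

# A row whose fields contain a comma, quotes, and an embedded newline.
row = ["id-1", 'text with "quotes", and commas', "line1\nline2"]

buf = io.StringIO()
csv.writer(buf).writerow(row)            # CSV-framed payload, as COPY ... CSV expects
payload = buf.getvalue()

parsed = next(csv.reader(io.StringIO(payload)))
assert parsed == row                     # lossless round trip through CSV

naive = ",".join(row)                    # naive delimiter join loses field boundaries
assert len(naive.split(",")) != len(row)
```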

1.0.7

  1. Writes that fail during a database restart are now retried.
  2. An AS mode is added for creating temporary tables, solving the problem that COPY MERGE/UPSERT could not be used on tables with primary keys.
  3. Database fields are case-insensitive by default.
  4. A primary-key printing parameter is added to Flink SQL statements to help locate problems when data is missing.

-
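The "AS mode" in item 2 can be sketched as SQL generation: CREATE TEMPORARY TABLE ... AS SELECT ... WHERE 1 = 0 clones only the target's column layout, not its primary-key constraint, so COPY into the staging table cannot hit a constraint conflict before the merge step. The table and column names (orders, id, val) are illustrative, and the MERGE statement is schematic rather than exact GaussDB(DWS) syntax.

```python
def as_mode_statements(target, key="id", col="val"):
    """Build the three steps of a COPY MERGE via an AS-mode staging table."""
    stage = f"tmp_{target}"
    return [
        # WHERE 1 = 0 clones the column layout but copies no rows and no constraints.
        f"CREATE TEMPORARY TABLE {stage} AS SELECT * FROM {target} WHERE 1 = 0",
        f"COPY {stage} FROM STDIN WITH (FORMAT csv)",
        f"MERGE INTO {target} t USING {stage} s ON (t.{key} = s.{key}) "
        f"WHEN MATCHED THEN UPDATE SET {col} = s.{col} "
        f"WHEN NOT MATCHED THEN INSERT VALUES (s.{key}, s.{col})",
    ]
```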

1.0.8

  1. Fixed an issue where the Flink SQL primary key had to match the case of the corresponding database column.
  2. Added a parameter for setting sink concurrency.

-

1.0.9

Optimized the import of time-type data.

-

1.0.10

  1. Resolved a data-loss issue caused by concurrent delete and insert operations on the client. This could happen when the insert ran before the delete, and the same primary key was deleted and then inserted within the same cache batch.
  2. Resolved occasional data loss when Kafka writes data to GaussDB(DWS).
  3. Added the connector parameter ignoreUpdateBefore and made the main parameters compatible with flink-connector-jdbc.

-
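The 1.0.10 delete-then-insert fix can be illustrated with a toy flush routine: within one cached batch, operations on the same primary key must be applied strictly in arrival order, never regrouped by operation type, or a row deleted and then reinserted would vanish. Names are illustrative, not the connector's internals.

```python
def flush_batch(batch, table):
    """Apply cached operations strictly in arrival order, never grouped by type."""
    for op, key, row in batch:
        if op == "delete":
            table.pop(key, None)
        else:                      # insert/update
            table[key] = row
    return table
```

With the buggy regrouping (all inserts applied first, then all deletes), a delete-then-insert sequence on one key would leave the table empty; arrival order keeps the reinserted row.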

1.0.11

  1. The GaussDB(DWS) client write API validates database fields against the input schema, automatically adding any missing fields.
  2. Comparison fields can be configured so that a record is updated only when its comparison value exceeds the value already stored in the database.
  3. Logical deletion is also supported in Flink SQL.

dws-connector-flink adds a Scala 2.12 build for Flink 1.17.
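The comparison-field behavior in item 2 amounts to a conditional write. The sketch below is a hypothetical model (field and function names are not the client's API): an incoming row replaces the stored one only when its comparison column, such as an event timestamp, is greater, so late-arriving stale records cannot overwrite newer data.

```python
def write_if_newer(table, key, row, compare_field):
    """Keep the stored row unless the incoming row's comparison value is greater."""
    current = table.get(key)
    if current is None or row[compare_field] > current[compare_field]:
        table[key] = row
    return table
```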

1.1.0

Optimized the cache write performance of the GaussDB(DWS) client.

-

1.1.0.1

  1. The client now adapts bit-type values before saving them to the database.
  2. The client now allows null nvarchar values to be saved to the database.

-

1.1.0.2

Fixed issues:

  1. Fixed an error that occurred when a table column name was in uppercase.
  2. Resolved an upsert error when importing into a bigint column: the first record's value fit in an int while subsequent records were long values exceeding the int range.

-

1.1.0.3

Fixed issues:

  1. Fixed an error when reading binlogs for tables with capitalized names.
  2. Resolved errors that occurred when the source data contained \u0000 in copy mode.

-

2.0.0-r0~r2

The initial 2.x releases (r0 to r2) introduce the following enhancements:

  1. Data can be imported by connecting directly to DNs.
  2. The cache model is redesigned with partition-level caches at the table level, allowing multiple caches per table.
  3. Client initialization supports a properties configuration file.
  4. Upsert and merge operations proceed only after the previous data batch is fully imported, keeping the order of data import to the database consistent with the order of writes to the cache.
  5. Data type conversion is moved to the write cache, supporting both copy and upsert.

-
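The partition-level cache in item 2 can be sketched as follows (an illustrative model, not the actual 2.x implementation): each table owns several sub-caches, and a row's primary key hashes to a fixed partition, so all operations on one key stay ordered within a single cache while different keys can be flushed in parallel.

```python
class PartitionedTableCache:
    """Each table owns several sub-caches; a key always hashes to the same one."""
    def __init__(self, partitions=4):
        self.parts = [[] for _ in range(partitions)]

    def add(self, key, row):
        # Same key -> same partition, so per-key operation order is preserved.
        self.parts[hash(key) % len(self.parts)].append((key, row))

    def flush(self, sink):
        # In 2.x each partition could flush independently; here it is sequential.
        for part in self.parts:
            for key, row in part:
                sink(key, row)
            part.clear()
```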

2.0.0-r3

Fixed issues:

  1. Fixed a problem where reading binlogs in Flink SQL for a nonexistent table name exhausted GaussDB(DWS) cluster connections.
  2. Addressed the issue where table columns and schemas in uppercase could not be read during binlog reading.
  3. Fixed the problem where a single data record could be lost in case of a GaussDB(DWS) fault during the import of Flink SQL statements to the database.

Improved features:

  1. Introduced import delay and import speed indicators for Flink SQL statements and APIs.
  2. A unique index is now used automatically as the key when no primary key is present but a unique index exists.
  3. Added a label for JDBC connections.
  4. Enhanced the logic for converting time type fields.
  5. Updated to utilize the Map interface within the context of the DwsInvokeFunction interface.

-