Updated on 2024-07-19 GMT+08:00

DWS-Connector Version Description

Table 1 Change History (Version / Change Description / Remarks)

Version 1.0
Change description: This is the first official release.
Remarks: dws-connector-flink is released only for Scala 2.11 with Flink 1.12.

Version 1.0.2
Change description: Optimized the exception retry logic of dwsClient. Instead of retrying on all exceptions, retries now occur only for five types of exceptions: connection exceptions, database read-only, timeouts, too many connections, and lock exceptions.
Remarks: Supported dws-connector-flink versions:
- Scala 2.11: Flink 1.12 and 1.13
- Scala 2.12: Flink 1.12, 1.13, and 1.15
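The retry-only-on-specific-exceptions logic described for 1.0.2 can be sketched as follows. This is an illustrative example, not the actual dwsclient code: the SQLSTATE class codes used to classify the five categories are assumptions, and the real classification in dwsclient may differ.

```java
import java.sql.SQLException;
import java.util.Set;

/**
 * Illustrative sketch of retrying only on a whitelist of error categories,
 * as described for dwsclient 1.0.2. The SQLSTATE class codes below are
 * assumptions chosen to mirror the five categories; the real dwsclient
 * classification may differ.
 */
public class SelectiveRetry {

    // Hypothetical mapping of "retryable" SQLSTATE class codes.
    private static final Set<String> RETRYABLE_SQLSTATE_CLASSES = Set.of(
            "08",  // connection exception
            "25",  // invalid transaction state (e.g. database read-only)
            "57",  // operator intervention (e.g. timeout/cancel)
            "53",  // insufficient resources (e.g. too many connections)
            "40"); // transaction rollback (e.g. lock conflicts)

    static boolean isRetryable(SQLException e) {
        String state = e.getSQLState();
        return state != null && state.length() >= 2
                && RETRYABLE_SQLSTATE_CLASSES.contains(state.substring(0, 2));
    }

    /** Runs the action, retrying up to maxRetries times, but only for retryable errors. */
    static void runWithRetry(int maxRetries, SqlAction action) throws SQLException {
        for (int attempt = 0; ; attempt++) {
            try {
                action.run();
                return;
            } catch (SQLException e) {
                if (!isRetryable(e) || attempt >= maxRetries) {
                    throw e; // non-retryable error, or retries exhausted
                }
            }
        }
    }

    @FunctionalInterface
    interface SqlAction {
        void run() throws SQLException;
    }
}
```

The point of the whitelist is that transient conditions (broken connections, lock conflicts, resource exhaustion) are worth retrying, while deterministic failures such as syntax or constraint errors would fail again on every attempt.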

Version 1.0.3
Change description:
1. Resolved known issues and optimized performance.
2. Added support for the update write mode.
3. Added support for unique indexes.
4. Because the update mode is now supported, write operations use the write interface to avoid ambiguity with the upsert interface in dwsClient. The write interface is recommended for both types of writes.
Remarks: -

Version 1.0.4
Change description: Increased the SQL execution timeout to avoid prolonged blocking.
Remarks: -

Version 1.0.5
Change description: Fixed an issue where duplicate data was lost when written to a table without a primary key.
Remarks: -

Version 1.0.6
Change description:
1. Optimized the cache saving logic to improve throughput when CPU resources are insufficient.
2. Reused temporary tables to avoid creating them repeatedly in COPY MERGE/UPSERT scenarios.
3. Added the CSV format for COPY so that complex data containing special characters can still be imported into the database.
Remarks: -

Version 1.0.7
Change description:
1. Supported retrying writes that fail while the database is restarting.
2. Added the AS mode for creating temporary tables, solving the problem that COPY MERGE/UPSERT could not be used on tables with primary keys.
3. Made database fields case-insensitive by default.
4. Added a primary-key printing parameter to Flink SQL statements to help locate problems when data is missing.
Remarks: -

Version 1.0.8
Change description:
1. Fixed an issue where the case of the Flink SQL primary key had to match that in the database.
2. Added a parameter for setting sink concurrency.
Remarks: -

Version 1.0.9
Change description: Optimized the import of time-type data.
Remarks: -

Version 1.0.10
Change description:
1. Resolved data loss caused by concurrent delete and insert operations on the client. This could occur when the insert operation ran before the delete operation and the same primary key was deleted and then inserted within the same cache batch.
2. Resolved occasional data loss when Kafka writes data to GaussDB(DWS).
3. Added the connector parameter ignoreUpdateBefore and made some main parameters compatible with flink-connector-jdbc.
Remarks: -
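As an illustration of where the 1.0.10 ignoreUpdateBefore parameter would appear, the sketch below shows a hypothetical Flink SQL sink definition. Only ignoreUpdateBefore is named by this changelog; every other option name here (the connector identifier, url, tableName, credentials) is an assumption for illustration, not confirmed by this document.

```sql
-- Hypothetical sink definition; option names other than
-- 'ignoreUpdateBefore' are assumptions, not confirmed by this changelog.
CREATE TABLE dws_sink (
  id   BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector'  = 'dws',                      -- assumed connector identifier
  'url'        = 'jdbc:gaussdb://host:8000/db',  -- assumed option name
  'tableName'  = 'public.target',            -- assumed option name
  'username'   = 'user',
  'password'   = '***',
  'ignoreUpdateBefore' = 'true'              -- parameter added in 1.0.10
);
```

Setting ignoreUpdateBefore follows the same idea as flink-connector-jdbc, which the changelog says some main parameters are now compatible with: the sink skips UPDATE_BEFORE changelog rows and relies on the primary key to apply the subsequent UPDATE_AFTER row.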