What Should I Do If Oracle Fails to Parse Logs When Reading Full or Incremental Data and the Error Message "DmlParserException: DML statement couldn't be parsed" Is Displayed?

Symptom

During full or incremental synchronization, the source Oracle link fails to parse the redo logs, and the JobManager or TaskManager log contains the error message "DmlParserException: DML statement couldn't be parsed".

org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.parser.DmlParserException: DML statement couldn't be parsed. Please open a Jira issue with the statement 'update "UNKNOWN"."OBJ# 125509" set "COL 5" = HEXTORAW('64656661756c745fe6b299e58f91') where "COL 1" = HEXTORAW('c102') and "COL 2" = HEXTORAW('7364616166') and "COL 3" = HEXTORAW('61736466') and "COL 4" = HEXTORAW('64656661756c745fe6b299e58f91') and "COL 5" IS NULL;'.
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.parseDmlStatement(AbstractLogMinerEventProcessor.java:1156) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.lambda$handleDataEvent$5(AbstractLogMinerEventProcessor.java:928) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.memory.MemoryLogMinerEventProcessor.addToTransaction(MemoryLogMinerEventProcessor.java:287) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.handleDataEvent(AbstractLogMinerEventProcessor.java:923) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processRow(AbstractLogMinerEventProcessor.java:386) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processResults(AbstractLogMinerEventProcessor.java:289) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.process(AbstractLogMinerEventProcessor.java:211) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:308) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.source.reader.fetch.OracleStreamFetchTask$RedoLogSplitReadTask.execute(OracleStreamFetchTask.java:136) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.source.reader.fetch.OracleStreamFetchTask.execute(OracleStreamFetchTask.java:74) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.org.apache.inlong.sort.cdc.base.source.reader.external.IncrementalSourceStreamFetcher.lambda$submitTask$0(IncrementalSourceStreamFetcher.java:92) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_362]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_362]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_362]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_362]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
Caused by: org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.parser.DmlParserException: Index: 4, Size: 4 table :ORA11G.MG.TABLE_F
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.parser.adg.LogminerAdgDmlParser.parse(LogminerAdgDmlParser.java:56) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.parseDmlStatement(AbstractLogMinerEventProcessor.java:1148) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    ... 15 more
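
In the trace, LogMiner reports the changed object as "UNKNOWN"."OBJ# 125509" because it could not resolve the object ID against the current dictionary. The numeric ID can usually be mapped back to the affected table; the following lookup is a sketch that reuses the ID 125509 from the error above:

-- Map the OBJ# reported by LogMiner back to a table name.
-- 125509 is taken from the error message above; substitute your own ID.
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE
FROM   DBA_OBJECTS
WHERE  OBJECT_ID = 125509
   OR  DATA_OBJECT_ID = 125509;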

Possible Causes

To parse column data from the redo logs, the job must read the metadata of the Oracle table. If the source table underwent a DDL change during the full data migration, or between the incremental migration position and the time the incremental job started, the column data in the historical logs no longer matches the table metadata obtained by the job, and parsing fails. Because the database does not retain historical metadata, the table definition that existed before the DDL change cannot be recovered to parse those logs.
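
For illustration only (hypothetical table and column definitions), a sequence like the following can produce the "Index: 4, Size: 4" mismatch shown in the trace above: the redo log records an UPDATE that touches five columns, but by the time the job parses it, a DDL change has left the table with only four.

-- Hypothetical table; the redo log entry is written while it has five columns.
CREATE TABLE MG.TABLE_F (col1 NUMBER, col2 VARCHAR2(20), col3 VARCHAR2(20),
                         col4 VARCHAR2(40), col5 VARCHAR2(40));
UPDATE MG.TABLE_F SET col5 = 'default_value' WHERE col1 = 1;  -- logged with five columns
ALTER TABLE MG.TABLE_F DROP COLUMN col5;                      -- DDL change afterwards
-- When the job later parses the logged UPDATE, the current metadata has only
-- four columns, so the reference to the fifth column cannot be resolved.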

Solution

  • Check whether the source Oracle table underwent a DDL change during the full data migration or between the incremental migration position and the start of the incremental job (a query sketch follows this list). If it did, migrate the entire table and its incremental data again to ensure data consistency.
  • Avoid DDL changes to the source Oracle table during the full data migration or between the incremental migration position and the start of the incremental job.
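
To check for a DDL change (first item above), LAST_DDL_TIME in DBA_OBJECTS can be compared against the migration window. This is a sketch that uses the schema and table names from the error above for illustration:

-- Check when the table definition last changed. Compare LAST_DDL_TIME against
-- the full migration window or the incremental migration position.
SELECT OWNER, OBJECT_NAME, LAST_DDL_TIME
FROM   DBA_OBJECTS
WHERE  OWNER = 'MG'
  AND  OBJECT_NAME = 'TABLE_F'
  AND  OBJECT_TYPE = 'TABLE';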