Updated on: 2025-08-05 GMT+08:00

What Should I Do If Oracle Log Parsing Fails When Reading Full or Incremental Data, with the Error Message Containing "DmlParserException: DML statement couldn't be parsed"?

Symptom

On a link with Oracle as the source, log parsing fails during full or incremental synchronization, and the JobManager or TaskManager logs report "DmlParserException: DML statement couldn't be parsed".

org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.parser.DmlParserException: DML statement couldn't be parsed. Please open a Jira issue with the statement 'update "UNKNOWN"."OBJ# 125509" set "COL 5" = HEXTORAW('64656661756c745fe6b299e58f91') where "COL 1" = HEXTORAW('c102') and "COL 2" = HEXTORAW('7364616166') and "COL 3" = HEXTORAW('61736466') and "COL 4" = HEXTORAW('64656661756c745fe6b299e58f91') and "COL 5" IS NULL;'.
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.parseDmlStatement(AbstractLogMinerEventProcessor.java:1156) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.lambda$handleDataEvent$5(AbstractLogMinerEventProcessor.java:928) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.memory.MemoryLogMinerEventProcessor.addToTransaction(MemoryLogMinerEventProcessor.java:287) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.handleDataEvent(AbstractLogMinerEventProcessor.java:923) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processRow(AbstractLogMinerEventProcessor.java:386) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processResults(AbstractLogMinerEventProcessor.java:289) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.process(AbstractLogMinerEventProcessor.java:211) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:308) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.source.reader.fetch.OracleStreamFetchTask$RedoLogSplitReadTask.execute(OracleStreamFetchTask.java:136) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.source.reader.fetch.OracleStreamFetchTask.execute(OracleStreamFetchTask.java:74) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.org.apache.inlong.sort.cdc.base.source.reader.external.IncrementalSourceStreamFetcher.lambda$submitTask$0(IncrementalSourceStreamFetcher.java:92) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_362]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_362]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_362]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_362]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
Caused by: org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.parser.DmlParserException: Index: 4, Size: 4table :ORA11G.MG.TABLE_F
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.parser.adg.LogminerAdgDmlParser.parse(LogminerAdgDmlParser.java:56) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    at org.apache.inlong.sort.cdc.oracle.shaded.io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.parseDmlStatement(AbstractLogMinerEventProcessor.java:1148) ~[blob_p-ca75cd32c9bb35af88377329bcc17d71710113cd-6f5aa5c9e20f0f793ca9f22e7f8d1fa5:?]
    ... 15 more

Possible Causes

During log parsing, the metadata of the Oracle table must be read in order to parse the column data in the logs. If a DDL change was made to the source Oracle table during the full data migration, or between the incremental migration start point and the time the incremental job started, the column data recorded in historical logs may no longer match the Oracle table metadata obtained at parse time, and parsing fails. Because the database does not retain historical metadata, the metadata as it was before the DDL change cannot be obtained to parse those logs.
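
For illustration, the sketch below (all table and column names are hypothetical) shows one way such a mismatch arises: an UPDATE is written to the redo log while the table has five columns, a column is then dropped, and a parser that starts mining after the DDL sees only four columns in the current metadata, producing an out-of-range index error such as the "Index: 4, Size: 4" shown above.

    -- Hypothetical example; replace the names with your own.
    CREATE TABLE mg.table_f (
        col1 NUMBER,
        col2 VARCHAR2(20),
        col3 VARCHAR2(20),
        col4 VARCHAR2(50),
        col5 VARCHAR2(50)
    );

    -- Written to the redo log as a five-column record.
    UPDATE mg.table_f SET col5 = 'default_value' WHERE col1 = 1;
    COMMIT;

    -- DDL executed before the job finishes mining the log above:
    -- the current metadata now describes only four columns, so the
    -- historical five-column redo record can no longer be mapped onto it.
    ALTER TABLE mg.table_f DROP COLUMN col5;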

Solution

  • Check whether the synchronized Oracle source table was changed during the full data migration, or between the incremental migration start point and the start of the incremental job (a query sketch follows this list). If a DDL change did occur in that window, re-migrate the table in full + incremental mode to ensure data consistency.
  • Avoid performing DDL changes on the synchronized Oracle source table during the full data migration or between the incremental migration start point and the start of the incremental job.
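
To check whether a DDL change occurred in the relevant window, you can query the Oracle data dictionary. A minimal sketch, using the owner and table name from the error log above (replace them with your own):

    -- LAST_DDL_TIME records when the object definition last changed.
    SELECT owner, object_name, object_type, last_ddl_time
      FROM dba_objects
     WHERE owner = 'MG'
       AND object_name = 'TABLE_F';

If LAST_DDL_TIME falls within the full migration window or between the incremental start point and the job start time, the table was changed during synchronization and should be re-migrated in full + incremental mode. Note that LAST_DDL_TIME is also refreshed by operations such as GRANT, so treat it as an indicator rather than definitive proof of a structural change.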