Updated: 2025-11-03 GMT+08:00

What Do I Do If a Job with Hudi as the Destination Fails to Start and the Error Message Contains "Table XXX is partitioned by field, which is conflict with input options"?

Symptom

A job fails to start, and the error message contains "Table XXX is partitioned by field , which is conflict with input options."

Detailed error information:

Caused by: com.huawei.clouds.dataarts.shaded.org.apache.hudi.exception.
HoodieException: Table rds_source_tbl_961_0726 is partitioned by field , which is conflict with input options.
at com.huawei.clouds.dataarts.migration.connector.hudi.table.HoodieTableFactory.lambda$reconcileHoodieOptions$12(HoodieTableFactory.java:753) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_362]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_362]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1890) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at com.huawei.clouds.dataarts.migration.connector.hudi.table.HoodieTableFactory.reconcileHoodieOptions(HoodieTableFactory.java:666) ~[?:?]
at com.huawei.clouds.dataarts.migration.connector.hudi.table.HoodieTableFactory.setupHoodieTableOptions(HoodieTableFactory.java:623) ~[?:?]
at com.huawei.clouds.dataarts.migration.connector.hudi.table.HoodieTableFactory.createDynamicTableSink(HoodieTableFactory.java:192) ~[?:?]
at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:304) ~[flink-table-api-java-uber-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:434) ~[flink-table-planner_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:233) ~[flink-table-planner_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:182) ~[flink-table-planner_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.Iterator.foreach(Iterator.scala:937) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.Iterator.foreach$(Iterator.scala:937) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.IterableLike.foreach(IterableLike.scala:70) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.IterableLike.foreach$(IterableLike.scala:69) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.TraversableLike.map(TraversableLike.scala:233) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.TraversableLike.map$(TraversableLike.scala:226) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:182) ~[flink-table-planner_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1678) ~[flink-table-api-java-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:785) ~[flink-table-api-java-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:108) ~[flink-table-api-java-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
at org.apache.inlong.sort.parser.result.FlinkSqlParseResult.executeLoadSqls(FlinkSqlParseResult.java:83) ~[?:?]
at org.apache.inlong.sort.parser.result.FlinkSqlParseResult.execute(FlinkSqlParseResult.java:62) ~[?:?]
at com.huawei.clouds.dataarts.migration.engine.converter.Entrance.run(Entrance.java:152) ~[?:?]

Cause Analysis

When configuring the job, the partition settings must match the partitioning actually defined when the Hudi table was created.

When the job starts, the partition information is validated: the check verifies whether the table is partitioned and whether the partition field names match the job configuration. If they do not, the job fails with this exception.
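The startup validation described above can be illustrated as a simple comparison of two partition field lists. This is a minimal sketch for understanding the check, not DataArts Studio's actual implementation; the function name and error wording are modeled on the reported exception:

```python
def check_partition_consistency(table_partition_fields, job_partition_fields):
    """Mimic the startup check: the partition fields configured in the job
    must exactly match the Hudi table's actual partition fields
    (same names, same order)."""
    if table_partition_fields != job_partition_fields:
        raise ValueError(
            "Table is partitioned by field %s, which is conflict with "
            "input options %s"
            % (",".join(table_partition_fields),
               ",".join(job_partition_fields)))
    return True

# A consistent configuration passes:
check_partition_consistency(["dt"], ["dt"])  # returns True

# A mismatch (e.g. the job configures no partition for a partitioned
# table) raises the error seen at job startup:
try:
    check_partition_consistency(["dt"], [])
except ValueError as e:
    print(e)
```

Note that an empty field list on either side also counts as a mismatch, which is why a job that omits partition settings entirely still fails against a partitioned table.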

Solution

  1. Check the Hudi table's CREATE TABLE statement to determine how the table is actually partitioned.
  2. On the job editing page, modify the Hudi table attribute configuration ("Hudi表属性配置").
  3. Ensure that the job's partition configuration matches the partitioning defined when the Hudi table was created.
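Besides the CREATE TABLE statement, a Hudi table records its partition fields in the `hoodie.table.partition.fields` entry of the `.hoodie/hoodie.properties` file under the table's base path. The following is a minimal sketch of reading that entry from a locally accessible copy of the file (the file path is illustrative; `hoodie.properties` uses the Java properties format):

```python
def read_partition_fields(properties_path):
    """Parse a hoodie.properties file (Java .properties format) and
    return the table's partition fields as a list of names."""
    props = {}
    with open(properties_path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comment lines.
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    # The value is a comma-separated list; an absent or empty value
    # means the table is unpartitioned.
    fields = props.get("hoodie.table.partition.fields", "")
    return [name for name in fields.split(",") if name]

# Illustrative usage (path is hypothetical):
# fields = read_partition_fields("/data/hudi/rds_source_tbl/.hoodie/hoodie.properties")
```

Comparing the returned list against the partition fields configured in the job makes the conflict visible before restarting the job.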
