What Should I Do If a Job Fails to Start with "HoodieIndexException: Unsupported index type BUCKET" When Hudi Is the Destination?
Problem Description
After the customer added the index configuration hoodie.index.type = BUCKET to the Hudi properties, the job failed to start. The error message contains the keyword "HoodieIndexException: Unsupported index type BUCKET".
Full error message:
Caused by: com.huawei.clouds.dataarts.shaded.org.apache.hudi.exception.HoodieIndexException: Unsupported index type BUCKET
    at com.huawei.clouds.dataarts.shaded.org.apache.hudi.index.FlinkHoodieIndexFactory.createIndex(FlinkHoodieIndexFactory.java:61) ~[?:?]
    at com.huawei.clouds.dataarts.shaded.org.apache.hudi.client.HoodieFlinkWriteClient.createIndex(HoodieFlinkWriteClient.java:130) ~[?:?]
    at com.huawei.clouds.dataarts.shaded.org.apache.hudi.client.BaseHoodieWriteClient.<init>(BaseHoodieWriteClient.java:199) ~[?:?]
    at com.huawei.clouds.dataarts.shaded.org.apache.hudi.client.BaseHoodieWriteClient.<init>(BaseHoodieWriteClient.java:182) ~[?:?]
    at com.huawei.clouds.dataarts.shaded.org.apache.hudi.client.HoodieFlinkWriteClient.<init>(HoodieFlinkWriteClient.java:107) ~[?:?]
    at com.huawei.clouds.dataarts.migration.connector.hudi.client.WrappedHoodieFlinkWriteClient.<init>(WrappedHoodieFlinkWriteClient.java:72) ~[?:?]
    at com.huawei.clouds.dataarts.migration.connector.hudi.util.WrappedStreamerUtil.createWriteClient(WrappedStreamerUtil.java:48) ~[?:?]
    at com.huawei.clouds.dataarts.migration.connector.hudi.sink.StreamWriteOperatorCoordinator.lambda$start$2(StreamWriteOperatorCoordinator.java:252) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_362]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_362]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1890) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at com.huawei.clouds.dataarts.migration.connector.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:242) ~[?:?]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:164) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1012) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:929) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612) ~[flink-rpc-akka_2d4d8fb7-58d5-47ce-b0b0-9250f1bf4d36.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_2d4d8fb7-58d5-47ce-b0b0-9250f1bf4d36.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611) ~[flink-rpc-akka_2d4d8fb7-58d5-47ce-b0b0-9250f1bf4d36.jar:1.15.0-h0.cbu.dli.321.r2]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_2d4d8fb7-58d5-47ce-b0b0-9250f1bf4d36.jar:1.15.0-h0.cbu.dli.321.r2]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-scala_2.12-1.15.0-h0.cbu.dli.321.r2.jar:1.15.0-h0.cbu.dli.321.r2]
Cause Analysis
Real-time integration jobs write to Hudi through the Hudi connector, which runs Flink on Hudi. Properties must therefore use the Flink on Hudi option names: the Hudi core option hoodie.index.type corresponds to the Flink option index.type, so hoodie.index.type is not recognized and the index type falls through to the unsupported path.
Solution
In the migration job's Hudi property configuration, change the option name from hoodie.index.type to index.type, keeping the value BUCKET.
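For reference, a minimal sketch of the corrected property configuration, assuming a Flink on Hudi sink. The bucket-count option hoodie.bucket.index.num.buckets and its value of 4 are illustrative additions for a typical BUCKET index setup, not part of the original fix:

    # Flink on Hudi uses index.type, not the Hudi core name hoodie.index.type
    index.type = BUCKET
    # Illustrative: number of buckets per partition for the BUCKET index;
    # choose a value that matches your table's data volume
    hoodie.bucket.index.num.buckets = 4

With the option name corrected, the Flink write client should recognize the BUCKET index type at startup and the HoodieIndexException should no longer occur.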