
Task Execution Fails Because the Number of Input Files Exceeds the Configured Limit

Background and Symptom

When a query is executed in Hive, the error Job Submission failed with exception 'java.lang.RuntimeException(input file number exceeded the limits in the conf;input file num is: 2380435,max heap memory is: 16892035072,the limit conf is: 500000/4)' is reported. The specific values in this error vary with the actual environment. The detailed error information is as follows:

ERROR : Job Submission failed with exception 'java.lang.RuntimeException(input file numbers exceeded the limits in the conf;
 input file num is: 2380435 ,
 max heap memory is: 16892035072 ,
 the limit conf is: 500000/4)'
java.lang.RuntimeException: input file numbers exceeded the limits in the conf;
 input file num is: 2380435 ,
 max heap memory is: 16892035072 ,
 the limit conf is: 500000/4
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.checkFileNum(ExecDriver.java:545)
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:430)
	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:158)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:101)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1965)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1723)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1475)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1283)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1278)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:167)
	at org.apache.hive.service.cli.operation.SQLOperation.access$200(SQLOperation.java:75)
	at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:245)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:258)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=1)

Cause Analysis

Before a MapReduce job is submitted, the number of input files is checked. The allowed maximum is defined as a ratio between a configured file-count limit and the HiveServer maximum heap memory. For example, 500000/4 (the default) means that at most 500,000 input files are allowed for every 4 GB of maximum heap memory. If the actual number of input files exceeds this limit, the above error occurs. In the preceding error, the maximum heap memory is 16892035072 bytes (about 15.7 GB), so roughly (15.7 / 4) × 500,000 ≈ 1.97 million input files are allowed, which is fewer than the job's 2,380,435 input files.
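
The following Java sketch illustrates this check. It is for illustration only: the actual logic lives in ExecDriver.checkFileNum (visible in the stack trace above), and the exact constants and rounding used here are assumptions.

// Minimal sketch of the pre-submission input-file check (illustration only).
// Assumption: the allowed file count scales linearly with heap size, i.e.
//   allowedFiles = limitFiles * heapGb / limitHeapGb.
public class InputFileLimitCheck {
    public static void main(String[] args) {
        long inputFileNum = 2_380_435L;        // from the error log above
        long maxHeapBytes = 16_892_035_072L;   // e.g. Runtime.getRuntime().maxMemory()
        long limitFiles   = 500_000L;          // the "500000" part of "500000/4"
        long limitHeapGb  = 4L;                // the "/4" part of "500000/4"

        double heapGb = maxHeapBytes / (1024.0 * 1024 * 1024);
        long allowedFiles = (long) (limitFiles * heapGb / limitHeapGb);

        System.out.printf("allowed=%d, actual=%d%n", allowedFiles, inputFileNum);
        if (inputFileNum > allowedFiles) {
            // Mirrors the RuntimeException thrown by the real check.
            throw new RuntimeException("input file numbers exceeded the limits in the conf");
        }
    }
}

Running this with the values from the error log prints approximately allowed=1966491, actual=2380435, which is why the job submission fails.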

Solution

  1. Go to the Hive service configuration page:

    • For versions earlier than MRS 3.x: click the cluster name to go to the cluster details page and choose "Components > Hive > Service Configuration". In the "Basic Configuration" drop-down list, select "All Configurations".

      If the "Components" tab is not displayed on the cluster details page, complete IAM user synchronization first (on the "Dashboard" tab of the cluster details page, click "Synchronize" on the right of "IAM User Sync").

    • For MRS 3.x and later versions: log in to FusionInsight Manager and choose "Cluster > Services > Hive > Configurations > All Configurations".

  2. Search for the hive.mapreduce.input.files2memory configuration item and change its value to a suitable one based on the actual memory and job conditions (see the sizing sketch after this list).
  3. Save the configuration and restart the affected services or instances.
  4. If the problem persists after the adjustment, adjust the GC parameters of HiveServer to appropriate values based on service requirements.
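
To choose a value for hive.mapreduce.input.files2memory in step 2, you can invert the formula from the cause analysis. The following sketch is illustrative only: it assumes the same linear relationship between the configured value, the heap size, and the allowed file count, and the 20% headroom is an arbitrary safety margin, not a product recommendation.

// Illustrative sizing helper for step 2 (a sketch under stated assumptions,
// not the product's own logic): given the expected number of input files and
// the HiveServer maximum heap, compute a hive.mapreduce.input.files2memory
// value large enough for the job, assuming allowedFiles = value * heapGb / 4.
public class Files2MemorySizing {
    static long requiredFiles2Memory(long expectedInputFiles, long maxHeapBytes) {
        double heapGb = maxHeapBytes / (1024.0 * 1024 * 1024);
        // Smallest value that admits the expected file count, rounded up.
        long minimum = (long) Math.ceil(expectedInputFiles * 4 / heapGb);
        return (long) (minimum * 1.2); // 20% headroom, an arbitrary choice
    }

    public static void main(String[] args) {
        // Values taken from the error log in this document.
        System.out.println(requiredFiles2Memory(2_380_435L, 16_892_035_072L));
        // Prints roughly 726000: with a ~15.7 GB heap, a value of about
        // 730000 would admit the 2,380,435 input files of the failed job.
    }
}

If raising the limit is undesirable, increasing the HiveServer maximum heap (see step 4) raises the allowed file count by the same formula.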