
Why Does the LoadIncrementalHFiles Tool Fail with a "Permission denied" Exception When a Node Inside the Cluster Is Used for Bulk Import?

Question

A Linux user is manually created on a normal cluster, and a DataNode node inside the cluster is used to perform a bulk import. Why does the LoadIncrementalHFiles tool fail with a "Permission denied" exception?

2020-09-20 14:53:53,808 WARN  [main] shortcircuit.DomainSocketFactory: error creating DomainSocket
java.net.ConnectException: connect(2) error: Permission denied when trying to connect to '/var/run/FusionInsight-HDFS/dn_socket'
	at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
	at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:256)
	at org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:168)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:804)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:526)
	at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:785)
	at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:722)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:483)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:360)
	at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:663)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:594)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:845)
	at java.io.DataInputStream.readFully(DataInputStream.java:195)
	at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:401)
	at org.apache.hadoop.hbase.io.hfile.HFile.isHFileFormat(HFile.java:651)
	at org.apache.hadoop.hbase.io.hfile.HFile.isHFileFormat(HFile.java:634)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.visitBulkHFiles(LoadIncrementalHFiles.java:1090)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.discoverLoadQueue(LoadIncrementalHFiles.java:1006)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.prepareHFileQueue(LoadIncrementalHFiles.java:257)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:364)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:1263)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:1276)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:1311)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.hbase.tool.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:1333)

Answer

If the client that the LoadIncrementalHFiles tool depends on is installed inside the cluster, on the same node as a DataNode, HDFS enables short-circuit read during tool execution to improve performance. Short-circuit read depends on the "/var/run/FusionInsight-HDFS" directory (specified by "dfs.domain.socket.path"), whose default permission is 750. The current Linux user has no permission on this directory, which causes the failure.
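
You can verify this cause on the affected node by checking which socket path is configured and what permission the directory carries. The following is a minimal sketch; the owner and group in the sample output (omm:ficommon) are assumptions based on a typical FusionInsight deployment and may differ in your cluster.

    # Query the domain socket path used for short-circuit read (read from the client configuration)
    hdfs getconf -confKey dfs.domain.socket.path

    # Inspect the directory permission; 750 grants access only to the owner and members of the group
    ls -ld /var/run/FusionInsight-HDFS
    # Sample output (owner and group are illustrative):
    # drwxr-x--- 2 omm ficommon 60 Sep 20 14:50 /var/run/FusionInsight-HDFS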

Either of the following methods can solve this problem:

Method 1: Create a new user (recommended).

  1. Create a new user on the Manager page. By default, the groups of the new user include the ficommon group.

    [root@xxx-xxx-xxx-xxx ~]# id test
    uid=20038(test) gid=9998(ficommon) groups=9998(ficommon)

  2. Run ImportData again (see the rerun sketch after Method 2).

Method 2: Change the group of the current user.

  1. Add the user to the ficommon group.

    [root@xxx-xxx-xxx-xxx ~]# usermod -a -G ficommon test
    [root@xxx-xxx-xxx-xxx ~]# id test
    uid=2102(test) gid=2102(test) groups=2102(test),9998(ficommon)

  2. Run ImportData again, as sketched below.
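
With either method applied, the bulk load can be retried. The sketch below assumes the user "test" from the examples above; the HFile directory "/tmp/hfile_output" and table name "t1" are placeholders for your actual values. Note that "su - test" starts a fresh login session, which is required for the new ficommon membership to take effect after usermod.

    # Start a new session as the user (picks up the updated group membership)
    su - test

    # Re-run the bulk load; replace the HFile directory and table name with your own
    hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles /tmp/hfile_output t1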