Why Does the ImportTsv Tool Display "Permission denied"?
Question
Why does the ImportTsv tool fail with the error message "Permission denied" when the Linux user (for example, user omm) is the same as that of the RegionServer but the Kerberos user (for example, user admin) is different?
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=admin, access=WRITE, inode="/user/omm-bulkload/hbase-staging/partitions_cab16de5-87c2-4153-9cca-a6f4ed4278a6":hbase:hadoop:drwx--x--x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:342)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:315)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:231)
    at com.xxx.hadoop.adapter.hdfs.plugin.HWAccessControlEnforce.checkPermission(HWAccessControlEnforce.java:69)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1789)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1773)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1756)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2490)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2425)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2308)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:745)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:434)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:973)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2260)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2256)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2254)
Answer
The ImportTsv tool creates a partition file in the HBase temporary directory specified by hbase.fs.tmp.dir in the Client installation path/HBase/hbase/conf/hbase-site.xml file. Therefore, the client (Kerberos user) must have the rwx permission on this temporary directory before it can run ImportTsv. The default value of hbase.fs.tmp.dir is /user/${user.name}/hbase-staging, where ${user.name} is the OS username (user omm), so the default directory is /user/omm/hbase-staging. The client (Kerberos user, for example, user admin) does not have the rwx permission on that directory, which triggers the AccessControlException shown above.
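You can confirm the ownership and permissions of the staging directory from the client before applying a fix. The following is a minimal diagnostic sketch, assuming the client environment has already been sourced and that the directory owner and mode match the error message above:

# Authenticate as the Kerberos user, then inspect the staging directory itself.
kinit admin
hdfs dfs -ls -d /user/omm/hbase-staging
# Sample output (owner hbase, group hadoop, mode drwx--x--x):
# drwx--x--x   - hbase hadoop          0 2024-01-01 00:00 /user/omm/hbase-staging

Because user admin is neither the owner (hbase) nor granted write access by the group or other bits of drwx--x--x, the WRITE check in the NameNode permission checker fails and the exception above is thrown.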
To solve the preceding problem, perform the following steps:
- On the client, set hbase.fs.tmp.dir to a directory that the current Kerberos user owns (for example, /user/admin/hbase-staging), or grant the client (Kerberos user) the rwx permission on the currently configured directory; see the sketch after this list.
- Perform the ImportTsv operation again.
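From the client shell, the two options might look as follows. This is a sketch under assumptions: demo_table, the importtsv.columns mapping, and the input path /tmp/data.tsv are placeholder values, and the permission-granting command must be run by a user with HDFS administrative rights.

# Option 1: use a staging directory owned by the Kerberos user.
# hbase.fs.tmp.dir can also be set permanently in the client hbase-site.xml.
kinit admin
hdfs dfs -mkdir -p /user/admin/hbase-staging
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dhbase.fs.tmp.dir=/user/admin/hbase-staging \
  -Dimporttsv.columns=HBASE_ROW_KEY,info:name \
  demo_table /tmp/data.tsv

# Option 2: keep the configured directory and grant rwx to user admin,
# for example through an HDFS ACL (requires dfs.namenode.acls.enabled=true):
hdfs dfs -setfacl -m user:admin:rwx /user/omm/hbase-staging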