Failed to Create or Delete a Table in Spark Beeline
Issue
When a large number of users are frequently created or deleted in the cluster, some of these users occasionally fail to create or delete tables in Spark Beeline.
Symptom
The statement used to create a table is as follows:
CREATE TABLE wlg_test001 (start_time STRING, value INT);
The following error message is displayed:
Error: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Failed to grant permission on HDFSjava.lang.reflect.UndeclaredThrowableException); (state=,code=0)
Cause Analysis
- View the metastore logs.
- View the HDFS logs.
- Compare permissions on the two table directories (test001 is a table created by a user in the abnormal state; test002 is a table created by a user in the normal state); a command sketch for this check follows the list.
- An error similar to the following is reported when a table is dropped:
0: jdbc:hive2://192.168.1.42:10000/> drop table dataplan_modela_csbch2;
Error: Error while compiling statement: FAILED: SemanticException Unable to fetch table dataplan_modela_csbch2. java.security.AccessControlException: Permission denied: user=CSB_csb_3f8_x48ssrbt, access=READ, inode="/user/hive/warehouse/hive_csb_csb_3f8_x48ssrbt_5lbi2edu.db/dataplan_modela_csbch2":spark:hive:drwx------
- Analyze the cause.
The users created by default during cluster creation share the same UID, which causes user-identity conflicts. The conflict is triggered when a large number of users are created; as a result, the Hive user occasionally does not have the permission to create tables.
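A minimal sketch of the permission comparison mentioned above, assuming the tables sit under the default warehouse path (the database directory name below is a placeholder, not taken from the original case):

# Compare owner, group, and mode of the two table directories
hdfs dfs -ls /user/hive/warehouse/<db_name>.db | grep -E 'test001|test002'

In the abnormal case, the test001 directory shows an unexpected owner or a restrictive mode (as in the drop error above, where the inode is owned by spark:hive with mode drwx------), while the test002 directory shows the expected owner and permissions.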
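Because the root cause is duplicate UIDs, the conflict can be confirmed with standard OS commands; the following lines are an illustrative sketch, not part of the original analysis:

# Show the UID of a service user, for example the Hive user
id hive

# Print every UID that is assigned to more than one account
getent passwd | awk -F: '{print $3}' | sort | uniq -d

If the second command prints any UID, at least two accounts share it, which matches the cause described above.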
Procedure
Restart the sssd process of the cluster.
Run the service sssd restart command as the root user to restart the sssd process. Then run the ps -ef | grep sssd command to check whether the sssd process is running properly.
In normal cases, the /usr/sbin/sssd process and its three sub-processes, /usr/libexec/sssd/sssd_be, /usr/libexec/sssd/sssd_nss, and /usr/libexec/sssd/sssd_pam, are running.
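For reference, the step above as a command sequence (run as root; the extra grep -v simply filters the grep process itself out of the listing):

# Restart the sssd service
service sssd restart

# Verify that /usr/sbin/sssd and the sssd_be, sssd_nss, and sssd_pam sub-processes are running
ps -ef | grep sssd | grep -v grep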