How Do I View the Hive Table Created by Another User?
Updated on 2023-09-05 GMT+08:00
Assume user A needs to view a table created by user B.

Versions earlier than MRS 3.x:
- Log in to MRS Manager and choose System > Permission > Manage Role.
- Click Create Role, and set Role Name and Description.
- In the Permission table, choose Hive > Hive Read Write Privileges.
- In the database list, click the name of the database where the table created by user B is stored. The table is displayed.
- In the Permission column of the table created by user B, select SELECT.
- Click OK, and return to the Role page.
- Choose System > Manage User. Locate the row containing user A, click Modify to bind the new role to user A, and click OK. After about 5 minutes, user A can access the table created by user B.
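On clusters where SQL-standard authorization is in use, a Hive administrator could also grant the same permission from the Hive client instead of the UI. A minimal sketch, assuming hypothetical example names (database default, table test, role hive_read_role):

```shell
# Hypothetical names; replace with your actual database, table, and role.
DB="default"
TABLE="test"
ROLE="hive_read_role"
# Build the GRANT statement that gives the role read access to user B's table.
GRANT_SQL="GRANT SELECT ON TABLE ${DB}.${TABLE} TO ROLE ${ROLE};"
# On a cluster node with the Hive client environment loaded, it would be run as:
# beeline -e "${GRANT_SQL}"
echo "${GRANT_SQL}"
```

The role still has to be bound to user A (as in the steps above) for the grant to matter.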
MRS 3.x or later:
- Log in to FusionInsight Manager and choose Cluster > Services > Hive. On the page that is displayed, click More and check whether Enable Ranger is grayed out. If it is grayed out, Ranger authentication is enabled; use the Ranger policy steps later in this section. If it is not grayed out, perform the following role-based steps.
- Log in to FusionInsight Manager and choose System > Permission > Role.
- Click Create Role, and set Role Name and Description.
- In the Configure Resource Permission table, choose Name of the desired cluster > Hive > Hive Read Write Privileges.
- In the database list, click the name of the database where the table created by user B is stored. The table is displayed.
- In the Permission column of the table created by user B, select Select.
- Click OK, and return to the Role page.
- Choose Permission > User. On the Local User page that is displayed, locate the row containing user A, click Modify in the Operation column to bind the new role to user A, and click OK. After about 5 minutes, user A can access the table created by user B.
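Once the role binding has taken effect, user A can confirm access from the Hive client. A sketch with the same hypothetical example names (database default, table test):

```shell
# Example database and table created by user B; adjust as needed.
DB="default"
TABLE="test"
CHECK_SQL="SELECT * FROM ${DB}.${TABLE} LIMIT 10;"
# In a security-mode cluster, run kinit as user A first, then:
# beeline -e "${CHECK_SQL}"
echo "${CHECK_SQL}"
```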
- If Ranger authentication is enabled, perform the following steps to add a Ranger access control policy for Hive:
- Log in to FusionInsight Manager as a Hive administrator and choose Cluster > Services. On the page that is displayed, choose Ranger. On the displayed page, click the URL next to Ranger WebUI to go to the Ranger management page.
- On the home page, click the component plug-in name in the HADOOP SQL area, for example, Hive.
- On the Access tab page, click Add New Policy to add a Hive permission control policy.
- In the Create Policy dialog box that is displayed, set the following parameters:
- Policy Name: Enter a policy name, for example, table_test_hive.
- database: Enter or select the database where the table created by user B is stored, for example, default.
- table: Enter or select the table created by user B, for example, test.
- column: Enter or select a column, for example, *.
- In the Allow Conditions area, click Select User, select user A, click Add Permissions, and select select.
- Click Add.
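The same Hive policy can also be created through Ranger's public REST API rather than the web UI. A sketch only: the service name hacluster_hive, the user name userA, and the Ranger host and credentials are placeholders, not values from this procedure.

```shell
# Write the policy definition to a file. All names here are examples.
cat > /tmp/table_test_hive.json <<'EOF'
{
  "service": "hacluster_hive",
  "name": "table_test_hive",
  "policyType": 0,
  "resources": {
    "database": {"values": ["default"]},
    "table": {"values": ["test"]},
    "column": {"values": ["*"]}
  },
  "policyItems": [
    {"users": ["userA"],
     "accesses": [{"type": "select", "isAllowed": true}]}
  ]
}
EOF
# Against a live Ranger instance (URL and credentials are placeholders):
# curl -u admin -H "Content-Type: application/json" \
#      -d @/tmp/table_test_hive.json \
#      "https://<ranger-host>:<port>/service/public/v2/api/policy"
```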
- Then perform the following steps to add a Ranger access control policy for HDFS:
- Log in to FusionInsight Manager as user rangeradmin and choose Cluster > Services. On the page that is displayed, choose Ranger. On the displayed page, click the URL next to Ranger WebUI to go to the Ranger management page.
- On the home page, click the component plug-in name in the HDFS area, for example, hacluster.
- Click Add New Policy to add an HDFS permission control policy.
- In the Create Policy dialog box that is displayed, set the following parameters:
- Policy Name: Enter a policy name, for example, tablehdfs_test.
- Resource Path: Set this parameter to the HDFS path where the table created by user B is stored, for example, /user/hive/warehouse/&lt;database name&gt;/&lt;table name&gt;.
- In the Allow Conditions area, select user A for Select User, click Add Permissions in the Permissions column, and select Read and Execute.
- Click Add.
- View basic information about the policy in the policy list. After the policy takes effect, user A can view the table created by user B.
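After the HDFS policy is in place, user A can verify read access to the table's storage directory. A sketch assuming the default Hive warehouse layout, where a non-default database directory carries a .db suffix; the actual location can be confirmed with DESCRIBE FORMATTED on the table.

```shell
# Example names; the warehouse layout may differ on your cluster,
# so confirm the real path before relying on it.
DB="default"
TABLE="test"
TABLE_PATH="/user/hive/warehouse/${DB}.db/${TABLE}"
# As user A on a cluster node:
# hdfs dfs -ls "${TABLE_PATH}"
echo "${TABLE_PATH}"
```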