Adding a Ranger Access Permission Policy for Impala

Updated on 2024-10-08 GMT+08:00
NOTE:

This section applies only to MRS 3.1.5 or later.

Scenario

In an MRS cluster created with Ranger installed, Hive and Impala permission control is integrated into Ranger, and Impala reuses the Hive permission policies. This section describes how to add a Hive policy in Ranger so that it takes effect for Impala.

Prerequisites

  • The Ranger service has been installed and is running properly.
  • You have created users, user groups, or roles for which you want to configure permissions.

Procedure

  1. For normal clusters with Kerberos authentication disabled, check whether Ranger authentication is enabled for Hive and Impala.

    If Ranger authentication is not enabled, enable it for Hive, restart the Hive service, and then restart the Impala service; next, enable Ranger authentication for Impala and restart the Impala service again. For security clusters with Kerberos authentication enabled, Ranger authentication is enabled by default, so you can skip this step.

  2. For normal clusters, log in to FusionInsight Manager, choose Cluster > Services > Ranger, choose Configurations > All Configurations, and click UserSync(Role). Add the following configuration parameter to the custom configuration item ranger.usersync.config.expandor and restart Ranger. Skip this step for security clusters with Kerberos authentication enabled.

    Parameter                    Value
    ranger.usersync.sync.source  ldap

  3. Log in to the Ranger web UI as the Ranger administrator rangeradmin. For details, see Logging In to the Ranger Web UI. Select Hive.

  4. Add an access control policy.

    1. In the HADOOP SQL area, click the added service Hive.
    2. Click Add New Policy to add an access control policy.
    3. Set the parameters according to Table 1. Use the default values for the parameters that are not listed in the table.
      Table 1 Parameters

      • Policy Name: name of the policy. Example: testuser
      • database: name of the database that the policy allows access to. Example: default
      • table: name of the table in that database that the policy allows access to. Example: dataorigin
      • Hive Column: name of the column in that table that the policy allows access to. Example: name
      • Allow Conditions:
        • Select Group: user group that the policy allows to access. Example: hive
        • Select User: user in the user group that the policy allows to access. Example: testuser
        • Permissions: permissions granted to the user. Example: select and Create
      Figure 1 Adding the testuser access policy
    4. Click Add to add the policy. According to the preceding policy, the testuser user in the hive user group has the Create and Select permissions on the name column of the dataorigin table in the default database of Hive, but no permissions to access other columns.
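For reference, a policy like the one in Table 1 can also be created through Ranger's public REST API instead of the web UI. The sketch below only builds the policy JSON body with this section's example values; the Ranger service name ("Hive") is an assumption that depends on how the service is registered in your cluster, and you would POST the body to the Ranger admin endpoint yourself:

```python
import json

def build_hive_policy(service="Hive", policy_name="testuser"):
    """Build a Ranger v2 policy body matching Table 1: grant select and
    create on default.dataorigin.name to testuser in group hive.
    The service name "Hive" is an assumption; check your Ranger web UI."""
    return {
        "service": service,
        "name": policy_name,
        "isEnabled": True,
        "resources": {
            "database": {"values": ["default"]},
            "table": {"values": ["dataorigin"]},
            "column": {"values": ["name"]},
        },
        "policyItems": [{
            "groups": ["hive"],
            "users": ["testuser"],
            "accesses": [
                {"type": "select", "isAllowed": True},
                {"type": "create", "isAllowed": True},
            ],
        }],
    }

# POST this body to <ranger-admin>/service/public/v2/api/policy to create it.
print(json.dumps(build_hive_policy(), indent=2))
```

This mirrors the UI steps above; each key under "resources" corresponds to one row of Table 1, and each "policyItems" entry corresponds to one Allow Conditions block.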

  5. Log in to the Impala client and check whether Ranger has been integrated with Impala.

    1. Log in to the node where the client is installed as the client installation user and run the following command to initialize environment variables:

      source /opt/client/bigdata_env

    2. Set up a connection and log in as user testuser.
      • For normal clusters with Kerberos authentication disabled, run the following command:

        impala-shell -i <Impalad node IP address> -u testuser

      • For security clusters with Kerberos authentication enabled, run the following command:

        kinit testuser

        Enter the password to log in.

        impala-shell -i <Impalad node IP address>

    3. Query data and check whether Ranger is integrated.
      • Running select * from dataorigin fails, and an error message indicating insufficient permissions is displayed.
      • Running select name from dataorigin succeeds, which matches the configured permissions.

      CAUTION:
      • If you specify an HDFS path when running commands, you must have the read, write, and execute permissions on that path. For details, see Adding a Ranger Access Permission Policy for HDFS. Alternatively, you do not need to configure an HDFS Ranger policy: you can use the Hive permission plug-in to add the permissions to a role and assign the role to the user. If an HDFS Ranger policy matches the file or directory permissions of a Hive database table, the HDFS Ranger policy takes precedence.
      • If a table is created in Hive, run the invalidate metadata command in Impala to update the metadata. To do this, grant the refresh permission to the user or run invalidate metadata as user hive; otherwise, an error message indicating insufficient permissions is displayed.
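The verification commands above can be scripted. The helper below only assembles the impala-shell command line used in this section (-i, -u, and -q for a one-off query are standard impala-shell options); the host name here is a placeholder for your Impalad node, and the subprocess call is left commented out because it needs a live cluster:

```python
import subprocess

def impala_cmd(host, query, user=None):
    """Assemble an impala-shell invocation for a single query.
    Pass user= only on normal clusters; on security clusters run kinit first
    and omit -u."""
    cmd = ["impala-shell", "-i", host, "-q", query]
    if user:
        cmd += ["-u", user]
    return cmd

# Example: re-run the two verification queries as testuser (normal cluster).
for q in ("select * from dataorigin", "select name from dataorigin"):
    print(" ".join(impala_cmd("impalad-host", q, user="testuser")))
    # subprocess.run(impala_cmd("impalad-host", q, user="testuser"), check=True)
```

With the Table 1 policy in place, the first query should fail with a permission error and the second should succeed, as described above.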
