Preparing MRS Application Development User

Updated on 2024-08-10 GMT+08:00

Scenario

A developer account is used to run the sample projects. When developing applications for different service components, you need to assign the user different permissions.

Procedure

  1. Log in to FusionInsight Manager.
  2. Choose System > Permission > Role > Create Role.

    1. Enter a role name, for example, developrole.
    2. Check whether Ranger authentication is enabled. For details, see How Do I Determine Whether the Ranger Authentication Is Used for a Service?
      • If yes, go to 3.
      • If no, edit the role to add the permissions required for service development based on the permission control type of the service. For details, see Table 1.
        Table 1 List of permissions (each entry gives a service, followed by the permissions to be granted to the role)

        HDFS

        In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.

        MapReduce/Yarn

        1. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/, and select Read, Write, and Execute for the user directory. Then choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for mapred.

          To run sample cases that involve multiple components, also perform the following operations:

          Choose Name of the desired cluster > HBase > HBase Scope > global and select the default option create.

          Choose Name of the desired cluster > HBase > HBase Scope > global > hbase, and select Execute for hbase:meta.

          Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for hive.

          Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user > hive, and select Read, Write, and Execute for warehouse.

          Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > tmp, and select Read, Write, and Execute for hive-scratch. If the example directory exists, select Read, Write, Execute, and recursion for example.

          Choose Name of the desired cluster > Hive > Hive Read Write Privileges, select Query, Insert, Create, and recursion for default, and click OK.

        2. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

        HBase

        In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global, select the admin, create, read, write, and execute permissions, and click OK.

        Spark2x

        1. (Perform this step only if HBase is installed.) In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global, select the default option create, and click OK.
        2. (Perform this step only if HBase is installed.) Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global > hbase, select execute for hbase:meta, and click OK.
        3. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, select Execute for hive, and click OK.
        4. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user > hive, select Read, Write, and Execute for warehouse, and click OK.
        5. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Hive > Hive Read Write Privileges, select the default option Create, and click OK.
        6. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

        Hive

        In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default options Submit and Admin, and click OK.

        NOTE:

        Extra operation permissions required for Hive application development must be obtained from the system administrator.

        ClickHouse

        In Configure Resource Permission, choose Name of the desired cluster > ClickHouse > ClickHouse Scope, and select Create Privilege for the target database. Then click the database name, select the read and write permissions of the corresponding tables based on the task scenario, and click OK.

        IoTDB

        1. In the Configure Resource Permission table, choose Name of the desired cluster > IoTDB > Common User Permission, select Set Storage Group for the root directory, and click OK to create different storage groups.

        2. In the Configure Resource Permission table, choose Name of the desired cluster > IoTDB > Common User Permission > root, select the corresponding storage group, select the Create, Modify, Write, Read, or Delete permission based on the task scenario, and click OK.

        HetuEngine

        1. In the Configure Resource Permission table, choose Name of the desired cluster > Hive and select Hive Admin Privilege.
        2. In Configure Resource Permission, choose Name of the desired cluster > Hive > Hive Read Write Privileges, select permissions based on task scenarios, and click OK.
          NOTE:
          • In the default database, select Query to query the tables of other users.
          • In the default database, select Delete and Insert to import data to tables of other users.

        Flink

        1. In the Configure Resource Permission table, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > flink, select Read, Write, and Execute, and click Service in the Configure Resource Permission table to return.
        2. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.
          NOTE:

          If the state backend is set to an HDFS path, for example, hdfs://hacluster/flink-checkpoint, also configure the read, write, and execute permissions on the hdfs://hacluster/flink-checkpoint directory (see the Flink checkpoint sketch after this procedure).

        Kafka

        -

        Impala

        -

        Oozie

        1. In Configure Resource Permission, choose Name of the desired cluster > Oozie > Common User Privileges, and click OK.
        2. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.
        3. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn, select Cluster Admin Operations, and click OK.

  3. Choose System > Permission > User Group > Create User Group to create a user group for the sample project, for example, developgroup.
  4. Choose System > Permission > User > Create to create a user for the sample project.
  5. Enter a username, for example, developuser, select the user type and the user groups to join according to Table 2, bind the developrole role to obtain its permissions, and click OK.

    Table 2 User type and user group list

    HDFS
      User type: Machine-Machine
      User group: Join the developgroup and supergroup groups. Set the primary group to supergroup.

    MapReduce/Yarn
      User type: Machine-Machine
      User group: Join the developgroup group.

    HBase
      User type: Machine-Machine
      User group: Join the hadoop group.

    Spark
      User type: Machine-Machine/Human-Machine
      User group: Join the developgroup group. If the user needs to interconnect with Kafka, also join the kafkaadmin user group.

    Hive
      User type: Machine-Machine/Human-Machine
      User group: Join the hive group.

    Kafka
      User type: Machine-Machine
      User group: Join the kafkaadmin group.

    Impala
      User type: Machine-Machine
      User group: Join the impala and supergroup groups. Set the primary group to supergroup.

    HetuEngine
      User type: Human-Machine
      User group: Join the hive group. Set the primary group to hive.

    Storm/CQL
      User type: Human-Machine
      User group: Join the storm group.

    ClickHouse
      User type: Human-Machine
      User group: Join the developgroup and supergroup groups. Set the primary group to supergroup.

    Oozie
      User type: Human-Machine
      User group: Join the hadoop, supergroup, and hive groups. If the multi-instance function is enabled for Hive, the user must also belong to the specific Hive instance group, for example, hive3.

    Flink
      User type: Human-Machine
      User group: Join the developgroup and hadoop groups. Set the primary group to developgroup.
      NOTE: To interconnect with Kafka, the Flink and Kafka components must be deployed in the same cluster, or cross-cluster mutual trust must be configured between the cluster with Flink and the cluster with Kafka. In addition, add the created Flink user to the kafkaadmin user group.

    IoTDB
      User type: Machine-Machine
      User group: Join the iotdbgroup group.

  6. If Ranger authentication is enabled for the service, then in addition to the permissions of the default user group and role, grant the required permissions to the user, or to its role or user group, on the Ranger web UI after the user is created. For details, see Configuring Component Permission Policies.
  7. On the homepage of FusionInsight Manager, choose System > Permission > User. Select developuser from the user list and click More > Download Authentication Credential to download the authentication credentials. Save the downloaded package and decompress it to obtain the user.keytab and krb5.conf files. These files are used for security authentication when you run the sample projects; a minimal login sketch is shown after this procedure. For details, see the corresponding service development guide.

    NOTE:

    If the user type is Human-Machine, you need to change the initial password before downloading the authentication credential file. Otherwise, the message "Password has expired - change password to reset" is displayed when you use the authentication credential file, and security authentication fails.
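
Example: Authenticating with the Downloaded Credentials

The user.keytab and krb5.conf files from step 7 are what application code uses to pass Kerberos authentication. The Java sketch below illustrates the standard Hadoop UserGroupInformation login pattern; the file paths and the developuser principal are assumptions for illustration, and the actual sample projects may wrap this step in their own helper class. See each service development guide for the authoritative login code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws Exception {
            // Assumed paths: wherever the downloaded credential package was extracted.
            String krb5Conf = "/opt/client/conf/krb5.conf";
            String keytab = "/opt/client/conf/user.keytab";
            String principal = "developuser"; // the user created in step 5

            // Point the JVM's Kerberos layer at the cluster's krb5.conf.
            System.setProperty("java.security.krb5.conf", krb5Conf);

            // Tell the Hadoop client stack that the cluster uses Kerberos.
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);

            // Authenticate with the keytab; subsequent HDFS/Yarn/HBase client
            // calls in this JVM run as the authenticated user.
            UserGroupInformation.loginUserFromKeytab(principal, keytab);
            System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
        }
    }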
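
Example: Flink Checkpoint Directory on HDFS

The Flink note in Table 1 grants permissions on a checkpoint directory such as hdfs://hacluster/flink-checkpoint. The hedged sketch below (assuming Flink 1.13 or later; the interval and path are illustrative) shows where that directory comes from in job code, which is why the submitting user needs read, write, and execute permissions on it.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointPathSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Take a checkpoint every 60 seconds.
            env.enableCheckpointing(60000);

            // Persist checkpoint state under the HDFS directory from the note;
            // the user submitting the job needs read/write/execute on this path.
            env.getCheckpointConfig().setCheckpointStorage("hdfs://hacluster/flink-checkpoint");

            // ... define sources, transformations, and sinks here, then:
            // env.execute("checkpointed job");
        }
    }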
