Updated on 2022-11-18 GMT+08:00

Preparing the Developer Account

Scenario

A developer account is used to run the sample project. When developing components for different services, you need to assign different user permissions.

Procedure

  1. Log in to FusionInsight Manager.
  2. Choose System > Permission > Role > Create Role.

    1. Enter a role name, for example, developrole.
    2. Check whether Ranger authentication is enabled. For details, see How Do I Determine Whether the Ranger Authentication Is Used for a Service?
      • If yes, go to 3.
      • If no, edit the role to add the permissions required for service development based on the permission control type of the service. For details, see Table 1.
        Table 1 Permissions to be granted, by service

        HDFS

        In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.

        MapReduce/Yarn

        1. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/, and select Read, Write, and Execute for the user directory. Then choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for mapred.

          To execute multiple component cases, perform the following operations:

          Choose Name of the desired cluster > HBase > HBase Scope > global and select the default option create.

          Choose Name of the desired cluster > HBase > HBase Scope > global > hbase, and select execute for hbase:meta.

          Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for hive.

          Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user > hive, and select Read, Write, and Execute for warehouse.

          Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > tmp, and select Read, Write, and Execute for hive-scratch. If examples exist, select Read, Write, Execute, and recursion for example.

          Choose Name of the desired cluster > Hive > Hive Read Write Privileges, select Query, Insert, Create, and recursion for default, and click OK.

        2. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

        HBase

        In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global, select the admin, create, read, write, and execute permissions, and click OK.

        Spark2x

        1. (Configure this parameter if HBase is installed.) In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global, select the default option create, and click OK.
        2. (Configure this parameter if HBase is installed.) Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global > hbase, select execute for hbase:meta, and click OK.
        3. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, select Execute for hive, and click OK.
        4. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user > hive, select Read, Write, and Execute for warehouse, and click OK.
        5. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Hive > Hive Read Write Privileges, select the default option Create, and click OK.
        6. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.

        Hive

        In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default options Submit and Admin, and click OK.

        NOTE:

        Extra operation permissions required for Hive application development must be obtained from the system administrator.

        ClickHouse

        In Configure Resource Permission, choose Name of the desired cluster > ClickHouse > ClickHouse Scope, and select Create Privilege for the target database. Click the database name, select the read and write permissions of the corresponding tables based on the task scenario, and click OK.

        Flink

        1. In the Configure Resource Permission table, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > flink, select Read, Write, and Execute, and then click Service in the Configure Resource Permission table to return to the service list.
        2. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.
          NOTE:

          If state backend is set to a path on HDFS, for example, hdfs://hacluster/flink-checkpoint, configure the read, write, and execute permissions on the hdfs://hacluster/flink-checkpoint directory.

        Kafka

        -

        Impala

        -

        Oozie

        1. In Configure Resource Permission, choose Name of the desired cluster > Oozie > Common User Privileges, and click OK.
        2. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.
        3. Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn, select Cluster Admin Operations, and click OK.
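        After the role is bound to a user (see the later steps), you can optionally sanity-check the HDFS grants from a cluster client shell. This is a hedged sketch, not part of the official procedure: the client installation path /opt/client and the user name developuser are assumptions, and the paths mirror the directories referenced in Table 1.

        ```shell
        # Sketch: verify the HDFS permissions granted to the role.
        # Assumes an installed cluster client at /opt/client (hypothetical path)
        # and a human-machine user 'developuser' bound to the role.
        source /opt/client/bigdata_env                 # load client environment
        kinit developuser                              # Kerberos login (prompts for password)
        hdfs dfs -ls /user                             # needs Read/Execute on /user
        hdfs dfs -ls /user/hive/warehouse              # needs Read/Execute on the Hive warehouse
        hdfs dfs -touchz /tmp/hive-scratch/_perm_check # needs Write under /tmp/hive-scratch
        hdfs dfs -rm /tmp/hive-scratch/_perm_check     # clean up the probe file
        ```

        A failed command here usually points at the specific directory whose Read/Write/Execute grant is missing from the role.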

  3. Choose System > Permission > User Group > Create User Group to create a user group for the sample project, for example, developgroup.
  4. Choose System > Permission > User > Create to create a user for the sample project.
  5. Enter a username, for example, developuser, select a user type and the user groups to which the user is to be added according to Table 2, bind the role developrole to obtain permissions, and click OK.

    Table 2 User type and user group list

    HDFS: Machine-Machine. Join the developgroup and supergroup groups; set the primary group to supergroup.

    MapReduce/Yarn: Machine-Machine. Join the developgroup group.

    HBase: Machine-Machine. Join the hadoop group.

    Spark2x: Machine-Machine/Human-Machine. Join the developgroup group. If the user needs to interconnect with Kafka, also join the kafkaadmin user group.

    Hive: Machine-Machine/Human-Machine. Join the hive group.

    Kafka: Machine-Machine. Join the kafkaadmin group.

    Impala: Machine-Machine. Join the impala and supergroup groups; set the primary group to supergroup.

    Storm/CQL: Human-Machine. Join the storm group.

    ClickHouse: Human-Machine. Join the developgroup and supergroup groups; set the primary group to supergroup.

    Oozie: Human-Machine. Join the hadoop, supergroup, and hive groups. If the multi-instance function is enabled for Hive, the user must belong to a specific Hive instance group, for example, hive3.

    Flink: Human-Machine. Join the developgroup and hadoop groups; set the primary group to developgroup.

    NOTE:

    To interconnect with Kafka, a hybrid cluster with both the Flink and Kafka components is required, or cross-cluster mutual trust must be configured between the cluster with Flink and the cluster with Kafka. Additionally, add the created Flink user to the kafkaadmin user group.

  6. If Ranger authentication is enabled for the service, in addition to the permissions of the default user group and role, grant required permissions to the user or its role or user group on the Ranger web UI after the user is created. For details, see Configuring Component Permission Policies.
  7. On the homepage of FusionInsight Manager, choose System > Permission > User. Select developuser from the user list and click More > Download Authentication Credential to download the authentication credentials. Save the downloaded package and decompress it to obtain the user.keytab and krb5.conf files. These files are used for security authentication when running the sample project. For details, see the corresponding service development guide.

    If the user type is human-machine, change the initial password before downloading the authentication credential file. Otherwise, the error message "Password has expired - change password to reset" is displayed when you use the authentication credential file, and security authentication fails.
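    As a hedged sketch of how the downloaded files are typically used, a client can authenticate non-interactively with the keytab via standard MIT Kerberos tools. The file locations and the principal name developuser are assumptions; check the realm in your krb5.conf, since machine-machine principals usually take the form user@REALM.

    ```shell
    # Sketch: keytab-based Kerberos login with the downloaded credentials.
    # /opt/developuser is a hypothetical directory where the package was decompressed.
    export KRB5_CONFIG=/opt/developuser/krb5.conf        # point Kerberos at the cluster's config
    kinit -kt /opt/developuser/user.keytab developuser   # keytab login, no password prompt
    klist                                                # confirm a ticket was granted
    ```

    Sample projects generally perform the equivalent login programmatically, passing the user.keytab and krb5.conf paths to the service client as described in the corresponding development guide.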