Preparing a Developer Account
Scenario
A developer account is required to run the sample projects. When developing applications for different service components, you need to assign the account different user permissions.
Procedure
- Log in to FusionInsight Manager.
- Choose System > Permission > Role > Create Role.
- Enter a role name, for example, developrole.
- Check whether Ranger authentication is enabled. For details, see How Do I Determine Whether the Ranger Authentication Is Used for a Service?
- If yes, skip the role permission configuration in Table 1 and go to the step for creating a user group.
- If no, edit the role to add the permissions required for service development based on the permission control type of the service. For details, see Table 1.
Table 1 Permissions to be granted for each service (a hyphen indicates that no role permission needs to be configured for that service)
HDFS
In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.
MapReduce/Yarn
- In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/, and select Read, Write, and Execute for user. Then choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for mapred.
To run sample cases that involve multiple components, also perform the following operations:
Choose Name of the desired cluster > HBase > HBase Scope > global and select the default option create.
Choose Name of the desired cluster > HBase > HBase Scope > global > hbase, and select Execute for hbase:meta.
Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for hive.
Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user > hive, and select Read, Write, and Execute for warehouse.
Choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > tmp, and select Read, Write, and Execute for hive-scratch. If the example directory exists, also select Read, Write, Execute, and recursion for example.
Choose Name of the desired cluster > Hive > Hive Read Write Privileges, select Query, Insert, Create, and recursion for default, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.
HBase
In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global, select the admin, create, read, write, and execute permissions, and click OK.
Spark2x
- (Perform this step only if HBase is installed.) In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global, select the default option create, and click OK.
- (Perform this step only if HBase is installed.) Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HBase > HBase Scope > global > hbase, select Execute for hbase:meta, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, select Execute for hive, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user > hive, select Read, Write, and Execute for warehouse, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Hive > Hive Read Write Privileges, select the default option Create, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.
Hive
In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select Submit and Admin, and click OK.
NOTE: Extra operation permissions required for Hive application development must be obtained from the system administrator.
ClickHouse
In Configure Resource Permission, choose Name of the desired cluster > ClickHouse > ClickHouse Scope, and select Create Privilege for the target database. Then click the database name, select the read and write permissions of the corresponding tables based on the task scenario, and click OK.
Flink
- In the Configure Resource Permission table, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > flink, select Read, Write, and Execute, and then click Service in the Configure Resource Permission table to return.
- In Configure Resource Permission, choose Name of the desired cluster > Yarn > Scheduler Queue > root, select the default option Submit, and click OK.
NOTE: If the state backend is set to a path on HDFS, for example, hdfs://hacluster/flink-checkpoint, also configure the read, write, and execute permissions on the hdfs://hacluster/flink-checkpoint directory (a configuration sketch follows this table).
GraphBase
-
Kafka
-
Impala
-
Storm/CQL
-
Oozie
- In Configure Resource Permission, choose Name of the desired cluster > Oozie > Common User Privileges, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System, select Read, Write, and Execute for hdfs://hacluster, and click OK.
- Edit the role. In Configure Resource Permission, choose Name of the desired cluster > Yarn, select Cluster Admin Operations, and click OK.
- In Configure Resource Permission, choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/, and select Read, Write, and Execute for user. Then choose Name of the desired cluster > HDFS > File System > hdfs://hacluster/ > user, and select Read, Write, and Execute for mapred.
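Regarding the Flink note in Table 1: the HDFS directory that needs the extra read, write, and execute permissions is whatever path the job's checkpoint storage points to. The following is a minimal sketch of where that path is typically set in a Flink job, assuming Flink 1.13 or later and the example path hdfs://hacluster/flink-checkpoint from the note; the class name is hypothetical, so adapt the sketch to your actual job.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointPathSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Take a checkpoint every 60 seconds.
            env.enableCheckpointing(60000);

            // Checkpoints are written under this HDFS directory, so the role
            // created above must have read, write, and execute permissions on it.
            env.getCheckpointConfig().setCheckpointStorage("hdfs://hacluster/flink-checkpoint");

            // ... define sources, transformations, and sinks, then call env.execute() ...
        }
    }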
- Choose System > Permission > User Group > Create User Group to create a user group for the sample project, for example, developgroup.
- Choose System > Permission > User > Create to create a user for the sample project.
- Enter a username, for example, developuser, select the user type and user group to which the user is to be added according to Table 2, bind the role developrole to obtain the permissions, and click OK.
Table 2 User type and user group for each service
- HDFS: Machine-Machine user. Join the developgroup and supergroup groups, and set the primary group to supergroup.
- MapReduce/Yarn: Machine-Machine user. Join the developgroup group.
- HBase: Machine-Machine user. Join the hadoop group.
- Spark2x: Machine-Machine or Human-Machine user. Join the developgroup group. If the user needs to interconnect with Kafka, also join the kafkaadmin user group.
- Hive: Machine-Machine or Human-Machine user. Join the hive group.
- Kafka: Machine-Machine user. Join the kafkaadmin group.
- Impala: Machine-Machine user. Join the impala and supergroup groups, and set the primary group to supergroup.
- Storm/CQL: Human-Machine user. Join the storm group.
- ClickHouse: Human-Machine user. Join the developgroup and supergroup groups, set the primary group to supergroup, and bind a role with the ClickHouse permission.
- Oozie: Human-Machine user. Join the hadoop, supergroup, and hive groups. If the multi-instance function is enabled for Hive, the user must also belong to a specific Hive instance group, for example, hive3.
- GraphBase: Human-Machine user. Join the graphbaseadmin, graphbasedeveloper, or graphbaseoperator group.
- Flink: Human-Machine user. Join the developgroup and hadoop groups, and set the primary group to developgroup.
NOTE: If the user needs to interconnect with Kafka, a hybrid cluster with both the Flink and Kafka components is required, or cross-cluster mutual trust must be configured between the cluster with Flink and the cluster with Kafka. In addition, add the created Flink user to the kafkaadmin user group.
- If Ranger authentication is enabled for the service, then in addition to the permissions of the default user group and role, grant the required permissions to the user, or to its role or user group, on the Ranger web UI after the user is created. For details, see Configuring Component Permission Policies.
- On the homepage of FusionInsight Manager, choose System > Permission > User. Select developuser from the user list and click More > Download Authentication Credential to download the authentication credentials. Save the downloaded package and decompress it to obtain the user.keytab and krb5.conf files. These files are used for security authentication when you run the sample project. For details, see the corresponding service development guide.
If the user type is Human-Machine, you need to change the initial password before downloading the authentication credential file. Otherwise, the error message "Password has expired - change password to reset" is displayed when the authentication credential file is used, and security authentication fails.
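The following is a minimal sketch of how the downloaded user.keytab and krb5.conf files are typically used for Kerberos login in a Java client before any service is accessed. The local file paths and the use of the Hadoop UserGroupInformation API are assumptions for illustration; the authentication code actually used by each sample project is described in the corresponding service development guide.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical locations of the files extracted from the credential package.
            String krb5Conf = "/opt/client/conf/krb5.conf";
            String keytab = "/opt/client/conf/user.keytab";
            String principal = "developuser";

            // Point the JVM at the KDC configuration before any Kerberos operation.
            System.setProperty("java.security.krb5.conf", krb5Conf);

            // Tell the Hadoop client to use Kerberos authentication.
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);

            // Log in with the keytab; subsequent HDFS, Yarn, or HBase client calls
            // in this JVM run as developuser.
            UserGroupInformation.loginUserFromKeytab(principal, keytab);
            System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
        }
    }

With the cluster client configuration files (for example, core-site.xml and hdfs-site.xml) on the classpath, component clients created after this login inherit the developuser identity.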