Preparing a Development User
Prerequisites
Kerberos authentication has been enabled for the MRS cluster. Skip this step if Kerberos authentication is not enabled for the cluster.
Scenario
The development user is used to run the sample project. To run the Spark sample project, the user must have HBase, HDFS, Hive, and Yarn permissions.
Procedure
- Log in to MRS Manager. For details, see Login to MRS Manager.
- On MRS Manager, choose System > Manage Role > Create Role.
- Enter a role name, for example, sparkrole.
- In Permission, choose HBase > HBase Scope > global. Select Create for the default namespace.
- In Permission, choose HBase > HBase Scope > global > hbase. Select Execute for hbase:meta.
- In Permission, choose HDFS > File System, and select Read, Write, and Execute.
- In Permission, choose HDFS > File System > hdfs://hacluster/ > user > hive, and select Execute.
- In Permission, choose HDFS > File System > hdfs://hacluster/ > user > hive > warehouse, and select Read, Write, and Execute.
- In Permission, choose Hive > Hive Read Write Privileges and select Create for the default database.
- In Permission, choose Yarn > Scheduler Queue > root, and select Submit for the default queue.
- Click OK.
- On MRS Manager, choose System > Manage User > Create User to create a user for the sample project. Enter a username, for example, sparkuser. Set User Type to Machine-machine, and select both supergroup and kafkaadmin in User Group. Set Primary Group to supergroup, select the sparkrole role to obtain permissions, and click OK.
Users who run Spark Streaming programs need the kafkaadmin group permission to operate the Kafka component.
- On MRS Manager, choose System > Manage User and select sparkuser. Download the user's authentication credential file, save it, and decompress it to obtain the keytab and krb5.conf files. They are used for security authentication in the sample project. For details about how to use them, see Preparing the Authentication Mechanism Code.
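Once the credential file has been decompressed, the keytab can be checked and used from a cluster client node. The following is a minimal command sketch, not part of the official procedure; the paths /opt/client/user.keytab and /opt/client/krb5.conf, the principal name sparkuser, and the JAR/class names are assumptions for illustration:

```shell
# Point the Kerberos tools at the cluster's configuration
# (the path is an assumption; use wherever you saved krb5.conf).
export KRB5_CONFIG=/opt/client/krb5.conf

# Verify that the keytab works by obtaining a ticket for sparkuser.
kinit -kt /opt/client/user.keytab sparkuser
klist  # lists the cached ticket if authentication succeeded

# Submit the sample project with keytab-based login so a long-running
# job can renew its tickets (JAR and class names are placeholders).
spark-submit \
  --master yarn \
  --principal sparkuser \
  --keytab /opt/client/user.keytab \
  --class com.example.SparkSample \
  spark-sample.jar
```

These commands require a client node of a Kerberos-enabled MRS cluster; on a cluster without Kerberos authentication, the kinit step is unnecessary and the --principal/--keytab options can be omitted.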