Creating and Using an MRS Cluster Requiring Security Authentication
Scenario
This topic helps you create a Hadoop analysis cluster with Kerberos authentication enabled and submit a wordcount job through the cluster client. Wordcount is a classic Hadoop job that counts the occurrences of words in large volumes of text.
The Hadoop cluster uses many open-source Hadoop ecosystem components, including YARN for cluster resource management, and Hive and Spark for large-scale offline distributed data storage and computing, providing massive data analysis and query capabilities.
Procedure
Before you start, complete operations described in Preparations. Then, follow these steps:
- Creating an MRS Cluster: Create a Hadoop analysis cluster of MRS 3.2.0-LTS.1 that requires Kerberos authentication.
- Creating a Cluster User: Create a role that has the permission to submit the wordcount job and bind the role to a user on FusionInsight Manager.
- Installing the Cluster Client: Download and install the MRS cluster client.
- Preparing Applications and Data: Prepare the data files required for running the wordcount sample program on the MRS cluster client.
- Submitting a Job and Viewing the Result: Submit a wordcount data analysis job on the cluster client and view the execution result.
Preparations
- Register an account and perform real-name authentication.
Before creating an MRS cluster, sign up for a HUAWEI ID, enable Huawei Cloud services, and complete real-name authentication.
If you have already enabled Huawei Cloud services and completed real-name authentication, skip this step.
- You have prepared an IAM user who has the permission to create MRS clusters. For details, see Creating an MRS User.
Step 1: Creating an MRS Cluster
- Go to the Buy Cluster page.
- Search for MapReduce Service in the service list and enter the MRS console.
- Click Buy Cluster. The Quick Config tab is displayed.
- Configure the cluster as needed. In this example, a pay-per-use MRS 3.2.0-LTS.1 cluster is created. For details about how to configure the parameters, see Quickly Creating a Cluster.
Table 1 MRS cluster parameters

| Parameter | Description | Example Value |
| --- | --- | --- |
| Billing Mode | Billing mode of the cluster you want to create. MRS provides two billing modes: yearly/monthly and pay-per-use. Pay-per-use is a postpaid mode: you pay for what you use, with usage calculated by the second and billed every hour. | Pay-per-use |
| Region | Region to which the requested MRS resources belong. MRS clusters in different regions cannot communicate with each other over an intranet. For lower network latency and quicker resource access, select the nearest region. | CN-Hong Kong |
| Cluster Name | Name of the MRS cluster you want to create. | mrs_demo |
| Cluster Type | A range of clusters that accommodate diverse big data demands. You can select a Custom cluster to run a wide range of analytics components supported by MRS. | Custom |
| Version Type | Service type of the MRS cluster. | Normal |
| Cluster Version | Version of the MRS cluster. Supported open-source components and their functions vary with the cluster version. You are advised to select the latest version. | MRS 3.2.0-LTS.1 |
| Component | Cluster templates containing the preset open-source components you will need for your business. | Hadoop Analysis Cluster |
| AZ | Availability zone (AZ) associated with the cluster region. | AZ1 |
| VPC | VPC where you want to create the cluster. You can click View VPC to view its name and ID. If no VPC is available, create one. | vpc-default |
| Subnet | Subnet to which the cluster belongs. You can access the VPC management console to view the names and IDs of existing subnets in the VPC. If no subnet has been created in the VPC, click Create Subnet to create one. | subnet-default |
| Cluster Node | Cluster node details. | Default value |
| Kerberos Authentication | Whether to enable Kerberos authentication. | Enabled |
| Username | Username for logging in to the cluster management page and the ECS node. | admin/root |
| Password | Password for logging in to the cluster management page and the ECS node. | - |
| Confirm Password | Enter the password again. | - |
| Enterprise Project | Enterprise project to which the cluster belongs. | default |
| Secure Communications | Select the check box to agree to use the access control rules. | Selected |
Figure 1 Buying a Hadoop analysis cluster
- Click Buy Now. A page is displayed showing that the task has been submitted.
- Click Back to Cluster List. You can view the status of the newly created cluster on the Active Clusters page.
Wait for the cluster creation to complete. The initial status of the cluster is Starting. After the cluster is created, the cluster status becomes Running.
Step 2: Creating a Cluster User
For clusters with Kerberos authentication enabled, perform the following steps to create a user and grant it the permissions required to run programs.
- Click the MRS cluster name in the cluster list to go to the dashboard page.
- Click Access Manager next to MRS Manager. In the displayed dialog box, select EIP and configure the EIP information.
For the first access, click Manage EIPs to purchase an EIP on the EIP console. Then go back to the Access MRS Manager dialog box, refresh the EIP list, and select the EIP.
- Select the confirmation check box and click OK to log in to the FusionInsight Manager of the cluster.
The username for logging in to FusionInsight Manager is admin, and the password is the one configured during cluster purchase.
- Click System in the top navigation pane, and choose Permission > Role.
- Click Create Role and set the following parameters. For details, see Creating a Role.
- Enter a role name, for example, mrrole.
- For Configure Resource Permission, select the target cluster, choose Yarn > Scheduler Queue > root, and select Submit and Admin in the Permission column. Then click the name of the target cluster in the path and configure other permissions.
Figure 2 Configuring resource permissions for YARN
- Choose HDFS > File System > hdfs://hacluster/. Locate the row that contains user, select Read, Write, and Execute in the Permission column, and click OK.
Figure 3 Configuring resource permissions for HDFS
- Click User in the navigation pane on the left, and then click Create on the displayed page. Set the following parameters. For details, see Creating a User.
- Enter a username, for example, test.
- Set User Type to Human-Machine.
- Enter the password in Password and enter it again in Confirm Password.
- Bind the Manager_viewer role and the mrrole role created in 5 to the user to grant permissions.
Figure 4 Creating a user
- Click OK.
Step 3: Installing the Cluster Client
You need to install a cluster client to connect to the component services in the cluster and submit jobs.
The client can be installed on a node inside or outside the cluster. In this example, the client is installed on the Master1 node.
- Click the MRS cluster name in the cluster list to go to the dashboard page.
- Click Access Manager next to MRS Manager. In the displayed dialog box, select EIP and configure the EIP information.
For the first access, click Manage EIPs to purchase an EIP on the EIP console. Then go back to the Access MRS Manager dialog box, refresh the EIP list, and select the EIP.
- Select the confirmation check box and click OK to log in to the FusionInsight Manager of the cluster.
The username for logging in to FusionInsight Manager is admin, and the password is the one configured during cluster purchase.
- On the displayed Homepage page, click the more icon next to the cluster name and click Download Client to download the cluster client.
Figure 5 Downloading the client
In the Download Cluster Client dialog box, set the following parameters:
- Set Select Client Type to Complete Client.
- Retain the default value for Platform Type, for example, x86_64.
- Retain the default path for Save to Path. The generated file will be saved in the /tmp/FusionInsight-Client directory on the active OMS node of the cluster.
Figure 6 Downloading the cluster client
Click OK and wait until the client software is generated.
- Go back to the MRS console and click the cluster name in the cluster list. On the Nodes tab, click the name of the node whose name contains master1. In the upper right corner of the ECS details page, click Remote Login to log in to the Master1 node.
Figure 7 Checking the Master1 node
- Log in to the Master1 node as user root. The password is the one you set for the root user during cluster purchase.
- Switch to the directory where the client software package is stored and decompress the package.
cd /tmp/FusionInsight-Client/
tar -xvf FusionInsight_Cluster_1_Services_Client.tar
tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar
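Optionally, you can check the integrity of the extracted package against its SHA-256 checksum file, if one is included. The following is a sketch; the checksum file name is an assumption and may differ in your client package.
sha256sum -c FusionInsight_Cluster_1_Services_ClientConfig.tar.sha256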
- Go to the directory where the installation package is stored and install the client.
cd FusionInsight_Cluster_1_Services_ClientConfig
Install the client to a specified directory (an absolute path), for example, /opt/client.
./install.sh /opt/client
... ... component client is installed successfully ...
A client installation directory is created automatically if it does not exist. If the directory already exists, it must be empty. The specified installation directory can contain only uppercase letters, lowercase letters, digits, and underscores (_), and cannot contain spaces.
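To quickly confirm the installation, you can load the client environment variables and print the Hadoop version. This is a minimal check that runs locally and does not require Kerberos authentication.
cd /opt/client
source bigdata_env
hdfs version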
Step 4: Preparing Applications and Data
You can run the wordcount sample program preset in the cluster client on the created cluster, or develop a big data application and upload it to the cluster. This topic uses the preset wordcount sample program as an example. You need to prepare the data files required for running it.
- Log in to the Master1 node as user root.
- Prepare data files.
There is no format requirement for the data files. For example, name the files wordcount1.txt and wordcount2.txt, with the following content:
vi /opt/wordcount1.txt
hello word hello wordcount
vi /opt/wordcount2.txt
hello mapreduce hello hadoop
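If you prefer not to edit the files interactively in vi, here-documents create the same files non-interactively (an equivalent sketch):
cat > /opt/wordcount1.txt << 'EOF'
hello word hello wordcount
EOF
cat > /opt/wordcount2.txt << 'EOF'
hello mapreduce hello hadoop
EOF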
- Switch to the client installation directory, configure environment variables, and create an HDFS directory for storing sample data, for example, /user/example/input.
cd /opt/client
source bigdata_env
kinit test (test is the username created in 6. Change the password upon first login.)
hdfs dfs -mkdir /user/example
hdfs dfs -mkdir /user/example/input
The test user created in 6 has read, write, and execute permissions only on the /user directory. If you create the input directory anywhere other than /user, an error message indicates that you lack the required permission. The following is an example:
hdfs dfs -mkdir /hbase/input
The following error message is displayed:
mkdir: Permission denied: user=test, access=EXECUTE, inode="/hbase":hbase:hadoop:drwxrwx--T
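As an equivalent shortcut to the two mkdir commands above, hdfs dfs -mkdir supports the -p flag, which creates the target directory together with any missing parent directories in a single command:
hdfs dfs -mkdir -p /user/example/input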
- Upload the sample data to HDFS.
hdfs dfs -put /opt/wordcount1.txt /user/example/input
hdfs dfs -put /opt/wordcount2.txt /user/example/input
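You can confirm the upload by listing the input directory and printing a file's contents (a quick check, not part of the required procedure):
hdfs dfs -ls /user/example/input
hdfs dfs -cat /user/example/input/wordcount1.txt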
Step 5: Submitting a Job and Viewing the Result
- Log in to the client node (Master1) as user root.
- Submit the wordcount job to read the source data, analyze it, and write the execution result to HDFS.
cd /opt/client
source bigdata_env
kinit test
hadoop jar HDFS/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1-*.jar wordcount "/user/example/input/*" "/user/example/output/"
...
File Input Format Counters
    Bytes Read=56
File Output Format Counters
    Bytes Written=48
- /user/example/output/ indicates the HDFS directory for storing the job output files. Set it to a directory that does not exist.
- The name of the hadoop-mapreduce-examples-3.3.1-*.jar file varies depending on the cluster client version. Use the actual name.
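To determine the actual jar file name on your client and to verify that the output directory does not yet exist, you can run checks like the following sketch, which assumes the client is installed in /opt/client:
ls /opt/client/HDFS/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar
hdfs dfs -test -d /user/example/output && echo "Output directory already exists. Choose another path."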
- Query job execution results.
- View the job output file.
hdfs dfs -ls /user/example/output/
...
/user/example/output/_SUCCESS
/user/example/output/part-r-00000
- The output is saved in the HDFS file system. You can run commands to download it to the local host and view it.
hdfs dfs -get /user/example/output/part-r-00000 /opt
cat /opt/part-r-00000
The content of the part-r-00000 file is as follows:
hadoop 1
hello 4
mapreduce 1
word 1
wordcount 1
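Alternatively, you can print the result directly from HDFS without downloading it:
hdfs dfs -cat /user/example/output/part-r-00000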
- View job run logs.
- Log in to FusionInsight Manager of the cluster as user test created in 6 and choose Cluster > Services > Yarn.
- Click the ResourceManager(xxx,Active) link in the row containing ResourceManager Web UI.
- On the All Applications page, click the ID of the target job to view the job details.
On the All Applications page, you can identify your job by its submission time and the username that submitted it.
Figure 8 Viewing job details
Related Information
Hadoop components include HDFS, YARN, and MapReduce. You can run jobs to analyze or view offline data. For details, see Using HDFS, Using MapReduce, or Using YARN.