Using the HDFS Client
Scenario
This section describes how to use the HDFS client to read and write files and perform other operations on HDFS in O&M or service scenarios.
Prerequisites
- The client has been installed.
For example, the installation directory is /opt/client. The client directory used in the following operations is only an example; change it to the actual installation directory on site.
- Service users of each component have been created by the MRS cluster administrator based on service requirements. In security mode, machine-machine users must download the keytab file, and human-machine users must change their password upon first login. (Neither operation is required in normal mode.)
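For example, in security mode a machine-machine user authenticates with its downloaded keytab file, while a human-machine user authenticates interactively with its password. A minimal sketch, assuming a service user named developuser and a keytab stored under /opt/client (both placeholders):
cd /opt/client
source bigdata_env
# Machine-machine user: authenticate with the keytab file (path and principal are placeholders)
kinit -kt /opt/client/user.keytab developuser
# Human-machine user: authenticate interactively and enter the password when prompted
kinit developuser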
Using the HDFS Client
- Install the client. If the client has been installed, skip this step.
For example, the installation directory is /opt/client. You need to change it to the actual installation directory.
For details about how to download and install the cluster client, see Installing an MRS Cluster Client.
- Log in to the node where the client is installed as the client installation user.
- Go to the client installation directory, for example, /opt/client.
cd /opt/client
- Run the following command to configure environment variables:
source bigdata_env
- If the cluster is in security mode, run the following command to authenticate the user. If the cluster is in normal mode, skip this step.
kinit <component service user>
- Run the HDFS Shell command. Example:
hdfs dfs -ls /
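Putting the steps together, a minimal end-to-end session on the client node might look as follows (the /opt/client directory and the hdfsuser service user are placeholders; skip the kinit step in normal mode):
# Load the client environment
cd /opt/client
source bigdata_env
# Security mode only: authenticate as the component service user
kinit hdfsuser
# Verify that HDFS is reachable by listing the root directory
hdfs dfs -ls /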
Common HDFS Client Commands
- Common HDFS Client Commands
Table 1 describes the common HDFS client commands.
Table 1 Common HDFS client commands

- Creating a folder
  Command: hdfs dfs -mkdir <folder name>
  Example: create the /tmp/mydir folder.
  hdfs dfs -mkdir /tmp/mydir
- Viewing a folder
  Command: hdfs dfs -ls <folder name>
  Example: view the /tmp folder.
  hdfs dfs -ls /tmp
- Uploading a local file to a specified HDFS path
  Command: hdfs dfs -put <local file on the client node> <specified HDFS path>
  Example: upload the /opt/test.txt file on the client node to the /tmp directory of HDFS.
  hdfs dfs -put /opt/test.txt /tmp
- Downloading an HDFS file to a specified local path
  Command: hdfs dfs -get <specified file on HDFS> <specified path on the client node>
  Example: download the /tmp/test.txt file on HDFS to the /opt directory on the client node.
  hdfs dfs -get /tmp/test.txt /opt/
- Deleting a folder
  Command: hdfs dfs -rm -r -f <specified folder on HDFS>
  Example: delete the /tmp/mydir folder.
  hdfs dfs -rm -r -f /tmp/mydir
- Configuring HDFS directory permissions for a user
  Command: hdfs dfs -chmod <permission parameter> <file or directory>
  Example: set the 700 permission on the /tmp/test directory.
  hdfs dfs -chmod 700 /tmp/test
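Putting several of these commands together, a typical round trip (create a directory, upload, inspect, adjust permissions, download, clean up) might look like the following sketch; all paths and file names are illustrative:
# Create a working directory in HDFS
hdfs dfs -mkdir /tmp/mydir
# Upload a local file from the client node
hdfs dfs -put /opt/test.txt /tmp/mydir
# Confirm the upload
hdfs dfs -ls /tmp/mydir
# Restrict the directory to its owner
hdfs dfs -chmod 700 /tmp/mydir
# Download the file back to the client node under a new name
hdfs dfs -get /tmp/mydir/test.txt /opt/test_copy.txt
# Remove the working directory when done
hdfs dfs -rm -r -f /tmp/mydir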
- Transparent Encryption Commands
Table 2 Transparent encryption commands

- Creating a key
  Command: hadoop key create <keyname> [-cipher <cipher>] [-size <size>] [-description <description>] [-attr <attribute=value>] [-provider <provider>] [-help]
  Description: The create subcommand creates a new key with the name specified by <keyname> in the provider specified by the -provider parameter. You can use the -cipher parameter to define a cipher; the default cipher is AES/CTR/NoPadding. The default key length is 128 bits; use the -size parameter to define a different key length. Attributes of the attribute=value form can be set with the -attr parameter, which can be specified multiple times, once per attribute.
- Performing a rollback
  Command: hadoop key roll <keyname> [-provider <provider>] [-help]
  Description: The roll subcommand creates a new version of the key specified by <keyname> in the provider specified by the -provider parameter.
- Deleting a key
  Command: hadoop key delete <keyname> [-provider <provider>] [-f] [-help]
  Description: The delete subcommand deletes all versions of the key specified by <keyname> in the provider specified by the -provider parameter. The command asks for confirmation unless -f is specified.
- Viewing keys
  Command: hadoop key list [-provider <provider>] [-metadata] [-help]
  Description: The list subcommand displays all key names in the provider, which is configured in core-site.xml or specified by the -provider parameter. The -metadata parameter also displays key metadata.
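As a sketch of the full key lifecycle, the following sequence creates, inspects, rolls, and deletes a key. The key name mykey is a placeholder, and the -provider URI is an assumption that depends on your KMS deployment; omit -provider to use the provider configured in core-site.xml:
# Create a 128-bit key (key name and provider URI are placeholders)
hadoop key create mykey -size 128 -provider kms://http@kms-host:16000/kms
# List all keys together with their metadata
hadoop key list -metadata -provider kms://http@kms-host:16000/kms
# Roll the key to a new version
hadoop key roll mykey -provider kms://http@kms-host:16000/kms
# Delete all versions of the key without the confirmation prompt
hadoop key delete mykey -f -provider kms://http@kms-host:16000/kms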
- Shell Commands of the Colocation Client
Table 3 Shell commands of the Colocation client

- Creating a group
  Command: hdfs colocationadmin -createGroup -groupId <groupID> -locatorIds <comma-separated locatorIDs> or -file <path of the file containing all locator IDs>
  Description: Creates a group. In the command, groupID indicates the group name and locatorID indicates the locator name. Locator IDs can be entered on the CLI, separated by commas (,), or written into a file from which the system reads them.
- Deleting a group
  Command: hdfs colocationadmin -deleteGroup <groupID>
  Description: Deletes the specified group.
- Querying a group
  Command: hdfs colocationadmin -queryGroup <groupID>
  Description: Queries details about the specified group, including the locators in the group and the DataNodes of each locator.
- Viewing all groups
  Command: hdfs colocationadmin -listGroups
  Description: Lists all groups and their creation time.
- Setting the ACL permission of the Colocation root directory
  Command: hdfs colocationadmin -setAcl
  Description: Sets the ACL permission of the Colocation root directory in ZooKeeper. The default Colocation root directory in ZooKeeper is /hadoop/colocationDetails.
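As a sketch of a typical Colocation administration session, the following sequence creates a group, inspects it, and deletes it; the group ID gid01 and the locator IDs l1,l2,l3 are placeholders:
# Create a group with three locators, passing the locator IDs on the CLI
hdfs colocationadmin -createGroup -groupId gid01 -locatorIds l1,l2,l3
# Query the locators in the group and their DataNodes
hdfs colocationadmin -queryGroup gid01
# List all groups and their creation time
hdfs colocationadmin -listGroups
# Delete the group when it is no longer needed
hdfs colocationadmin -deleteGroup gid01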
Helpful Links
- For more HDFS client commands, see the Apache Hadoop documentation: User_Commands and FileSystemShell.