Updated on 2025-10-11 GMT+08:00

Performing Concurrent Operations on HDFS Files

Scenario

MRS provides a tool for concurrently modifying the permissions and access control of files and directories in a cluster. You can use this tool to concurrently set the number of replicas, owners, permissions, and ACL information of all files in a directory.

Notes and Constraints

This section applies to MRS 3.x or later.

Impact on the System

Performing concurrent file modification operations in a cluster adversely affects cluster performance. You are advised to perform such operations when the cluster is idle.

Prerequisites

  • A client containing the HDFS service has been installed. In this example, the installation directory is /opt/client.
  • Component service users have been created by the MRS cluster administrator based on service requirements. In security mode, machine-machine users must download the keytab file, and human-machine users must change their passwords upon first login. (This is not required in normal mode.)

Procedure

  1. Log in to the node where the client is installed as the client installation user.
  2. Run the following command to go to the client installation directory:

    cd /opt/client

  3. Run the following command to configure environment variables:

    source bigdata_env

  4. If the cluster is in security mode, the user running the commands in this section must belong to the supergroup group. Run the following command to authenticate the user. In normal mode, user authentication is not required.

    kinit Component service user

  5. Increase the JVM memory of the client to prevent out-of-memory (OOM) errors.

    1. Run the following command to edit the <Client installation path>/HDFS/component_env file:
      vi <Client installation path>/HDFS/component_env
    2. Change the memory upper limit of the HDFS client by modifying the CLIENT_GC_OPTS parameter. For example, to set the upper limit to 1 GB (32 GB is recommended for 100 million files), set:
      CLIENT_GC_OPTS="-Xmx1G"
    3. Run the following command to apply the modifications:

      source <Client installation path>/bigdata_env
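If you prefer to script this change rather than edit the file interactively, the CLIENT_GC_OPTS line can be rewritten with sed. This is a minimal sketch, not part of the product procedure: the CLIENT_ENV path and NEW_XMX value are assumptions, so verify the file contents before and after running it.

```shell
#!/bin/sh
# Sketch: raise CLIENT_GC_OPTS in the HDFS client's component_env without
# opening an editor. CLIENT_ENV and NEW_XMX are assumed defaults; adjust
# them to your client installation path and file count.
CLIENT_ENV="${CLIENT_ENV:-/opt/client/HDFS/component_env}"
NEW_XMX="${NEW_XMX:-1G}"    # 32G is recommended for about 100 million files

if [ -f "$CLIENT_ENV" ]; then
  # Keep a backup, then rewrite the existing CLIENT_GC_OPTS line in place.
  cp "$CLIENT_ENV" "$CLIENT_ENV.bak"
  sed -i "s/^CLIENT_GC_OPTS=.*/CLIENT_GC_OPTS=\"-Xmx${NEW_XMX}\"/" "$CLIENT_ENV"
  grep '^CLIENT_GC_OPTS=' "$CLIENT_ENV"
fi
# Re-run 'source <Client installation path>/bigdata_env' so the change takes effect.
```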

  6. Run the following commands to perform concurrent operations.

    • Set the number of replicas of all files in a directory.
      hdfs quickcmds [-t threadsNumber] [-p principal] [-k keytab] -setrep <rep> <path> ...
    • Set the owners of all files in a directory.
      hdfs quickcmds [-t threadsNumber] [-p principal] [-k keytab] -chown [owner][:[group]] <path> ...
    • Set permissions for all files in a directory.
      hdfs quickcmds [-t threadsNumber] [-p principal] [-k keytab] -chmod <mode> <path> ...
    • Set ACL information for all files in a directory.
      hdfs quickcmds [-t threadsNumber] [-p principal] [-k keytab] -setfacl [{-b|-k} {-m|-x <acl_spec>} <path> ...]|[--set <acl_spec> <path> ...]

    The parameters in the preceding commands are described as follows:

    • threadsNumber indicates the number of concurrent threads. The default value is the number of vCPUs of the local host.
    • principal indicates the Kerberos user.
    • keytab indicates the Keytab file.
    • rep indicates the number of replicas.
    • owner indicates the owner.
    • group indicates the group to which the user belongs.
    • mode indicates the permission (for example, 754).
    • acl_spec indicates the ACLs separated by commas (,).
    • path indicates the HDFS directory.
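As an illustration of the syntax above, the invocations below use an assumed target directory /user/test_dir, a hypothetical service user hdfsuser, and an assumed keytab path; substitute the values for your own cluster.

```
# Set 3 replicas for all files under /user/test_dir using 10 threads
hdfs quickcmds -t 10 -setrep 3 /user/test_dir

# Change the owner and group of all files in the directory
hdfs quickcmds -t 10 -chown hdfsuser:hadoop /user/test_dir

# Set permission 754 (rwxr-xr--) for all files in the directory
hdfs quickcmds -t 10 -chmod 754 /user/test_dir

# Security mode: authenticate with a keytab while adding an ACL entry
hdfs quickcmds -p hdfsuser -k /opt/client/user.keytab \
  -setfacl -m user:tester:rwx /user/test_dir
```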

Common Issues

After an HDFS client command is executed, the client exits abnormally and the error message "java.lang.OutOfMemoryError" is displayed. What should I do? This error occurs because the memory required for running the HDFS client exceeds the preset upper limit (128 MB by default). Modify the memory upper limit of the HDFS client by referring to step 5.