Updated on 2025-10-11 GMT+08:00

Configuring the Label Policy (NodeLabel) for HDFS File Directories

Scenario

In some scenarios, you need to control which nodes store HDFS file data blocks based on data characteristics. By setting a label expression on an HDFS directory or file and assigning one or more labels to each DataNode, you can specify which DataNodes store a file's data blocks.

With a label-based placement strategy, the DataNodes matching the file's label expression are selected first, and a suitable node is then chosen from among them.
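As a rough mental model (hypothetical node names and labels, not the HDFS implementation), label-based placement first filters DataNodes by the required label and only then applies the usual placement choice among the candidates:

```python
# Illustrative model only: filter DataNodes by a required label before the
# normal placement logic chooses among them. Node names and labels are
# hypothetical.
DATANODE_LABELS = {
    "dn-1": {"LabelA"}, "dn-2": {"LabelA"},
    "dn-3": {"LabelB"}, "dn-4": {"LabelB"},
}

def candidates(label):
    """DataNodes whose label set contains the required label."""
    return sorted(dn for dn, labels in DATANODE_LABELS.items() if label in labels)

print(candidates("LabelA"))  # ['dn-1', 'dn-2']
```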

When adjusting the HDFS data block replication policy, you must:

  • Ensure data reliability and integrity.
  • Minimize cross-rack data transmission to improve transmission efficiency.
  • Balance the load of nodes.
  • Perform sufficient tests to ensure that the custom policy can work properly.

Proper configuration of data block replication policies enables HDFS to better adapt to different application scenarios, improving the performance and reliability of the entire cluster.

  • Scenario 1: DataNode partitioning

    When data of different applications must be stored on different nodes for separate management, you can use label expressions to separate services so that the data of each service is stored on its designated nodes.

    By configuring the NodeLabel feature, you can perform the following operations:

    • Store data in /HBase to DN1, DN2, DN3, and DN4.
    • Store data in /Spark to DN5, DN6, DN7, and DN8.
    Figure 1 DataNode partitioning scenario
    • Run the hdfs nodelabel -setLabelExpression -expression 'LabelA[fallback=NONE]' -path /HBase command to set an expression for the HBase directory. As shown in Figure 1, the data block replicas of files in the /HBase directory are placed only on nodes labeled with LabelA, that is, DN1, DN2, DN3, and DN4.

      Similarly, run the hdfs nodelabel -setLabelExpression -expression 'LabelB[fallback=NONE]' -path /Spark command to set an expression for the Spark directory. Data block replicas of files in the /Spark directory can be placed only on nodes labeled with LabelB, that is, DN5, DN6, DN7, and DN8.

    • For details about how to set labels for a data node, see Configuring the Data Block Replication Policy for DataNode Nodes.
    • If a cluster has multiple racks, each label can contain DataNodes of multiple racks to ensure reliability of data block placement.
  • Scenario 2: Specifying replica location when there are multiple racks

    In a heterogeneous cluster, allocate nodes with high reliability for storing important business data. Specify the replica locations using label expressions, and store one replica of file data blocks on a high-reliability node.

    Data blocks in the /data directory have three replicas by default. In this case, at least one replica is stored on a node in RACK1 or RACK2 (the nodes in RACK1 and RACK2 are highly reliable), and the other two are stored separately on nodes in RACK3 and RACK4.

    Figure 2 Scenario example
    • Run the hdfs nodelabel -setLabelExpression -expression 'LabelA||LabelB[fallback=NONE],LabelC,LabelD' -path /data command to set an expression for the /data directory.
    • When data is written to the /data directory, at least one data block replica is stored on a node labeled with LabelA or LabelB, and the other two replicas are stored separately on nodes labeled with LabelC and LabelD.
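The expression above is a comma-separated list of per-replica policies, each optionally carrying bracketed options such as fallback. A simplified parsing sketch (illustrative only; the real grammar is richer):

```python
# Illustrative sketch of splitting a NodeLabel expression into per-replica
# policies; not the actual HDFS parser. Commas inside [...] separate options,
# so the top-level split must skip them.
import re

def split_policies(expression):
    policies = []
    for term in re.split(r",(?![^\[]*\])", expression):   # split outside [...]
        m = re.match(r"(?P<labels>[^\[]+)(?:\[(?P<opts>[^\]]*)\])?$", term.strip())
        opts = (dict(kv.split("=") for kv in m.group("opts").split(","))
                if m.group("opts") else {})
        policies.append({"labels": m.group("labels"), "options": opts})
    return policies

print(split_policies("LabelA||LabelB[fallback=NONE],LabelC,LabelD"))
```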

Notes and Constraints

  • This section applies to MRS 3.x or later.
  • In configuration files, keys and values are separated by equals signs (=), colons (:), or spaces. Therefore, the hostname used as a key cannot contain these characters.

Configuring the Data Block Replication Policy for DataNode Nodes

  1. Log in to FusionInsight Manager.

    For details about how to log in to FusionInsight Manager, see Accessing MRS Manager.

  2. Choose Cluster > Services > HDFS > Configurations > All Configurations.
  3. Search for the following parameters and change their values as required.

    Table 1 Parameters

    Parameter

    Description

    Example Value

    dfs.block.replicator.classname

    DataNode replica storage policy of HDFS. To enable the NodeLabel feature, set this parameter to org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeLabel.

    The options are as follows:

    • org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeLabel: Enables the NodeLabel feature.
    • org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault: Uses the Hadoop standard replica storage policy.
    • org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithRackGroup: Uses the rack group-based storage policy. If this option is selected, you need to set dfs.use.dfs.network.topology to false and net.topology.impl to org.apache.hadoop.net.NetworkTopologyWithRackGroup as well.
    • org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup: Uses the node group-based storage policy.
    • org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNonAffinityNodeGroup: Distributes blocks based on the non-affinity node groups. This prevents replicas from being distributed to the same node group.
    • org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy: Places blocks on nodes with more available space.
    • org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant: Distributes block replicas on different racks to improve the fault tolerance of racks.

    org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeLabel

    host2tags

    Used to configure a mapping between a DataNode host and a label.

    • The hostname can be an extended IP address expression (for example, 192.168.1.[1-128] or 192.168.[2-3].[1-128]) or a regular expression starting and ending with a slash (/), for example, /datanode-[123]/ or /datanode-\d{2}/.
    • The tag name cannot contain any of the following characters: =, :, /, \.
    • Example configuration:

      Assume there are 20 DataNodes, dn-1 to dn-20, in a cluster, and their IP addresses range from 10.1.120.1 to 10.1.120.20. The value of host2tags can be written in either of the following ways:

      • Regular expression of the hostname

        /dn-\d/ = label-1 indicates that the labels corresponding to dn-1 to dn-9 are label-1, that is, dn-1 = label-1, dn-2 = label-1, ..., dn-9 = label-1.

        /dn-((1[0-9]$)|(20$))/ = label-2 indicates that the labels corresponding to dn-10 to dn-20 are label-2, that is, dn-10 = label-2, dn-11 = label-2, ..., dn-20 = label-2.

      • IP address range expression

        10.1.120.[1-9] = label-1 indicates that the labels corresponding to 10.1.120.1 to 10.1.120.9 are label-1, that is, 10.1.120.1 = label-1, 10.1.120.2 = label-1, ..., and 10.1.120.9 = label-1.

        10.1.120.[10-20] = label-2 indicates that the labels corresponding to 10.1.120.10 to 10.1.120.20 are label-2, that is, 10.1.120.10 = label-2, 10.1.120.11 = label-2, ..., and 10.1.120.20 = label-2.

    • Label-based data block placement policies are applicable to capacity expansion and reduction scenarios.

      A newly added DataNode will be assigned a label if the IP address of the DataNode is within the IP address range in the host2tags configuration item or the hostname of the DataNode matches the hostname regular expression in the host2tags configuration item.

      For example, the value of host2tags is 10.1.120.[1-9] = label-1, but the current cluster has only three DataNodes: 10.1.120.1, 10.1.120.2, and 10.1.120.3. If DataNode 10.1.120.4 is added during capacity expansion, it is labeled label-1. If DataNode 10.1.120.3 is deleted or taken out of service, no data blocks are allocated to it.

    -

  4. Click Save. Go to the Instances page and check whether there are instances whose configurations have expired. If yes, select the instances and choose More > Restart Instance. The configurations take effect after the restart.
  5. Then log in to the HDFS client by referring to Using the HDFS Client and run the following command to view the label information of each DataNode:

    hdfs nodelabel -listNodeLabels [-all] [-node <node_name>]
    • -all: displays all label groups, including labels that are not associated with any node. By default, only labels associated with nodes are displayed.
    • -node <name>: views the label groups allocated to a specified node (hostname or IP address).
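The host2tags matching described in Table 1 can be sketched roughly as follows (illustrative Python model, not the actual HDFS code; only a single bracketed IP range per key is expanded here, and the mapping values are hypothetical):

```python
# Hedged sketch of host2tags-style matching: IP range keys such as
# 10.1.120.[1-9] are expanded; keys wrapped in slashes are treated as
# hostname regular expressions. Not the real implementation.
import re

def expand_ip_key(key):
    """Expand one bracketed range, e.g. 10.1.120.[1-9] -> 10.1.120.1 .. 10.1.120.9."""
    m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", key)
    if not m:
        return [key]
    head, lo, hi, tail = m.groups()
    return [f"{head}{i}{tail}" for i in range(int(lo), int(hi) + 1)]

def labels_for(host, mapping):
    labels = []
    for key, label in mapping.items():
        if key.startswith("/") and key.endswith("/"):   # hostname regex key
            if re.search(key.strip("/"), host):
                labels.append(label)
        elif host in expand_ip_key(key):                # IP or IP range key
            labels.append(label)
    return labels

mapping = {"10.1.120.[1-9]": "label-1", "/dn-\\d/": "label-1"}
print(labels_for("10.1.120.4", mapping))  # ['label-1']
```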

Setting Label Expressions for HDFS Directories and Files

  • Configuring Labels on Manager
    1. Log in to FusionInsight Manager.

      For details about how to log in to FusionInsight Manager, see Accessing MRS Manager.

    2. Choose Cluster > Services > HDFS > Configurations > All Configurations.
    3. Search for the following parameters and change their values as required.

      Parameter

      Description

      path2expression

      Configures the mapping between HDFS directories and labels.

      • The configuration can be saved even if the specified HDFS directory does not exist. If a directory with that name is created later, it inherits the configured label mapping within 30 minutes.
      • After a labeled directory is deleted, a new directory with the same name inherits its mapping within 30 minutes.
    4. Click Save for the configuration to take effect. You do not need to restart the HDFS service.
    5. On the HDFS client (see Configuring Labels on the Cluster Client), run the following command to check whether the directory label has taken effect:
      hdfs nodelabel -listLabelExpression -path <path>

      In the preceding command, <path> indicates the HDFS directory to be checked.

  • Configuring Labels on the Cluster Client
    1. Install the client. If the client has been installed, skip this step.

      In the following steps, the installation directory is assumed to be /opt/client. Replace it with the actual installation directory.

      For details about how to download and install the cluster client, see Installing an MRS Cluster Client.

    2. Log in to the node where the client is installed as the client installation user.
    3. Go to the client installation directory, for example, /opt/client.
      cd /opt/client
    4. Run the following command to configure environment variables:
      source bigdata_env
    5. If the cluster is in security mode, run the following command to authenticate the user. If the cluster is in normal mode, skip this step.
      kinit Component service user
    6. Run the following commands to add labels to or remove labels from the nodes matching an expression:
      hdfs nodelabel -setLabelExpression <expression> -add <label1,label2,...> 
      hdfs nodelabel -setLabelExpression <expression> -remove <label1,label2,...>
      • <expression>: node expression, which supports the following syntax:
        • Hostname or IP address: for example, host1.test.com or 192.168.1.100.
        • Wildcard: an asterisk (*) matches any number of characters, and a question mark (?) matches a single character. For example, *.test.com.
        • Regular expression: starts with a tilde (~), for example, ~.*\.test\.com.
      • -add: adds the specified labels to the matched nodes.
      • -remove: removes the specified labels from the matched nodes.

      For example, add the hot label to data-node-[1-5].

      hdfs nodelabel -setLabelExpression "data-node-[1-5]" -add hot
  • Configuring Labels Through Java APIs

    To set label expressions through the Java API, create an instance of the NodeLabelFileSystem class and use it to invoke the setLabelExpression(String src, String labelExpression) method, where src is a directory or file path on HDFS and labelExpression is the label expression.
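The three node-expression forms described above can be modeled with a short sketch (illustrative only, not the HDFS matcher; shell-style matching here also covers bracket ranges such as data-node-[1-5], as in the earlier example):

```python
# Illustrative model of the node-expression forms: exact names, shell-style
# wildcards (*, ?, [seq]), and '~'-prefixed regular expressions.
# Not the actual HDFS implementation.
import fnmatch
import re

def node_matches(expression, host):
    if expression.startswith("~"):                 # regular expression form
        return re.fullmatch(expression[1:], host) is not None
    if any(c in expression for c in "*?["):        # wildcard form
        return fnmatch.fnmatch(host, expression)
    return expression == host                      # exact hostname/IP form

print(node_matches("*.test.com", "host1.test.com"))        # True
print(node_matches("~.*\\.test\\.com", "host1.test.com"))  # True
print(node_matches("data-node-[1-5]", "data-node-3"))      # True
```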

Block Replica Location Selection

NodeLabel supports different label policies for replicas. The expression label-1,label-2,label-3 indicates that three replicas are respectively placed in DataNodes containing label-1, label-2, and label-3. Different replica policies are separated by commas (,).

To place two replicas on DataNodes with label-1, set the expression to label-1[replica=2],label-2,label-3. If the default number of replicas is 3, two nodes with label-1 and one node with label-2 are selected. If the default number of replicas is 4, two nodes with label-1, one node with label-2, and one node with label-3 are selected. Replicas are matched to the replica policies from left to right. If the number of replicas exceeds the total specified by the expression, the extra replicas are placed on nodes matching the last policy: with a default of 5 replicas, the extra replica is placed on a node labeled with label-3.
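The left-to-right allocation described above, including surplus replicas going to the last policy, can be sketched as follows (illustrative model only, not the HDFS implementation):

```python
# Hedged sketch: distribute a file's replica count across ordered replica
# policies, honoring per-policy [replica=N] counts (default 1) and placing
# any surplus on the last policy.
def distribute_replicas(policy_counts, total_replicas):
    """policy_counts: requested replicas per policy, left to right."""
    placed = []
    remaining = total_replicas
    for want in policy_counts:
        take = min(want, remaining)
        placed.append(take)
        remaining -= take
    placed[-1] += remaining   # surplus replicas go to the last policy
    return placed

# label-1[replica=2],label-2,label-3 with default replication 3 / 4 / 5:
print(distribute_replicas([2, 1, 1], 3))  # [2, 1, 0]
print(distribute_replicas([2, 1, 1], 4))  # [2, 1, 1]
print(distribute_replicas([2, 1, 1], 5))  # [2, 1, 2]
```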

If the ACL function is enabled and the user does not have permission to access a label used in the expression, DataNodes with that label are not selected for replicas.

Redundant Block Replica Deletion

If the number of block replicas exceeds the value of dfs.replication (the number of replicas allowed for a file), HDFS deletes the redundant replicas to improve resource utilization. To view the value, go to the HDFS service configuration page by referring to Modifying Cluster Service Configuration Parameters and search for the parameter.

The deletion rules are as follows:

  • Preferentially delete replicas that do not meet any expression.

    For example: The default number of file replicas is 3.

    The label expression of /test is LA[replica=1],LB[replica=1],LC[replica=1];

    The file replicas of /test are distributed on four nodes (D1 to D4), corresponding to labels (LA to LD).

    D1:LA
    D2:LB
    D3:LC
    D4:LD

    Then, block replicas on node D4 will be deleted.

  • If all replicas meet the expressions, delete the redundant replicas that exceed the number specified by the expression.

    For example: The default number of file replicas is 3.

    The label expression of /test is LA[replica=1],LB[replica=1],LC[replica=1];

    The file replicas of /test are distributed on the following four nodes, corresponding to the following labels.

    D1:LA
    D2:LA
    D3:LB
    D4:LC

    Then, block replicas on node D1 or D2 will be deleted.

  • If the file owner or the file owner's group cannot access a label, replicas are preferentially deleted from the DataNodes mapped to that label.
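The first two deletion rules can be modeled roughly as follows (illustrative sketch, not the HDFS implementation: replicas matching no policy label are dropped first, then replicas beyond a label's quota, until the target count is reached):

```python
# Hedged model of redundant-replica deletion priority. node_labels maps each
# replica's node to its label; quotas maps each policy label to its allowed
# replica count; target is the desired replication factor.
def replicas_to_delete(node_labels, quotas, target):
    doomed = []
    seen = {}
    for node, label in node_labels.items():
        if label not in quotas:                 # rule 1: matches no expression
            doomed.append(node)
        else:
            seen[label] = seen.get(label, 0) + 1
            if seen[label] > quotas[label]:     # rule 2: beyond the label's quota
                doomed.append(node)
    excess = len(node_labels) - target
    return doomed[:excess]

quotas = {"LA": 1, "LB": 1, "LC": 1}
# First example above: D4 carries LD, which matches no expression.
print(replicas_to_delete({"D1": "LA", "D2": "LB", "D3": "LC", "D4": "LD"}, quotas, 3))
# Second example: two replicas carry LA, one more than its quota.
print(replicas_to_delete({"D1": "LA", "D2": "LA", "D3": "LB", "D4": "LC"}, quotas, 3))
```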

Example of a Label-based Block Placement Policy

Assume that a cluster has six DataNodes, dn-1 to dn-6, with IP addresses 10.1.120.1 to 10.1.120.6 (10.1.120.[1-6]). The following directories and files are configured with label expressions. The default number of block replicas is 3.

  • The following provides three equivalent ways to express the DataNode labels in the host2tags configuration:
    • Regular expression of the host name
      /dn-[1456]/ = label-1,label-2
      /dn-[26]/ = label-1,label-3
      /dn-[3456]/ = label-1,label-4
      /dn-5/ = label-5
    • IP address range expression
      10.1.120.[1-6] = label-1
      10.1.120.1 = label-2
      10.1.120.2 = label-3
      10.1.120.[3-6] = label-4
      10.1.120.[4-6] = label-2
      10.1.120.5 = label-5
      10.1.120.6 = label-3
    • Common host name expression
      /dn-1/ = label-1, label-2
      /dn-2/ = label-1, label-3
      /dn-3/ = label-1, label-4
      /dn-4/ = label-1, label-2, label-4
      /dn-5/ = label-1, label-2, label-4, label-5
      /dn-6/ = label-1, label-2, label-3, label-4
  • The label expressions of the directories are set as follows:
    /dir1 = label-1
    /dir2 = label-1 && label-3
    /dir3 = label-2 || label-4[replica=2]
    /dir4 = (label-2 || label-3) && label-4
    /dir5 = !label-1
    /sdir2.txt = label-1 && label-3[replica=3,fallback=NONE]
    /dir6 = label-4[replica=2],label-2

    For details about how to set label expressions, see Configuring Labels on the Cluster Client.

    The file data block storage locations are as follows:

    • Data blocks of files in the /dir1 directory can be stored on any of the following nodes: dn-1, dn-2, dn-3, dn-4, dn-5, and dn-6.
    • Data blocks of files in the /dir2 directory can be stored on the dn-2 and dn-6 nodes. The default number of block replicas is 3. The expression matches only two DataNodes. The third replica will be stored on one of the remaining nodes in the cluster.
    • Data blocks of files in the /dir3 directory can be stored on any three of the following nodes: dn-1, dn-3, dn-4, dn-5, and dn-6.
    • Data blocks of files in the /dir4 directory can be stored on the dn-4, dn-5, and dn-6 nodes.
    • Data blocks of files in the /dir5 directory do not match any DataNode and will be stored on any three nodes in the cluster, which is the same as the default block selection policy.
    • For the data blocks of the /sdir2.txt file, two replicas are stored on the dn-2 and dn-6 nodes. The remaining replica is not stored because fallback=NONE is specified.
    • Data blocks of the files in the /dir6 directory are stored on the two nodes with label-4 selected from dn-3, dn-4, dn-5, and dn-6 and another node with label-2. If the specified number of file replicas in the /dir6 directory is more than 3, the extra replicas will be stored on a node with label-2.
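The boolean label terms used in these expressions (&&, ||, !) can be evaluated against a node's label set with a small sketch (illustrative only; it rewrites the term into a Python boolean expression, which the real parser does not do):

```python
# Hedged sketch: evaluate a boolean label term such as
# "(label-2 || label-3) && label-4" against a node's label set.
import re

def term_matches(term, node_labels):
    # Replace each label token with True/False depending on membership.
    expr = re.sub(r"[\w.-]+", lambda m: str(m.group(0) in node_labels), term)
    expr = expr.replace("&&", " and ").replace("||", " or ").replace("!", " not ")
    # Safe here: expr now contains only True/False, boolean operators, and parentheses.
    return eval(expr)

dn4 = {"label-1", "label-2", "label-4"}
print(term_matches("(label-2 || label-3) && label-4", dn4))  # True
print(term_matches("!label-1", dn4))                         # False
```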
