Updated on 2024-11-29 GMT+08:00

Changing NodeManager Storage Directories

Scenario

If the storage directories defined by YARN NodeManager are incorrect or the YARN storage plan changes, the MRS cluster administrator needs to modify the NodeManager storage directories on FusionInsight Manager to ensure smooth YARN running. The NodeManager storage directories include the local storage directory (yarn.nodemanager.local-dirs) and the log directory (yarn.nodemanager.log-dirs). Changing the NodeManager storage directories covers the following scenarios:

  • Change the storage directory of the NodeManager role. In this way, the storage directories of all NodeManager instances are changed.
  • Change the storage directory of a single NodeManager instance. In this way, only the storage directory of this instance is changed, and the storage directories of other instances remain the same.

Impact on the System

  • The cluster must be stopped and restarted while the storage directory of the NodeManager role is being changed, and the cluster cannot provide services until it is started again.
  • A NodeManager instance must be stopped and restarted while its storage directory is being changed, and the instance on its node cannot provide services until it is started again.
  • The directory for storing service parameter configurations must also be updated.
  • After the storage directories of NodeManager are changed, you need to download and install the client again.

Prerequisites

  • New disks have been prepared and installed on each data node, and the disks are formatted.
  • New directories have been planned for storing data in the original directories.
  • The MRS cluster administrator user admin has been prepared.

Procedure

  1. Check the environment.

    1. Log in to FusionInsight Manager, choose Cluster > Services, and check whether Running Status of Yarn is Normal.
      • If yes, go to 1.c.
      • If no, the Yarn status is unhealthy. In this case, go to 1.b.
    2. Rectify the Yarn fault. No further action is required.
    3. Determine whether to change the storage directory of the NodeManager role or that of a single NodeManager instance:
      • To change the storage directory of the NodeManager role, go to 2.
      • To change the storage directory of a single NodeManager instance, go to 3.

  2. Change the storage directory of the NodeManager role.

    1. Choose Cluster > Services > Yarn and click Stop Service to stop the Yarn service.
    2. Log in to each data node where the Yarn service is installed as user root and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2
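The three operations above can be sketched as one shell sequence. This is a hedged sketch rather than product tooling: the prepare_nm_dir helper name and the device /dev/sdb1 are illustrative, and the root-only mount and chown steps are left commented so the sequence can be dry-run by an unprivileged user.

```shell
# Sketch of the create/mount/permission sequence; ${BIGDATA_DATA_HOME},
# the device name /dev/sdb1, and the omm:wheel ownership are assumptions
# taken from the surrounding steps.
prepare_nm_dir() {
    local new_dir="$1"

    # 1. Create the target directory.
    mkdir -p "${new_dir}"

    # 2. Mount the new disk on it (root only; device name is an example):
    #      mount /dev/sdb1 "${new_dir}"
    #    Add a matching /etc/fstab entry so the mount survives a reboot.

    # 3. Restrict permissions. The ownership change needs root and assumes
    #    the omm user and wheel group exist on the node:
    chmod -R 750 "${new_dir}"
    #      chown -R omm:wheel "${new_dir}"
}

# Dry run against a scratch location instead of ${BIGDATA_DATA_HOME}/data2.
SCRATCH="$(mktemp -d)/data2"
prepare_nm_dir "${SCRATCH}"
```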

    3. On FusionInsight Manager, choose Cluster > Services > Yarn. Click Instance, select the NodeManager instance of the corresponding host, click Instance Configuration, and select All Configurations.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs. (The value of yarn.nodemanager.local-dirs should point to a data directory rather than a log directory.)
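Both parameters accept comma-separated lists of directories, so several disks can be configured at once. The following values are purely illustrative; the data1/data2 paths are assumptions, not values taken from this procedure:

```
yarn.nodemanager.local-dirs = /srv/BigData/data1/nm/localdir,/srv/BigData/data2/nm/localdir
yarn.nodemanager.log-dirs = /srv/BigData/data1/nm/containerlogs,/srv/BigData/data2/nm/containerlogs
```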

    4. Click Save, and then click OK. Restart the Yarn service.

      Click Finish when the system displays "Operation successful". Yarn is successfully started. No further action is required.

  3. Change the storage directory of a single NodeManager instance.

    1. Choose Cluster > Services > Yarn and click Instance. Select the NodeManager instance whose storage directory needs to be modified, click More, and select Stop Instance.
    2. Log in to the NodeManager node as user root, and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On FusionInsight Manager, click the specified NodeManager instance and switch to the Instance Configuration page.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs. (The value of yarn.nodemanager.local-dirs should point to a data directory rather than a log directory.)

    4. Click Save, and then click OK to restart the NodeManager instance.

      Click Finish when the system displays "Operation successful". The NodeManager instance is successfully started.
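After the restart, a quick check such as the following can confirm that the configured directories actually exist on the node. This is a hedged sketch: the check_dirs helper and the paths are illustrative, mirroring the example values above rather than any product command.

```shell
# Hypothetical post-change check: confirm each configured NodeManager
# directory exists on the node. Paths mirror the examples above.
check_dirs() {
    local missing=0 d
    for d in $1; do
        if [ -d "${d}" ]; then
            echo "ok: ${d}"
        else
            echo "missing: ${d}"
            missing=1
        fi
    done
    return "${missing}"
}

check_dirs "/srv/BigData/data2/nm/localdir /srv/BigData/data2/nm/containerlogs" \
    || echo "some configured directories are absent"
```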