
Changing NodeManager Storage Directories

Scenario

If the storage directories defined for YARN NodeManager are incorrect or the YARN storage plan changes, the MRS cluster administrator needs to change the NodeManager storage directories on FusionInsight Manager to ensure that YARN runs smoothly. The NodeManager storage directories consist of the local storage directory (yarn.nodemanager.local-dirs) and the log directory (yarn.nodemanager.log-dirs). Changing the NodeManager storage directory includes the following scenarios:

  • Change the storage directory of the NodeManager role. In this way, the storage directories of all NodeManager instances are changed.
  • Change the storage directory of a single NodeManager instance. In this way, only the storage directory of this instance is changed, and the storage directories of other instances remain the same.
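Both parameters are standard YARN settings, and each accepts a comma-separated list of directories, so a node can spread container data and logs across several disks. A minimal sketch of the two values, assuming two data disks mounted under /srv/BigData (the paths are illustrative only, not prescribed values):

yarn.nodemanager.local-dirs: /srv/BigData/data1/nm/localdir,/srv/BigData/data2/nm/localdir
yarn.nodemanager.log-dirs: /srv/BigData/data1/nm/containerlogs,/srv/BigData/data2/nm/containerlogs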

Impact on the System

  • The cluster must be stopped and restarted while the storage directory of the NodeManager role is being changed, and the cluster cannot provide services until it is restarted.
  • A NodeManager instance must be stopped and restarted while its storage directory is being changed, and the instance on that node cannot provide services until it is restarted.
  • The service parameter configurations that reference the storage directories must also be updated to the new directories.
  • After the storage directories of NodeManager are changed, you need to download and install the client again.

Prerequisites

  • New disks have been prepared and installed on each data node, and the disks are formatted.
  • New directories have been planned to store the data that currently resides in the original directories.
  • The MRS cluster administrator user admin has been prepared.

Procedure

For versions earlier than MRS 2.0.1, perform the following steps:

  1. Check the environment.

    1. Log in to MRS Manager and click the cluster name. Choose Services and check whether the health status of Yarn is Good.
      • If yes, go to 1.c.
      • If no, go to 1.b.
    2. Rectify the Yarn fault. No further action is required.
    3. Determine whether to change the storage directory of the NodeManager role or that of a single NodeManager instance:
      • To change the storage directory of the NodeManager role, go to 2.
      • To change the storage directory of a single NodeManager instance, go to 3.

  2. Change the storage directory of the NodeManager role.

    1. Click the cluster name and choose Services > Yarn > Stop to stop the Yarn service.
    2. Log in as user root to each node on which the Yarn service is installed, and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.
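        Assuming the new disk appears as /dev/sdb1 (a hypothetical device name; substitute the actual device in your environment), the mount command might be:

        mount /dev/sdb1 ${BIGDATA_DATA_HOME}/data2

        Also add a matching entry to /etc/fstab so that the mount persists across reboots.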

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On MRS Manager, click the cluster name. Choose Services > Yarn > Instance. Select the NodeManager instance of the corresponding host. Choose Instance Configuration > All Configurations.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs.

    4. Click Save Configuration, select Restart the affected services or instances, and click OK to restart the Yarn service.

      Click Finish when the system displays "Operation successful". Yarn is successfully started. No further action is required.
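      Optionally, confirm on a node that NodeManager has recreated its directories under the new path (an illustrative check, assuming the example log directory above):

      ls -ld /srv/BigData/data2/nm/containerlogs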

  3. Change the storage directory of a single NodeManager instance.

    1. Click the cluster name. Choose Services > Yarn > Instance. Select the NodeManager instance whose storage directory needs to be modified, and choose More > Stop Instance.
    2. Log in to the NodeManager node as user root and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.
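        Assuming the new disk appears as /dev/sdb1 (a hypothetical device name; substitute the actual device in your environment), the mount command might be:

        mount /dev/sdb1 ${BIGDATA_DATA_HOME}/data2

        Also add a matching entry to /etc/fstab so that the mount persists across reboots.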

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On MRS Manager, click the specified NodeManager instance and switch to the Instance Configuration tab page.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs.

    4. Click Save Configuration and select Restart the affected services or instances. Click OK to restart the NodeManager instance.

      Click Finish when the system displays "Operation successful". The NodeManager instance is successfully started.

For MRS 2.0.1 or later, but earlier than MRS 3.x, perform the following operations:

  1. Check the environment.

    1. Log in to the MRS console. In the left navigation pane, choose Clusters > Active Clusters, and click a cluster name. Choose Components and check whether the health status of Yarn is Good.
      • If yes, go to 1.c.
      • If no, the Yarn status is unhealthy. Go to 1.b.
    2. Rectify the Yarn fault. No further action is required.
    3. Determine whether to change the storage directory of the NodeManager role or that of a single NodeManager instance:
      • To change the storage directory of the NodeManager role, go to 2.
      • To change the storage directory of a single NodeManager instance, go to 3.

  2. Change the storage directory of the NodeManager role.

    1. Choose Clusters > Active Clusters, and click a cluster name. Choose Components > Yarn > Stop to stop the Yarn service.
    2. Log in as user root to each ECS node where the Yarn service is installed, and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.
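        Assuming the new disk appears as /dev/sdb1 (a hypothetical device name; substitute the actual device in your environment), the mount command might be:

        mount /dev/sdb1 ${BIGDATA_DATA_HOME}/data2

        Also add a matching entry to /etc/fstab so that the mount persists across reboots.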

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On the MRS console, choose Clusters > Active Clusters and click a cluster name. Choose Components > Yarn > Instances. Select the NodeManager instance of the corresponding host. Choose Instance Configuration > All Configurations.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs.

    4. Click Save Configuration, select Restart the affected services or instances, and click OK to restart the Yarn service.

      Click Finish when the system displays "Operation successful". Yarn is successfully started. No further action is required.
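      Optionally, confirm on a node that NodeManager has recreated its directories under the new path (an illustrative check, assuming the example log directory above):

      ls -ld /srv/BigData/data2/nm/containerlogs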

  3. Change the storage directory of a single NodeManager instance.

    1. Choose Clusters > Active Clusters, and click a cluster name. Choose Components > Yarn > Instances. Select the NodeManager instance whose storage directory needs to be modified, and choose More > Stop Instance.
    2. Log in to the NodeManager node (an ECS) as user root and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.
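        Assuming the new disk appears as /dev/sdb1 (a hypothetical device name; substitute the actual device in your environment), the mount command might be:

        mount /dev/sdb1 ${BIGDATA_DATA_HOME}/data2

        Also add a matching entry to /etc/fstab so that the mount persists across reboots.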

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On the MRS console, click the specified NodeManager instance and switch to the Instance Configuration tab page.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs.

    4. Click Save Configuration and select Restart the affected services or instances. Click OK to restart the NodeManager instance.

      Click Finish when the system displays "Operation successful". The NodeManager instance is successfully started.

For MRS 3.x or later, perform the following operations:

  1. Check the environment.

    1. Log in to Manager and choose Cluster > Name of the desired cluster > Services. Check whether the Running Status of Yarn is Normal.
      • If yes, go to 1.c.
      • If no, the Yarn status is unhealthy. In this case, go to 1.b.
    2. Rectify faults of Yarn. No further action is required.
    3. Determine whether to change the storage directory of the NodeManager role or that of a single NodeManager instance:
      • To change the storage directory of the NodeManager role, go to 2.
      • To change the storage directory of a single NodeManager instance, go to 3.

  2. Change the storage directory of the NodeManager role.

    1. Choose Cluster > Name of the desired cluster > Services > Yarn > Stop to stop the Yarn service.
    2. Log in as user root to each data node where the Yarn service is installed and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.
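        Assuming the new disk appears as /dev/sdb1 (a hypothetical device name; substitute the actual device in your environment), the mount command might be:

        mount /dev/sdb1 ${BIGDATA_DATA_HOME}/data2

        Also add a matching entry to /etc/fstab so that the mount persists across reboots.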

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On the Manager portal, choose Cluster > Name of the desired cluster > Services > Yarn > Instance. Select the NodeManager instance of the corresponding host, click Instance Configuration, and select All Configurations.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs.

    4. Click Save, and then click OK to restart the Yarn service.

      Click Finish when the system displays "Operation successful". Yarn is successfully started. No further action is required.
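      Optionally, confirm on a node that NodeManager has recreated its directories under the new path (an illustrative check, assuming the example log directory above):

      ls -ld /srv/BigData/data2/nm/containerlogs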

  3. Change the storage directory of a single NodeManager instance.

    1. Choose Cluster > Name of the desired cluster > Services > Yarn > Instance, select the NodeManager instance whose storage directory needs to be modified, and choose More > Stop.
    2. Log in to the NodeManager node as user root, and perform the following operations:
      1. Create a target directory.

        For example, to create the target directory ${BIGDATA_DATA_HOME}/data2, run the following command:

        mkdir ${BIGDATA_DATA_HOME}/data2

      2. Mount the target directory to the new disk.

        For example, mount ${BIGDATA_DATA_HOME}/data2 to the new disk.
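        Assuming the new disk appears as /dev/sdb1 (a hypothetical device name; substitute the actual device in your environment), the mount command might be:

        mount /dev/sdb1 ${BIGDATA_DATA_HOME}/data2

        Also add a matching entry to /etc/fstab so that the mount persists across reboots.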

      3. Modify permissions on the new directory.

        For example, to modify permissions on the ${BIGDATA_DATA_HOME}/data2 directory, run the following commands:

        chmod -R 750 ${BIGDATA_DATA_HOME}/data2
        chown -R omm:wheel ${BIGDATA_DATA_HOME}/data2

    3. On Manager, click the specified NodeManager instance, and switch to the Instance Configuration page.

      Change the value of yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs to the new target directory.

      For example, change the value of yarn.nodemanager.log-dirs to /srv/BigData/data2/nm/containerlogs.

    4. Click Save, and then click OK to restart the NodeManager instance.

      Click Finish when the system displays "Operation successful". The NodeManager instance is successfully started.