
Configuring Resources for a NodeManager Role Instance

Scenario

If the nodes where NodeManagers are deployed have different hardware resources (for example, different numbers of CPU cores or different memory sizes) but every NodeManager is configured with the same available resources, resources may be wasted on larger nodes or instances on smaller nodes may become abnormal. In this case, adjust the hardware resource configuration of each NodeManager so that the resources of every node can be fully utilized.

Impact on the System

NodeManager role instances must be restarted for the new configuration to take effect, and the role instances are unavailable during restart.

Prerequisites

  • For clusters of versions earlier than MRS 3.x: You have logged in to the MRS management console.
  • For clusters of MRS 3.x or later: You have logged in to Manager.

Procedure

For versions earlier than MRS 3.x, perform the following operations:

  1. Choose Clusters > Active Clusters, and click a cluster name. Choose Components > Yarn > Instances.
  2. Click NodeManager in the Role column and go to the Instance Configuration tab page. Select All from the Basic drop-down list, and search for the required parameters.
  3. Enter yarn.nodemanager.resource.cpu-vcores in the search box, and set the number of vCPUs that can be used by NodeManager on the current node. You are advised to set this parameter to 1.5 to 2 times the number of actual logical CPUs on the node. Enter yarn.nodemanager.resource.memory-mb in the search box, and set the physical memory size that can be used by NodeManager on the current node. You are advised to set this parameter to 75% to 90% of the actual physical memory size of the node. For a sketch that derives candidate values from the node's hardware, see the example after this procedure.

    Enter yarn.scheduler.maximum-allocation-vcores in the search box, and set the maximum number of vCPUs that can be allocated to a single container. Enter yarn.scheduler.maximum-allocation-mb in the search box, and set the maximum amount of memory that can be allocated to a single container. These two parameters cannot be modified at the instance level. Change their values in the Yarn service configuration, and restart the Yarn service for the changes to take effect.

  4. Click Save Configuration, select Restart the affected services or instances, and click OK to restart the NodeManager role instance.

    When Operation succeeded is displayed, click Finish. The NodeManager role instance is restarted successfully.
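The recommended values in step 3 are simple functions of the node's actual hardware. The following Python sketch (not part of MRS; run it manually on the node that hosts the NodeManager instance) derives candidate values for yarn.nodemanager.resource.cpu-vcores and yarn.nodemanager.resource.memory-mb using the guideline factors quoted above. It assumes a Linux node where /proc/meminfo is readable, and the results are only suggestions that you still enter in the instance configuration yourself.

    #!/usr/bin/env python3
    # Sketch: derive candidate NodeManager resource settings from this node's hardware.
    # Assumption: Linux node (reads /proc/meminfo). The 1.5-2x and 75%-90% factors are the
    # guideline values quoted in this section, not values enforced by Yarn.
    import os

    def logical_cpus():
        # Number of logical CPUs visible to the operating system.
        return os.cpu_count() or 1

    def physical_memory_mb():
        # Total physical memory in MB, read from /proc/meminfo.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) // 1024   # /proc/meminfo reports kB
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    if __name__ == "__main__":
        cpus = logical_cpus()
        mem_mb = physical_memory_mb()
        print(f"Logical CPUs: {cpus}, physical memory: {mem_mb} MB")
        # yarn.nodemanager.resource.cpu-vcores: 1.5 to 2 times the logical CPU count
        print(f"yarn.nodemanager.resource.cpu-vcores: {int(cpus * 1.5)} to {cpus * 2}")
        # yarn.nodemanager.resource.memory-mb: 75% to 90% of physical memory
        print(f"yarn.nodemanager.resource.memory-mb: {int(mem_mb * 0.75)} to {int(mem_mb * 0.90)}")

For example, on a node with 16 logical CPUs and 65536 MB of physical memory, the sketch suggests 24 to 32 vCPUs and 49152 MB to 58982 MB. The same calculation applies to the procedure for MRS 3.x or later below.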

For MRS 3.x or later, perform the following operations:

  1. Choose Cluster > Name of the desired cluster > Services > Yarn > Instance.
  2. Click the role instance name corresponding to the node where NodeManager is deployed, switch to Instance Configuration, and select All Configurations.
  3. Enter yarn.nodemanager.resource.cpu-vcores in the search box, and set the number of vCPUs that can be used by NodeManager on the current node. You are advised to set this parameter to 1.5 to 2 times the number of actual logical CPUs on the node. Enter yarn.nodemanager.resource.memory-mb in the search box, and set the physical memory size that can be used by NodeManager on the current node. You are advised to set this parameter to 75% of the actual physical memory size of the node.

    Enter yarn.scheduler.maximum-allocation-vcores in the search box, and set the maximum number of vCPUs that can be allocated to a single container. Enter yarn.scheduler.maximum-allocation-mb in the search box, and set the maximum amount of memory that can be allocated to a single container. These two parameters cannot be modified at the instance level. Change their values in the Yarn service configuration, and restart the Yarn service for the changes to take effect.

  4. Click Save, and then click OK to restart the NodeManager role instance.

    When a message is displayed indicating that the operation is successful, click Finish. The NodeManager role instance is restarted successfully. To check the resources that each node now reports, see the sketch after this procedure.
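After the NodeManager instances restart, you can check the resources that each node actually reports to Yarn. The following Python sketch queries the Hadoop ResourceManager cluster nodes REST API (GET /ws/v1/cluster/nodes). The ResourceManager address and access method used here (plain HTTP on port 8088, no authentication) are assumptions; in a real cluster, especially a security cluster, the address, port, protocol, and authentication depend on your environment, and field names in the response can differ slightly between Hadoop versions.

    #!/usr/bin/env python3
    # Sketch: list the memory and vCores that each NodeManager reports to the ResourceManager.
    # Assumptions: the ResourceManager REST API is reachable over plain HTTP on port 8088
    # without authentication; replace RM_ADDRESS with your ResourceManager address.
    import json
    import urllib.request

    RM_ADDRESS = "http://resourcemanager-host:8088"   # hypothetical address

    def list_node_resources(rm_address):
        url = rm_address.rstrip("/") + "/ws/v1/cluster/nodes"
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        nodes = (data.get("nodes") or {}).get("node") or []
        for node in nodes:
            # Field names follow the Hadoop ResourceManager REST API and may vary by version.
            print(node.get("nodeHostName"), node.get("state"),
                  "availMemoryMB:", node.get("availMemoryMB"),
                  "usedMemoryMB:", node.get("usedMemoryMB"),
                  "availableVirtualCores:", node.get("availableVirtualCores"),
                  "usedVirtualCores:", node.get("usedVirtualCores"))

    if __name__ == "__main__":
        list_node_resources(RM_ADDRESS)

For an idle node, the available memory and vCores should match the yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores values configured for that node; on a busy node, the configured totals are split between the available and used fields.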