Help Center / MapReduce Service / Component Operation Guide (LTS) / Using HetuEngine / Managing HetuEngine Compute Instances / Configuring the Nodes on Which HetuEngine Coordinator Is Running
Updated on 2024-10-25 GMT+08:00

Configuring the Nodes on Which HetuEngine Coordinator Is Running

By default, coordinators and workers start on random Yarn NodeManager nodes, so all ports must be opened on all NodeManager nodes. Using Yarn resource labels, HetuEngine allows you to specify the NodeManager nodes on which coordinators run.

Prerequisites

You have created a user for accessing the HetuEngine web UI. For details, see Creating a HetuEngine Permission Role.

Procedure

  1. Log in to FusionInsight Manager as a user who can access the HetuEngine web UI.
  2. Set Yarn parameters so that the scheduler handles placement constraints (PlacementConstraints).

    1. Choose Cluster > Services > Yarn. Click the Configurations tab and then All Configurations. On the displayed page, search for yarn.resourcemanager.placement-constraints.handler, set Value to scheduler, and click Save.
    2. Click the Instance tab, select the active and standby ResourceManager instances, click More, and select Restart Instance to restart the ResourceManager instances of Yarn. Then wait until they are restarted successfully.
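Behind the scenes, the setting in step 2 maps to a standard Hadoop YARN property. A minimal yarn-site.xml sketch is shown below; the property name and value are as documented for Hadoop 3.1 and later, while the surrounding file layout is illustrative only (on an MRS cluster, always change this through FusionInsight Manager rather than editing the file directly):

```xml
<!-- yarn-site.xml: let the scheduler itself process placement constraints -->
<property>
  <name>yarn.resourcemanager.placement-constraints.handler</name>
  <!-- "scheduler" handles constraints inside the scheduler;
       other documented values are "disabled" and "placement-processor" -->
  <value>scheduler</value>
</property>
```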

  3. Configure resource labels.

    1. Choose Tenant Resources > Resource Pool. On the displayed page, click Add Resource Pool.
    2. Select a cluster, and enter a resource pool name and a resource label name, for example, pool1. Select the desired hosts, click the add button to move the selected hosts to the new resource pool, and click OK.
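Adding a resource pool in Manager creates a Yarn node label behind the scenes. On a cluster managed directly with the YARN CLI, the equivalent operations would look roughly as follows (node1 and node2 are placeholder hostnames; run the commands as a YARN administrator — this is a sketch for reference, not a replacement for the Manager procedure):

```shell
# Register the node label in the cluster
yarn rmadmin -addToClusterNodeLabels "pool1"

# Attach the label to the chosen NodeManager hosts (placeholder hostnames)
yarn rmadmin -replaceLabelsOnNode "node1=pool1 node2=pool1"

# Verify that the label is registered
yarn cluster --list-node-labels
```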

  4. Configure the queue capacity policy of a resource pool.

    1. In the navigation pane on the left, click Dynamic Resource Plan. In the Resource Distribution Policy tab, select the resource pool created in the previous step for Resource Pool.
    2. Locate the row that contains the target resource name in the Resource Allocation area, and click Modify in the Operation column.
    3. In the Modify Resource Allocation dialog box, set the resource capacity policy for a queue in the selected resource pool. Ensure that Maximum Resource is greater than 0. For details, see Configuring the Queue Capacity Policy of a Resource Pool.
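The queue capacity policy in step 4 corresponds to the Capacity Scheduler's node-label capacity settings. An illustrative capacity-scheduler.xml fragment is shown below; the queue name default and the label pool1 are assumptions, and on MRS these values should be set through the Manager dialog rather than by editing the file:

```xml
<!-- Allow the queue to use nodes labeled pool1 -->
<property>
  <name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
  <value>pool1</value>
</property>
<!-- Give the queue 100% of the pool1 partition; Maximum Resource must be > 0 -->
<property>
  <name>yarn.scheduler.capacity.root.default.accessible-node-labels.pool1.capacity</name>
  <value>100</value>
</property>
```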

  5. Set HetuEngine parameters to enable the coordinator placement policy and enter the node resource label.

    1. Choose Cluster > Services > HetuEngine. Click the Configurations tab and then All Configurations. On the displayed page, set the following parameters and click Save.
      Table 1 Setting HetuEngine parameters

      Parameter                                              Value
      yarn.hetuserver.engine.coordinator.placement.enabled   true
      yarn.hetuserver.engine.coordinator.placement.label     Node resource label created in step 3, for example, pool1

    2. Click Dashboard, click More, and select Restart Service. Wait until the HetuEngine service is restarted successfully.

  6. Restart the HetuEngine compute instance.

    1. In the Basic Information area on the Dashboard page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
    2. Stop the running compute instance and click Start in the Operation column to start the HetuEngine compute instance.

  7. Check the node on which the coordinator is running.

    1. Return to FusionInsight Manager.
    2. Choose Cluster > Services > Yarn. In the Basic Information area on the Dashboard page, click the link next to ResourceManager WebUI.
    3. In the navigation pane on the left, choose Cluster > Nodes. You can see that the coordinator is running on a node in the resource pool created in step 3.
      Figure 1 coordinator
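If you prefer the command line over the ResourceManager web UI, the placement can also be checked with the YARN CLI (the node ID below is a placeholder; this assumes CLI access to the cluster):

```shell
# Find the node IDs of the NodeManagers
yarn node -list

# Inspect a node from the pool1 resource pool; the output includes
# a Node-Labels field (placeholder node ID)
yarn node -status node1:8041
```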