Relationship Between Yarn and Other Components

Relationship Between Yarn and Spark

Spark computing and scheduling can be implemented in Yarn mode: Spark uses the computing resources provided by Yarn clusters to run tasks in a distributed way. Spark on Yarn has two modes: Yarn-cluster and Yarn-client.
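The deploy mode is selected when the application is submitted. As a minimal sketch (assuming a Java client that uses Spark's public SparkLauncher API on a node where the Spark client is installed and SPARK_HOME is set; the JAR path and class name are placeholders), either mode can be requested as follows:

```java
import org.apache.spark.launcher.SparkLauncher;

public class SubmitToYarn {
    public static void main(String[] args) throws Exception {
        Process launcher = new SparkLauncher()
                .setAppResource("/opt/client/myapp.jar") // placeholder application JAR
                .setMainClass("com.example.MyApp")       // placeholder main class
                .setMaster("yarn")                       // use the Yarn cluster's resources
                .setDeployMode("cluster")                // "cluster": driver runs in ApplicationMaster
                // .setDeployMode("client")              // "client": driver runs on the local client
                .launch();
        launcher.waitFor(); // wait for the spark-submit child process to exit
    }
}
```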

  • Yarn-cluster mode

    Figure 1 describes the operation framework.

    Figure 1 Spark on Yarn-cluster operation framework

    Spark on Yarn-cluster implementation process:

    1. The client generates the application information and then sends it to ResourceManager.
    2. ResourceManager allocates the first container (ApplicationMaster) to the Spark application and starts the driver in that container.
    3. ApplicationMaster applies to ResourceManager for resources to run containers.

      ResourceManager allocates the containers to ApplicationMaster, which communicates with the related NodeManagers and starts the executor in each obtained container. After an executor starts, it registers with the driver and applies for tasks.

    4. The driver allocates tasks to the executors.
    5. The executors run tasks and report their status to the driver.
  • Yarn-client mode

    Figure 2 describes the operation framework.

    Figure 2 Spark on Yarn-client operation framework

    Spark on Yarn-client implementation process:

    In Yarn-client mode, the driver is deployed and started on the client. Clients of earlier versions are incompatible with this mode; you are advised to use the Yarn-cluster mode.

    1. The client sends the Spark application request to ResourceManager, and ResourceManager returns the results, including information such as the application ID and the maximum and minimum available resources (see the sketch after this list). The client then packages all the information required to start ApplicationMaster and sends it to ResourceManager.
    2. After receiving the request, ResourceManager finds a proper node for ApplicationMaster and starts it on this node. ApplicationMaster is a role in Yarn, and its process name in Spark is ExecutorLauncher.
    3. Based on the resource requirements of each task, ApplicationMaster applies to ResourceManager for a series of containers to run the tasks.
    4. After receiving the list of newly allocated containers from ResourceManager, ApplicationMaster sends requests to the related NodeManagers to start the containers.

      ResourceManager allocates the containers to ApplicationMaster, which communicates with the related NodeManagers and starts the executor in each obtained container. After an executor starts, it registers with the driver and applies for tasks.

      Running containers are not suspended and their resources are not released.

    5. The driver allocates tasks to the executors. The executors run tasks and report their status to the driver.
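The exchange in step 1 can be made concrete with Hadoop's public YarnClient API. The following is a hedged sketch (generic Apache Hadoop code, not MRS-specific): the client asks ResourceManager for a new application and receives the application ID and the maximum resource capability; a real client would then fill in the ApplicationMaster launch context and submit it.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;

public class NewApplicationExample {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new Configuration()); // reads yarn-site.xml from the classpath
        yarnClient.start();

        // Step 1: ResourceManager returns the application ID and resource limits.
        YarnClientApplication app = yarnClient.createApplication();
        GetNewApplicationResponse response = app.getNewApplicationResponse();
        System.out.println("Application ID: " + response.getApplicationId());
        System.out.println("Max resources:  " + response.getMaximumResourceCapability());

        // A real submission would now populate app.getApplicationSubmissionContext()
        // (ApplicationMaster launch command and resources) and call
        // yarnClient.submitApplication(...), which triggers steps 2 to 5.
        yarnClient.stop();
    }
}
```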

Relationship Between Yarn and MapReduce

MapReduce is a batch processing computing framework that runs on Yarn.

MRv1 is the MapReduce implementation in Hadoop 1.0. It consists of the programming model (old and new programming APIs), the running environment (JobTracker and TaskTracker), and the data processing engine (MapTask and ReduceTask). MRv1 is weak in scalability, fault tolerance (JobTracker is a single point of failure), and multi-framework support (only the MapReduce computing framework is supported).

MRv2 is the MapReduce implementation in Hadoop 2.0. Its source code reuses the MRv1 programming model and data processing engine implementation, while its running environment consists of ResourceManager and ApplicationMaster. ResourceManager is a brand-new resource management system, and ApplicationMaster is responsible for splitting the data of a MapReduce job, assigning tasks, applying for resources, scheduling tasks, and tolerating faults.
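To make the MRv2 picture concrete, the following is a minimal word-count job written against the standard Apache Hadoop MapReduce API (a sketch, not MRS-specific code; the input and output paths come from command-line arguments). With mapreduce.framework.name set to yarn, the job is submitted to ResourceManager, and an ApplicationMaster is started to schedule its map and reduce tasks:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // MapTask side of the data processing engine: emit (word, 1) for each token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // ReduceTask side: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.framework.name", "yarn"); // submit to ResourceManager instead of running locally
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input path argument
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output path argument
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```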

Relationship Between Yarn and ZooKeeper

Figure 3 shows the relationship between ZooKeeper and Yarn.

Figure 3 Relationship Between ZooKeeper and Yarn
  1. When the system starts, each ResourceManager attempts to write state information to ZooKeeper. The ResourceManager that first writes its state information to ZooKeeper is elected as the active ResourceManager, and the others become standby ResourceManagers. The standby ResourceManagers periodically monitor the active ResourceManager election information in ZooKeeper.
  2. The active ResourceManager creates the Statestore directory in ZooKeeper to store application information. If the active ResourceManager becomes faulty, the standby ResourceManager obtains the application information from the Statestore directory and restores the data.
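This mechanism corresponds to the ResourceManager HA and recovery settings in yarn-site.xml. The following hedged sketch expresses typical values through Hadoop's Configuration API (generic Apache Hadoop property names; the host names and ZooKeeper quorum address are placeholders, and an MRS cluster normally preconfigures them):

```java
import org.apache.hadoop.conf.Configuration;

public class RmHaSettings {
    public static Configuration rmHaConf() {
        Configuration conf = new Configuration();
        conf.setBoolean("yarn.resourcemanager.ha.enabled", true);
        conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2"); // one active, one standby
        conf.set("yarn.resourcemanager.hostname.rm1", "master1.example.com"); // placeholder
        conf.set("yarn.resourcemanager.hostname.rm2", "master2.example.com"); // placeholder
        // Store and recover application state through the ZooKeeper-backed state store.
        conf.setBoolean("yarn.resourcemanager.recovery.enabled", true);
        conf.set("yarn.resourcemanager.store.class",
                "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore");
        conf.set("yarn.resourcemanager.zk-address", "zk1:2181,zk2:2181,zk3:2181"); // placeholder
        return conf;
    }
}
```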

Relationship Between Yarn and Tez

Hive on Tez job information depends on the TimelineServer capability of Yarn, which enables Hive tasks to display the current and historical status of applications and facilitates status storage and retrieval.
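This dependency comes down to enabling the Yarn timeline service and pointing Tez history logging at it. A hedged sketch of the relevant generic Apache Hadoop and Tez property names, again through the Configuration API (the hostname is a placeholder, and an MRS cluster normally preconfigures these):

```java
import org.apache.hadoop.conf.Configuration;

public class TimelineSettings {
    public static Configuration timelineConf() {
        Configuration conf = new Configuration();
        conf.setBoolean("yarn.timeline-service.enabled", true); // turn on the TimelineServer
        conf.set("yarn.timeline-service.hostname", "master1.example.com"); // placeholder
        // Tez publishes DAG history to the timeline service, where Hive can query it.
        conf.set("tez.history.logging.service.class",
                "org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService");
        return conf;
    }
}
```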