Updated on 2024-10-11 GMT+08:00

Relationship Between Spark2x and Other Components

Relationship Between Spark and HDFS

Data computed by Spark comes from multiple data sources, such as local files and HDFS. Most of it comes from HDFS, which can supply data at large scale for parallel computing. After computation, the results can be written back to HDFS.

A Spark application involves a Driver and Executors: the Driver schedules tasks, and the Executors run them.

Figure 1 describes the file reading process.

Figure 1 File reading process
The file reading process is as follows:
  1. Driver interconnects with HDFS to obtain information about File A.
  2. HDFS returns the detailed block information about this file.
  3. Driver sets the degree of parallelism based on the number of blocks and creates multiple tasks, one per block, to read the file.
  4. Executor runs the tasks and reads the blocks into a Resilient Distributed Dataset (RDD).
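The steps above can be sketched as a toy simulation in plain Python. This is not Spark code: the block size, helper names, and thread pool are illustrative stand-ins for HDFS block metadata lookups and executor read tasks.

```python
import concurrent.futures
import os
import tempfile

BLOCK_SIZE = 16  # bytes; real HDFS blocks default to 128 MB

def get_block_info(path):
    """Driver step: obtain (offset, length) block info for the file,
    as the HDFS NameNode would return it."""
    size = os.path.getsize(path)
    return [(off, min(BLOCK_SIZE, size - off)) for off in range(0, size, BLOCK_SIZE)]

def read_block(path, offset, length):
    """Executor task: read one block; each block becomes one RDD partition."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Create a 50-byte stand-in for "File A".
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"0123456789" * 5)
    path = tmp.name

# Driver: get block info, set parallelism to the block count, and
# submit one read task per block to a pool of "executors".
blocks = get_block_info(path)
with concurrent.futures.ThreadPoolExecutor(max_workers=len(blocks)) as pool:
    partitions = list(pool.map(lambda b: read_block(path, *b), blocks))
```

A 50-byte file with a 16-byte block size yields four blocks, so four tasks run in parallel and their results, concatenated, reproduce the file.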

Figure 2 describes the file writing process.

Figure 2 File writing process
The file writing process is as follows:
  1. Driver creates a directory where the file is to be written.
  2. Driver computes the number of write tasks based on how the RDD partitions are distributed, and sends these tasks to Executor.
  3. Executor runs these tasks and writes the RDD data to the directory created in step 1.
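The write path can be sketched the same way. The `part-NNNNN` naming mirrors Spark's output convention, but the helper and data below are illustrative only; real Spark ships these tasks to remote Executors.

```python
import os
import tempfile

def write_partition(out_dir, index, rows):
    """Executor task: write one RDD partition as a part file."""
    part = os.path.join(out_dir, f"part-{index:05d}")
    with open(part, "w") as f:
        f.writelines(row + "\n" for row in rows)
    return part

# Driver: create the output directory (step 1), then compute one write
# task per RDD partition and run them (steps 2-3).
rdd_partitions = [["a", "b"], ["c"], ["d", "e", "f"]]
out_dir = tempfile.mkdtemp()
part_files = [write_partition(out_dir, i, rows)
              for i, rows in enumerate(rdd_partitions)]
```

Three partitions produce three part files in the directory, one per write task.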

Relationship with Yarn

Spark computing and scheduling can run in Yarn mode: Spark uses the compute resources provided by a Yarn cluster and runs tasks in a distributed manner. Spark on Yarn has two modes: Yarn-cluster and Yarn-client.

  • Yarn-cluster mode

    Figure 3 describes the operation framework.

    Figure 3 Spark on Yarn-cluster operation framework

    Spark on Yarn-cluster implementation process:

    1. The client generates the application information and sends it to ResourceManager.
    2. ResourceManager allocates the first container (ApplicationMaster) to the Spark application and starts the Driver in that container.
    3. ApplicationMaster applies to ResourceManager for resources to run containers.

      ResourceManager allocates containers to ApplicationMaster, which communicates with the related NodeManagers and starts an executor in each obtained container. After an executor starts, it registers with the Driver and applies for tasks.

    4. The Driver allocates tasks to the executors.
    5. The executors run the tasks and report their status to the Driver.
  • Yarn-client mode

    Figure 4 describes the operation framework.

    Figure 4 Spark on Yarn-client operation framework

    Spark on Yarn-client implementation process:

    In Yarn-client mode, the Driver is deployed and started on the client. Clients of earlier versions are incompatible with this mode, so the Yarn-cluster mode is recommended.

    1. The client sends a Spark application request to ResourceManager, packaging all the information required to start ApplicationMaster. ResourceManager returns the results to the client, including the ApplicationId and the upper and lower limits of available resources. It then finds a proper node for ApplicationMaster and starts it on that node. ApplicationMaster is a role in Yarn; its process name in Spark is ExecutorLauncher.
    2. Based on the resource requirements of each task, ApplicationMaster applies to ResourceManager for a series of containers in which to run the tasks.
    3. After receiving the newly allocated container list from ResourceManager, ApplicationMaster sends information to the related NodeManagers to start the containers.

      ResourceManager allocates containers to ApplicationMaster, which communicates with the related NodeManagers and starts an executor in each obtained container. After an executor starts, it registers with the Driver and applies for tasks.

      Running containers are not suspended, and their resources are not released.

    4. The Driver allocates tasks to the executors. The executors run the tasks and report their status to the Driver.
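In practice, the mode is chosen at submission time with spark-submit's --master and --deploy-mode options. The commands below are a sketch: the class name and jar file are placeholders, and a running Yarn cluster is assumed.

```shell
# Yarn-cluster mode: the Driver runs inside ApplicationMaster on a
# cluster node, so the client may exit after submission.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar

# Yarn-client mode: the Driver runs on the submitting machine, which
# must stay alive and reachable by the executors for the whole job.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  my-app.jar
```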