Relationship Between Spark and Other Components
Relationship Between Spark and HDFS
Data computed by Spark comes from multiple data sources, such as local files and HDFS. Most data comes from HDFS, which can read data at large scale for parallel computing. After being computed, the data can be stored back in HDFS.
Spark involves a Driver and Executors. The Driver schedules tasks, and the Executors run them.
Figure 1 describes the file reading process.
- Driver interconnects with HDFS to obtain information about File A.
- HDFS returns the detailed block information about this file.
- Driver sets the degree of parallelism based on the number of blocks, and creates multiple tasks to read the blocks of this file.
- Executor runs the tasks and reads the blocks as partitions of a Resilient Distributed Dataset (RDD).
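The read flow above can be sketched as follows. This is an illustrative sketch only: the one-task-per-block rule, the block layout, and the helper name are assumptions for demonstration, not Spark's actual internal API.

```python
# Illustrative sketch: the Driver turns HDFS block metadata into read tasks.
# plan_read_tasks is a hypothetical helper, not part of Spark's API.

def plan_read_tasks(block_info):
    """One read task per HDFS block; each task becomes one RDD partition."""
    return [
        {"task_id": i, "block_offset": offset, "block_length": length}
        for i, (offset, length) in enumerate(block_info)
    ]

# Example: File A stored as three 128 MB blocks plus a 32 MB tail (values in bytes).
mb = 1024 * 1024
blocks = [(0, 128 * mb), (128 * mb, 128 * mb), (256 * mb, 128 * mb), (384 * mb, 32 * mb)]
tasks = plan_read_tasks(blocks)
print(len(tasks))  # → 4: the degree of parallelism matches the number of blocks
```

Because each block maps to one task, a file with more blocks is read with more parallelism, which is why the Driver asks HDFS for block information before creating tasks.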
Figure 2 describes the file writing process.
- Driver creates a directory where the file is to be written.
- Based on the RDD distribution, Driver computes the number of tasks needed to write the data and sends these tasks to Executor.
- Executor runs these tasks and writes the RDD data to the directory created in step 1.
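The write flow mirrors the read flow: the number of write tasks follows the RDD's partitioning. The sketch below is illustrative; the helper name and the `part-NNNNN` file naming shown here are assumptions for demonstration, not guaranteed MRS behavior.

```python
# Illustrative sketch: one write task per RDD partition, each producing one
# part file under the directory the Driver created.
# plan_write_tasks is a hypothetical helper, not part of Spark's API.

def plan_write_tasks(num_partitions, output_dir):
    """Return the output file planned for each partition's write task."""
    return [f"{output_dir}/part-{i:05d}" for i in range(num_partitions)]

# An RDD with 3 partitions yields 3 write tasks, hence 3 output files.
files = plan_write_tasks(3, "/user/spark/output")
print(files)  # → three part-file paths, one per partition
```

This is why writing a many-partition RDD produces many output files: the Driver derives the task count from the RDD distribution, not from the file count you might expect.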
Relationship Between Spark and Yarn
Spark computing and scheduling can be implemented in Yarn mode. Spark uses the computing resources provided by Yarn clusters to run tasks in a distributed way. Spark on Yarn has two modes: Yarn-cluster and Yarn-client.
- Yarn-cluster mode
Figure 3 describes the operation framework.
Spark on Yarn-cluster implementation process:
- The client generates the application information, and then sends the information to ResourceManager.
- ResourceManager allocates the first container (ApplicationMaster) to SparkApplication and starts the driver on the container.
- ApplicationMaster applies for resources from ResourceManager to run the container.
- ResourceManager allocates containers to ApplicationMaster, which communicates with the related NodeManagers and starts the executors in the obtained containers. After an executor is started, it registers with the Driver and applies for tasks.
- The Driver allocates tasks to the executors.
- Executors run the tasks and report their status to the Driver.
- Yarn-client mode
Figure 4 describes the operation framework.
Spark on Yarn-client implementation process:
In Yarn-client mode, the Driver is deployed and started on the client. Clients of earlier versions are incompatible with this mode. The Yarn-cluster mode is recommended.
- The client sends a Spark application request to ResourceManager, packaging all the information required to start ApplicationMaster. After receiving the request, ResourceManager finds a proper node for ApplicationMaster, starts it on that node, and returns the results to the client, including the ApplicationId and the upper and lower limits of available resources. ApplicationMaster is a role in Yarn; its process name in Spark is ExecutorLauncher.
- Based on the resource requirements of each task, ApplicationMaster applies to ResourceManager for a series of containers in which to run the tasks.
- After receiving the list of newly allocated containers from ResourceManager, ApplicationMaster sends information to the related NodeManagers to start the containers.
- NodeManagers start the executors in the allocated containers. After an executor is started, it registers with the Driver on the client and applies for tasks.
Running containers are not suspended, and their resources are not released.
- The Driver allocates tasks to the executors. Executors run the tasks and report their status to the Driver.
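In practice, the choice between the two modes is made when submitting the application. The `--master yarn` and `--deploy-mode` options are real `spark-submit` flags; the application class and JAR path below are placeholders. This sketch only assembles the command line for illustration and does not invoke Spark.

```python
# Sketch: selecting Yarn-cluster vs. Yarn-client mode via spark-submit flags.
# org.example.MyApp and /path/to/app.jar are hypothetical placeholders.

def submit_command(deploy_mode):
    """Build a spark-submit command line for the given Yarn deploy mode."""
    assert deploy_mode in ("cluster", "client")
    return [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", deploy_mode,  # "cluster": Driver runs inside ApplicationMaster
                                       # "client": Driver runs on the client
        "--class", "org.example.MyApp",
        "/path/to/app.jar",
    ]

print(" ".join(submit_command("cluster")))
```

With `--deploy-mode cluster`, the Driver runs on the cluster inside ApplicationMaster; with `--deploy-mode client`, it runs on the submitting machine, matching the two implementation processes described above.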