Error Message "Executor Memory Reaches the Threshold" Is Displayed in the Driver
Symptom
A Spark task fails to be submitted because the requested executor memory exceeds the cluster's per-container limit, and the driver reports the following error.
Cause Analysis
16/02/06 14:11:25 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
16/02/06 14:11:29 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (10240+1024 MB) is above the max threshold (6144 MB) of this cluster!
Spark tasks are submitted to Yarn, and the resources that executors use to run tasks are allocated and managed by Yarn. The error message shows that the user requested 10 GB (10240 MB) of executor memory; together with the 1024 MB of memory overhead that is added on top, the total request (11264 MB) exceeds the 6144 MB upper limit that Yarn enforces for each container. As a result, the executor container cannot be started and the task fails.
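The check described above can be sketched as follows. This is a simplified illustration, not Spark's actual code; the 10% overhead with a 384 MB floor reflects Spark's default executor memory overhead, and the numeric values are taken from the error log above.

```python
# Simplified sketch of the pre-launch memory check that Spark's YARN
# client performs (values assumed from the error log above).
executor_memory_mb = 10240                               # --executor-memory 10g
overhead_mb = max(384, int(executor_memory_mb * 0.10))   # default memory overhead
max_allocation_mb = 6144                                 # yarn.scheduler.maximum-allocation-mb

required_mb = executor_memory_mb + overhead_mb
fits = required_mb <= max_allocation_mb
if not fits:
    print(f"Required executor memory ({executor_memory_mb}+{overhead_mb} MB) "
          f"is above the max threshold ({max_allocation_mb} MB) of this cluster!")
```

With these values, the total request is 11264 MB, which is why raising the Yarn limit (or lowering the requested executor memory) is required.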
Solution
Modify the Yarn configuration to raise the per-container memory limit. Specifically, increase the value of yarn.scheduler.maximum-allocation-mb so that a single container can provide the memory requested for the Executor (including overhead). Restart the Yarn service after the modification.
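For reference, the underlying Yarn property looks like the fragment below in yarn-site.xml. On MRS and FusionInsight clusters the value should be changed through the Manager UI as described in the following steps, not by editing the file directly; the 12288 MB value here is only an example large enough to hold the 10240+1024 MB request from the log.

```xml
<!-- Example only: per-container memory ceiling in yarn-site.xml -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>12288</value>
</property>
```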
You can modify the configuration as follows:
MRS Manager:
- Log in to MRS Manager.
- Choose Services > Yarn > Service Configuration and set Type to All.
- In Search, enter yarn.scheduler.maximum-allocation-mb to modify the parameter, save the configuration, and then restart the service. See the following figure.
Figure 1 Modifying Yarn service parameters
FusionInsight Manager:
- Log in to FusionInsight Manager.
- Choose Cluster > Services > Yarn. Click Configurations and select All Configurations.
- In Search, enter yarn.scheduler.maximum-allocation-mb to modify the parameter, save the configuration, and then restart the service.