What Should I Do If the spark.yarn.executor.memoryOverhead Setting Does Not Take Effect?
Symptom
The executor overhead memory needs to be adjusted for a Spark job. The spark.yarn.executor.memoryOverhead parameter is set to 4096, but the default value 1024 is still used when resources are requested during actual computation.
Fault Locating
In Spark 2.3 and later, the executor overhead memory is set with the new parameter spark.executor.memoryOverhead. If both the old and the new parameter are set, the value of spark.yarn.executor.memoryOverhead does not take effect; the value of spark.executor.memoryOverhead is used instead.
The same applies to the driver overhead memory: the new parameter spark.driver.memoryOverhead takes precedence over the old spark.yarn.driver.memoryOverhead.
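The symptom maps directly to this precedence rule. The following is only an illustrative sketch of the behavior described above (not Spark's internal code): the new key is resolved first, the legacy key is used only as a fallback, and the 1024 fallback stands in for the default seen in the symptom.

import org.apache.spark.SparkConf

// Sketch of the documented precedence: new key first, legacy key as fallback.
def effectiveExecutorOverhead(conf: SparkConf): Long =
  conf.getOption("spark.executor.memoryOverhead")
    .orElse(conf.getOption("spark.yarn.executor.memoryOverhead"))
    .map(_.toLong)
    .getOrElse(1024L)

val conf = new SparkConf()
  .set("spark.yarn.executor.memoryOverhead", "4096") // only the legacy key is set
println(effectiveExecutorOverhead(conf))             // 4096

conf.set("spark.executor.memoryOverhead", "1024")    // new key is also present
println(effectiveExecutorOverhead(conf))             // 1024: the legacy key is ignored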
Procedure
Use the new parameter:
spark.executor.memoryOverhead=4096
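In most deployments this is set in spark-defaults.conf or passed with spark-submit --conf spark.executor.memoryOverhead=4096. If the value is set programmatically, a minimal sketch (assuming client mode, where executors are requested after the session is created; the application name is a placeholder) looks like this:

import org.apache.spark.sql.SparkSession

// Set the new overhead key before the session is created, and remove the
// legacy spark.yarn.executor.memoryOverhead setting so the two keys cannot conflict.
val spark = SparkSession.builder()
  .appName("memory-overhead-example")
  .config("spark.executor.memoryOverhead", "4096") // new parameter, Spark 2.3+
  .getOrCreate()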