Help Center/ MapReduce Service/ FAQs/ Performance Tuning/ What Should I Do If the spark.yarn.executor.memoryOverhead Setting Does Not Take Effect?
Updated on 2024-08-16 GMT+08:00

What Should I Do If the spark.yarn.executor.memoryOverhead Setting Does Not Take Effect?

Symptom

The overhead memory of the executor needs to be adjusted for Spark tasks. The spark.yarn.executor.memoryOverhead parameter is set to 4096. However, the default value 1024 is used to apply for resources during actual computation.

Fault Locating

In Spark 2.3 and later versions, use the new parameter spark.executor.memoryOverhead to set the overhead memory of the executor. If both old and new parameters are set, the value of spark.yarn.executor.memoryOverhead does not take effect, and the value of spark.executor.memoryOverhead is used.

Same thing happens if you use spark.driver.memoryOverhead to set the overhead memory of driver.

Procedure

Use the new parameter:

spark.executor.memoryOverhead=4096