
"Executor Memory Reaches the Threshold" Is Displayed in the Driver Log

Symptom

A Spark task fails to be submitted because the requested Executor memory is too large.

Cause Analysis

The Driver log shows that the requested Executor memory exceeds the cluster's per-container limit:
16/02/06 14:11:25 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
16/02/06 14:11:29 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (10240+1024 MB) is above the max threshold (6144 MB) of this cluster!

Spark tasks are submitted to Yarn, and the resources that Executors use to run tasks are managed by Yarn. The error message shows that the user requested 10 GB (10240 MB) of Executor memory when starting the Executors; together with the 1024 MB overhead that Spark adds to the request, this exceeds the 6144 MB upper memory limit that Yarn sets for each Container. As a result, the task cannot be started.
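
The 10240+1024 MB figure in the error comes from Spark adding a per-Executor memory overhead to the requested Executor memory; in Spark versions of this vintage the overhead defaults to the larger of 384 MB and 10% of the Executor memory, hence 1024 MB here. As a sketch, a submission like the following (the class and JAR names are placeholders) would produce this request:

# Hypothetical submission: 10240 MB (--executor-memory 10g) + 1024 MB overhead
# = 11264 MB per container, above the 6144 MB maximum of this cluster.
spark-submit --master yarn --executor-memory 10g --class <main_class> <app_jar>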

Solution

Modify the Yarn configuration to raise the upper limit on container memory. For example, adjust the yarn.scheduler.maximum-allocation-mb parameter, which caps the amount of memory that a single Container, and therefore an Executor, can be allocated. Restart the Yarn service after the modification.
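
On clusters whose Yarn configuration files are edited directly rather than through a Manager UI, the equivalent change in yarn-site.xml would look like the following sketch. The 12288 MB value is only an example; note also that the NodeManagers' yarn.nodemanager.resource.memory-mb must be at least as large for Yarn to actually allocate containers of that size.

<!-- yarn-site.xml (sketch): raise the per-container memory cap; value is an example -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>12288</value>
</property>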

You can modify the configuration as follows:

MRS Manager:

  1. Log in to MRS Manager.
  2. Choose Services > Yarn > Service Configuration and set Type to All.
  3. In the search box, enter yarn.scheduler.maximum-allocation-mb, modify the parameter value, save the configuration, and then restart the service. See the following figure.

    Figure 1 Modifying Yarn service parameters

FusionInsight Manager:

  1. Log in to FusionInsight Manager.
  2. Choose Cluster > Services > Yarn. Click Configurations and select All Configurations.
  3. In the search box, enter yarn.scheduler.maximum-allocation-mb, modify the parameter value, save the configuration, and then restart the service.
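
After the service restarts, resubmitting the job should pass the Driver's capability check. Assuming the limit was raised to 12288 MB, the INFO line quoted in Cause Analysis would report the new capacity:

# Re-run the previously failing submission (placeholders as before):
spark-submit --master yarn --executor-memory 10g --class <main_class> <app_jar>
# Expected in the Driver log, assuming a 12288 MB limit:
# INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)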