
Why Is Memory Insufficient if 10 Terabytes of TPCDS Test Suites Are Consecutively Run in Beeline/JDBCServer Mode?

Question

When the driver memory is set to 10 GB and the 10 TB TPCDS test suite is run consecutively in Beeline/JDBCServer mode, SQL statements fail to execute because the driver runs out of memory. Why does this happen?

Answer

By default, 1000 UI data records each for jobs and stages are retained in memory.

To optimize large clusters, a function for spilling UI data to disk has been added. Spilling is triggered for a stage only when its UI data reaches the minimum threshold of 5 MB. If a stage contains only a few tasks, its UI data may never reach this threshold, so the data remains cached in memory until the number of retained UI data records reaches the upper limit (1000 by default). Only then is the oldest UI data cleared from memory.

Therefore, before the old UI data is cleared, it can occupy a large amount of driver memory: in the worst case, 1000 retained stages each holding just under the 5 MB spill threshold amount to roughly 5 GB of the 10 GB driver heap. This is why the driver memory becomes insufficient when the 10 TB TPCDS test suite is executed.

Workaround:

  • Set spark.ui.retainedJobs and spark.ui.retainedStages based on service requirements to specify the number of UI data records to retain for jobs and stages. For details, see Table 13 in Common Parameters.
  • If a large amount of UI data for jobs and stages must be retained, increase the driver memory by setting the spark.driver.memory parameter. For details, see Table 10 in Common Parameters. An illustrative configuration sketch follows this list.
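For example, the following minimal sketch shows how these parameters might be set in the spark-defaults.conf file of the JDBCServer instance. The values shown here (500 retained records and 15 GB of driver memory) are illustrative assumptions only, not recommendations; tune them to your actual workload and cluster capacity.

    # Retain fewer UI data records for jobs and stages so that less
    # un-spilled UI data accumulates in the driver heap.
    # The values below are illustrative; adjust them to your workload.
    spark.ui.retainedJobs     500
    spark.ui.retainedStages   500

    # Alternatively, keep the retention limits and enlarge the driver heap.
    spark.driver.memory       15g

The settings take effect after the JDBCServer instance is restarted.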