
Why Does Data Query or Loading Fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Is Displayed?

Question

Why does data query or loading fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" is displayed?

Answer

This exception is thrown when the off-heap (unsafe) memory required for data query and loading in the executor is insufficient.

In this case, increase the values of carbon.unsafe.working.memory.in.mb and spark.yarn.executor.memoryOverhead.

For details, see How Do I Configure Unsafe Memory in CarbonData?.

The memory is shared by data query and loading. Therefore, if loading and query operations need to be performed at the same time, you are advised to set both carbon.unsafe.working.memory.in.mb and spark.yarn.executor.memoryOverhead to values greater than 2,048 MB, as shown in the example below.
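
For example, the following Scala sketch shows one way to apply both settings. The 3072 MB value is illustrative only and should be sized using the estimation formulas below. Note that spark.yarn.executor.memoryOverhead is normally passed on the spark-submit command line (--conf) so that it takes effect before executors are launched.

import org.apache.spark.sql.SparkSession
import org.apache.carbondata.core.util.CarbonProperties

// Illustrative sizing only (3072 MB); derive actual values from the
// estimation formulas below.
val spark = SparkSession.builder()
  .appName("CarbonDataJob")
  // Off-heap headroom per executor, in MB. On YARN this is normally
  // passed via spark-submit --conf so it is set before executors start.
  .config("spark.yarn.executor.memoryOverhead", "3072")
  .getOrCreate()

// Unsafe working memory used by CarbonData for data loading and query,
// in MB. This can also be set in the carbon.properties file.
CarbonProperties.getInstance()
  .addProperty("carbon.unsafe.working.memory.in.mb", "3072")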

The following formulas can be used for estimation:

Memory required for data loading:

(carbon.number.of.cores.while.loading [default value: 6] x Number of tables to load in parallel x offheap.sort.chunk.size.inmb [default value: 64 MB]) + carbon.blockletgroup.size.in.mb [default value: 64 MB] + Current compaction ratio (64 MB/3.5)

= ~900 MB per table

Memory required for data query:

SPARK_EXECUTOR_INSTANCES [default value: 2] x (carbon.blockletgroup.size.in.mb [default value: 64 MB] + carbon.blockletgroup.size.in.mb [default value: 64 MB] x 3.5) x Number of cores per executor [default value: 1]

= ~600 MB
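
To make the arithmetic concrete, the following minimal Scala sketch plugs the default values above into both formulas. All names are illustrative (they are not CarbonData APIs), and the number of tables loaded in parallel is an assumption.

// Estimation of unsafe (off-heap) memory using the defaults above.
// Variable names are illustrative only.
object UnsafeMemoryEstimate {
  def main(args: Array[String]): Unit = {
    val coresWhileLoading   = 6    // carbon.number.of.cores.while.loading
    val tablesInParallel    = 2    // assumed number of tables loaded in parallel
    val sortChunkSizeMb     = 64   // offheap.sort.chunk.size.inmb
    val blockletGroupSizeMb = 64   // carbon.blockletgroup.size.in.mb
    val compactionRatio     = 3.5  // current compaction ratio

    // Memory required for data loading (MB)
    val loadingMb = coresWhileLoading * tablesInParallel * sortChunkSizeMb +
      blockletGroupSizeMb + blockletGroupSizeMb / compactionRatio

    val executorInstances = 2      // SPARK_EXECUTOR_INSTANCES
    val coresPerExecutor  = 1      // number of cores per executor

    // Memory required for data query (MB)
    val queryMb = executorInstances *
      (blockletGroupSizeMb + blockletGroupSizeMb * compactionRatio) * coresPerExecutor

    println(f"Loading: ~$loadingMb%.0f MB, Query: ~$queryMb%.0f MB")
  }
}

With these defaults, the sketch prints roughly 850 MB for loading and 576 MB for query, in the same ballpark as the approximate figures above.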