Updated on 2022-12-14 GMT+08:00

"GC overhead" Is Displayed on the Client When Tasks Are Submitted Using the Hadoop Jar Command

Symptom

When a user submits a task on the client, the client reports a memory overflow error ("GC overhead limit exceeded").

Cause Analysis

According to the error stack, memory overflows while HDFS files are read during task submission. Typically, memory is insufficient because the task needs to read a large number of small files.

Solution

  1. Check whether the started MapReduce task needs to read a large number of HDFS files. If yes, reduce the file count by merging the small files in advance or by using CombineFileInputFormat so that multiple small files are packed into each input split.
  2. Increase the client memory used when the hadoop command is run. The memory is set on the client: in the Client installation directory/HDFS/component_env file, change the -Xmx value in CLIENT_GC_OPTS to a larger value, for example, -Xmx512m. Then run the source component_env command for the modification to take effect.
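Step 2 can be sketched as follows. This is a minimal demonstration run against a sample file in a temporary directory; on a real client, the file lives under Client installation directory/HDFS/component_env, and the exact contents of CLIENT_GC_OPTS may differ from the one-line sample assumed here.

```shell
# Demonstration only: create a sample component_env with a small -Xmx value.
# On a real client, skip this and edit the actual file (back it up first).
demo=$(mktemp -d)
cat > "$demo/component_env" <<'EOF'
export CLIENT_GC_OPTS="-Xmx128m"
EOF

# Raise the -Xmx value in CLIENT_GC_OPTS to 512 MB (adjust to your workload).
sed -i 's/-Xmx[0-9]\+[mMgG]/-Xmx512m/' "$demo/component_env"

# Show the modified setting.
grep CLIENT_GC_OPTS "$demo/component_env"

# On the real client, apply the change to the current shell:
#   source component_env
```

The substitution only touches the -Xmx token, so any other JVM options already present in CLIENT_GC_OPTS are preserved.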