Task Execution Fails Because of Stack Memory Overflow
Symptom
When Hive executes a query, the error "Error running child: java.lang.StackOverflowError" is reported. The error details are as follows:
FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.StackOverflowError
    at org.apache.hive.com.esotericsoftware.kryo.io.Input.readVarInt(Input.java:355)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:127)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:767)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
Cause Analysis
Error "java.lang.StackOverflowError" indicates the memory overflow of the thread stack. It may occur if there are multiple levels of calls (for example, infinite recursive calls) or the thread stack is too small.
Solution
Adjust the stack memory in the JVM parameters of the Map and Reduce stages of the MapReduce job, that is, mapreduce.map.java.opts (stack memory of Map tasks) and mapreduce.reduce.java.opts (stack memory of Reduce tasks). The following uses the mapreduce.map.java.opts parameter as an example.
- To temporarily increase the Map memory (valid only for Beeline):
Run the set mapreduce.map.java.opts=-Xss8G; command on the Beeline client, as shown in the example after this list. (Change the value as required.)
- To permanently increase the Map memory, change the values of the mapreduce.map.memory.mb and mapreduce.map.java.opts parameters in the cluster configuration.
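For example, a minimal Beeline sketch for the temporary approach could look as follows; the 8G value and the query (against a hypothetical table named sales) are placeholders to adjust for your environment:
set mapreduce.map.java.opts=-Xss8G;
set mapreduce.reduce.java.opts=-Xss8G;
-- Rerun the query that previously failed with java.lang.StackOverflowError.
SELECT id, COUNT(*) FROM sales GROUP BY id;
Note that this assignment replaces the entire mapreduce.map.java.opts value for the session, so any JVM options that were previously set there (for example, a heap size such as -Xmx) should be repeated in the same value if they are still needed.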