beeline Reports the "OutOfMemoryError" Error
Symptom
When a large amount of data is queried on the Beeline client, the message "OutOfMemoryError: Java heap space" is displayed. The detailed error information is as follows:
org.apache.thrift.TException: Error in calling method FetchResults
        at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1514)
        at com.sun.proxy.$Proxy4.FetchResults(Unknown Source)
        at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:358)
        at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:42)
        at org.apache.hive.beeline.BeeLine.print(BeeLine.java:1856)
        at org.apache.hive.beeline.Commands.execute(Commands.java:873)
        at org.apache.hive.beeline.Commands.sql(Commands.java:714)
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1035)
        at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:821)
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:778)
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:486)
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:469)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:959)
        at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:824)
        at com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:436)
        at javax.crypto.Cipher.doFinal(Cipher.java:2223)
        at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:414)
        at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptRaw(AesDkCrypto.java:291)
        at sun.security.krb5.internal.crypto.Aes256.decryptRaw(Aes256.java:86)
        at sun.security.jgss.krb5.CipherHelper.aes256Decrypt(CipherHelper.java:1397)
        at sun.security.jgss.krb5.CipherHelper.decryptData(CipherHelper.java:576)
        at sun.security.jgss.krb5.WrapToken_v2.getData(WrapToken_v2.java:130)
        at sun.security.jgss.krb5.WrapToken_v2.getData(WrapToken_v2.java:105)
        at sun.security.jgss.krb5.Krb5Context.unwrap(Krb5Context.java:1058)
        at sun.security.jgss.GSSContextImpl.unwrap(GSSContextImpl.java:403)
        at com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(GssKrb5Base.java:77)
        at org.apache.thrift.transport.TSaslTransport$SaslParticipant.unwrap(TSaslTransport.java:559)
        at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:462)
        at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
        at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
        at org.apache.thrift.transport.TTransport.xxx(TTransport.java:86)
        at org.apache.hadoop.hive.thrift.TFilterTransport.xxx(TFilterTransport.java:62)
        at org.apache.thrift.protocol.TBinaryProtocol.xxx(TBinaryProtocol.java:429)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
        at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:505)
        at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:492)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1506)
        at com.sun.proxy.$Proxy4.FetchResults(Unknown Source)
        at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:358)
Error: Error retrieving next row (state=,code=0)
Cause Analysis
- The data volume is excessively large.
- The select * from table_name; statement is used to query the entire table, and the table contains a large amount of data.
- Beeline starts with only 128 MB of memory by default. The result set returned by the query is too large for this heap, overloading Beeline. (A way to check the heap of a running Beeline process is sketched below.)
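To confirm the heap that the Beeline JVM is actually running with, the standard JDK tools can be used while a query is in progress. This is only a sketch: it assumes jps and jcmd are on the PATH, and the process ID shown is illustrative.
# Find the Beeline client JVM (Beeline's main class is org.apache.hive.beeline.BeeLine).
jps -lm | grep -i beeline        # e.g. prints "12345 org.apache.hive.beeline.BeeLine -u ..."
# Print the JVM flags of that process; -XX:MaxHeapSize shows the effective -Xmx.
jcmd 12345 VM.flags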
Solution
- Run select count(*) from table_name; first to check the amount of data to be queried, and decide whether data of this magnitude really needs to be displayed in Beeline.
- If the data does need to be displayed, increase the JVM heap of the Hive client. Add export HIVE_OPTS=-Xmx1024M (adjust the value based on service requirements) to the component_env file in the Hive directory of the Hive client installation, and then run the source command on the bigdata_env file in the client installation directory so that the new setting takes effect, as shown in the sketch after this list.
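A minimal sketch of both steps. The client installation directory /opt/hadoopclient and the way the setting is appended are assumptions; adjust them to your environment.
# 1. Check how much data the query would return before fetching it in Beeline.
beeline
> select count(*) from table_name;

# 2. If the full result set is really needed, raise the Beeline client heap.
#    /opt/hadoopclient is an assumed client installation directory.
echo "export HIVE_OPTS=-Xmx1024M" >> /opt/hadoopclient/Hive/component_env
source /opt/hadoopclient/bigdata_env

# 3. Re-run the query in a new Beeline session; the client JVM now starts with a 1 GB heap.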