Created on 05-04-2016 02:50 AM - edited 09-16-2022 03:17 AM
Hi All,
We have started getting OutOfMemoryError very frequently from the Hive Metastore. Can you please let us know what could be the cause of this?
Exception in thread "pool-1-thread-145" java.lang.OutOfMemoryError: Java heap space
at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:137)
at java.lang.StringCoding.decode(StringCoding.java:173)
at java.lang.String.<init>(String.java:443)
at java.lang.String.<init>(String.java:515)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:355)
at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:347)
at org.apache.hadoop.hive.metastore.api.FieldSchema$FieldSchemaStandardScheme.read(FieldSchema.java:490)
at org.apache.hadoop.hive.metastore.api.FieldSchema$FieldSchemaStandardScheme.read(FieldSchema.java:476)
at org.apache.hadoop.hive.metastore.api.FieldSchema.read(FieldSchema.java:410)
at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.read(StorageDescriptor.java:1309)
at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.read(StorageDescriptor.java:1288)
at org.apache.hadoop.hive.metastore.api.StorageDescriptor.read(StorageDescriptor.java:1150)
at org.apache.hadoop.hive.metastore.api.Table$TableStandardScheme.read(Table.java:1393)
at org.apache.hadoop.hive.metastore.api.Table$TableStandardScheme.read(Table.java:1330)
at org.apache.hadoop.hive.metastore.api.Table.read(Table.java:1186)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args$create_table_argsStandardScheme.read(ThriftHiveMetastore.java:19529)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args$create_table_argsStandardScheme.read(ThriftHiveMetastore.java:19514)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args.read(ThriftHiveMetastore.java:19461)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:25)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:109)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:114)
at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.acceptImpl(TServerSocketKeepAlive.java:39)
at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.acceptImpl(TServerSocketKeepAlive.java:32)
at org.apache.thrift.transport.TServerTransport.accept(TServerTransport.java:31)
at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:131)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:4245)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:4147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Regards,
Ajay
Created 05-12-2016 07:48 AM
Hello Ajay,
The first thing I would check is that you have the heap size set correctly. Memory requirements are discussed in the Cloudera Documentation under Configuring HiveServer2.
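For reference, heap settings for the Hive Metastore / HiveServer2 JVMs are typically applied in hive-env.sh (or the equivalent Hive safety valve in Cloudera Manager). A minimal sketch is below; the 2048 MB figure is only illustrative, and you would size it per the recommendations in the documentation for your number of tables, partitions and concurrent connections:

# hive-env.sh -- illustrative values only
export HADOOP_HEAPSIZE=2048   # heap for the Hive Metastore / HiveServer2 JVM, in MB
# Optionally capture a heap dump if the OOM recurs, to see what is filling the heap:
export HADOOP_OPTS="$HADOOP_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"

After changing the setting, restart the Hive Metastore service so the new heap size takes effect.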
David Wilder, Community Manager
Created 05-13-2016 12:34 AM
Hi All,
Thanks for your help. The heap memory was not sized per the recommendations in the link below, so we increased it and restarted the Hive Metastore server, but that alone did not help.
http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hiveserver2_configure.html
It looks like some process was holding on to memory, and we had to restart the complete cluster to resolve the problem.
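For anyone hitting the same symptom: before restarting the whole cluster, one way to confirm whether the metastore heap itself is exhausted (rather than something else on the host holding memory) is to sample the JVM with the standard JDK tools. A rough sketch, assuming you can run these as the user that owns the metastore process:

# Find the metastore PID and sample its heap occupancy with plain JDK tools
pid=$(pgrep -f org.apache.hadoop.hive.metastore.HiveMetaStore)
jstat -gcutil "$pid" 5000            # old-gen (O column) stuck near 100% means the heap really is full
jmap -histo:live "$pid" | head -n 20 # top heap consumers by class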
Thank you once again for your input.
Regards,
Ajay chaudhary