Support Questions


OutOfMemoryError is very frequent on Hive Metastore server

Contributor

Hi All,

We have started getting OutOfMemoryError very frequently on the Hive Metastore server. Can you please let us know what could be causing this?

 

Exception in thread "pool-1-thread-145" java.lang.OutOfMemoryError: Java heap space
at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:137)
at java.lang.StringCoding.decode(StringCoding.java:173)
at java.lang.String.<init>(String.java:443)
at java.lang.String.<init>(String.java:515)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:355)
at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:347)
at org.apache.hadoop.hive.metastore.api.FieldSchema$FieldSchemaStandardScheme.read(FieldSchema.java:490)
at org.apache.hadoop.hive.metastore.api.FieldSchema$FieldSchemaStandardScheme.read(FieldSchema.java:476)
at org.apache.hadoop.hive.metastore.api.FieldSchema.read(FieldSchema.java:410)
at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.read(StorageDescriptor.java:1309)
at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.read(StorageDescriptor.java:1288)
at org.apache.hadoop.hive.metastore.api.StorageDescriptor.read(StorageDescriptor.java:1150)
at org.apache.hadoop.hive.metastore.api.Table$TableStandardScheme.read(Table.java:1393)
at org.apache.hadoop.hive.metastore.api.Table$TableStandardScheme.read(Table.java:1330)
at org.apache.hadoop.hive.metastore.api.Table.read(Table.java:1186)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args$create_table_argsStandardScheme.read(ThriftHiveMetastore.java:19529)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args$create_table_argsStandardScheme.read(ThriftHiveMetastore.java:19514)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args.read(ThriftHiveMetastore.java:19461)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:25)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:109)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:114)
at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.acceptImpl(TServerSocketKeepAlive.java:39)
at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.acceptImpl(TServerSocketKeepAlive.java:32)
at org.apache.thrift.transport.TServerTransport.accept(TServerTransport.java:31)
at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:131)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:4245)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:4147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

 

Regards,

Ajay

1 ACCEPTED SOLUTION

Contributor

Hi All,

 

Thanks for your help. The heap was not sized per the recommendations in the link below, so we increased the memory and restarted the Hive Metastore server, but that alone did not help.

 

http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hiveserver2_configure.html

 

It looks like some process was holding on to memory, and in the end we had to restart the complete cluster to resolve the problem.

 

Thank you once again for your input.

 

Regards,

Ajay Chaudhary

 

 


3 REPLIES

Community Manager

Hello Ajay,

 

The first thing I would check is that you have the heap size set correctly. Memory requirements are discussed in the Cloudera documentation under Configuring HiveServer2.
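For reference, where this setting lives depends on how the cluster is managed: on a Cloudera Manager deployment it is the Java heap size property of the Hive Metastore Server role, while on an unmanaged install it can be set in hive-env.sh. A minimal sketch for the unmanaged case, where the 8g value, the conf path, and the $SERVICE check are illustrative assumptions rather than prescribed values:

# In /etc/hive/conf/hive-env.sh (path varies by install); sketch only.
# The hive launcher sources this file, so exported JVM options reach
# the Metastore process. 8g is illustrative; size per the docs above.
if [ "$SERVICE" = "metastore" ]; then
  # Standard JVM flags; setting -Xms equal to -Xmx avoids resize pauses.
  export HADOOP_OPTS="$HADOOP_OPTS -Xms8g -Xmx8g"
fi

After restarting the Metastore, you can confirm the new -Xmx is in effect by checking the process arguments with jps -lvm.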



David Wilder, Community Manager




Mentor
Does the very same stack trace appear every time the OOME crash occurs? If yes, it may be that one of your scripts is sending a bogus create-table request with very large table or column names. A heap dump of your HMS (if enabled), if small enough to analyze, can help reveal what some of the parameters of the offending request were.
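In case a dump is not enabled yet, the standard HotSpot flags below turn one on at the next OOME; a sketch, assuming JVM options are passed through HADOOP_OPTS as in hive-env.sh, with /tmp/hms_dump as an illustrative path:

# Standard HotSpot flags: write a heap dump whenever an OOME is thrown.
# The dump can be roughly as large as the configured heap, so make sure
# the target disk (illustrative path below) has enough free space.
export HADOOP_OPTS="$HADOOP_OPTS \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/hms_dump"

The resulting .hprof file can then be opened in a heap analyzer such as Eclipse MAT to see which objects dominate the heap.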

If the point of OOME varies, then it's likely that you're gradually running out of heap space, and you'd want to check the JVM memory graphs to see what the active heap utilization pattern looks like over time since the last restart.
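If no monitoring graphs are available, the stock JDK tools give a rough picture; a sketch, where the grep pattern, sampling interval, and sample count are illustrative:

# Find the Metastore JVM's pid (jps ships with the JDK).
HMS_PID=$(jps -l | grep HiveMetaStore | awk '{print $1}')

# Print heap/GC utilization every 10s for about an hour (360 samples).
# An old-generation column ("O") that keeps climbing even after full
# GCs suggests a leak; a sudden spike suggests one oversized request.
jstat -gcutil "$HMS_PID" 10s 360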