Navigator not loading all hive databases after upgrade

Expert Contributor

Hi,

We recently upgraded from 5.11.2 to 5.14.2. The upgrade went smoothly until we identified that many databases were not visible/loaded in Navigator. The logs show the error below. Any related pointers are welcome.

 

2018-08-31 10:12:17,344 ERROR com.cloudera.nav.hive.extractor.AbstractHiveExtractor [CDHExecutor-0-CDHUrlClassLoader@7fbf5fdd]: Failed to extract database dummy_database with error: java.net.SocketException: Broken pipe (Write failed)
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TTransport.write(TTransport.java:107)
at org.apache.thrift.transport.TSaslTransport.writeLength(TSaslTransport.java:391)
at org.apache.thrift.transport.TSaslTransport.flush(TSaslTransport.java:499)
at org.apache.thrift.transport.TSaslClientTransport.flush(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.TFilterTransport.flush(TFilterTransport.java:77)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_database(ThriftHiveMetastore.java:664)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database(ThriftHiveMetastore.java:656)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1213)
at com.cloudera.nav.hive.extractor.AbstractHiveExtractor.extractDatabase(AbstractHiveExtractor.java:147)
at com.cloudera.nav.hive.extractor.AbstractHiveExtractor.extractDatabases(AbstractHiveExtractor.java:133)
at com.cloudera.nav.hive.extractor.HiveExtractor.run(HiveExtractor.java:63)
at com.cloudera.nav.hive.extractor.AbstractHiveExtractor.run(AbstractHiveExtractor.java:118)
at com.cloudera.nav.hive.extractor.HiveExtractorShim.run(HiveExtractorShim.java:35)
at com.cloudera.cmf.cdhclient.CdhExecutor$RunnableWrapper.call(CdhExecutor.java:221)
at com.cloudera.cmf.cdhclient.CdhExecutor$RunnableWrapper.call(CdhExecutor.java:211)
at com.cloudera.cmf.cdhclient.CdhExecutor$CallableWrapper.doWork(CdhExecutor.java:236)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper$1.run(CdhExecutor.java:189)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at com.cloudera.cmf.cdh5client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:44)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper.doWork(CdhExecutor.java:186)
at com.cloudera.cmf.cdhclient.CdhExecutor$1.call(CdhExecutor.java:125)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
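For reference, the call that fails in the trace (get_database, reached through HiveMetaStoreClient.getDatabase) can be exercised outside Navigator with a small standalone client. The sketch below is only illustrative: the metastore URI is a placeholder, and on a kerberized cluster (the trace shows a SASL transport) you would also need a valid ticket and the usual security settings.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Database;

public class MetastoreCheck {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // Placeholder URI: point this at the cluster's actual metastore host/port.
        conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://metastore-host:9083");

        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            // Walk every database and issue the same get_database call that
            // fails in the Navigator extractor log above.
            for (String name : client.getAllDatabases()) {
                Database db = client.getDatabase(name);
                System.out.println(name + " -> " + db.getLocationUri());
            }
        } finally {
            client.close();
        }
    }
}

If the loop also breaks when talking to the metastore directly, the problem is on the metastore side; if it completes, the broken pipe is more likely something specific to the Navigator extractor's connection.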

 

 

Thanks

1 ACCEPTED SOLUTION

Expert Contributor

This is because the Navigator upgrade generally takes a lot of time, depending on the number of objects and relations you have. Increasing the Navigator heap size can help. The calculation for the required heap is available on the Cloudera site.
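On the heap side, a generic sanity check (not Navigator-specific, just an illustrative JVM sketch) is to confirm what maximum heap a process actually received after a configuration change, since Runtime.getRuntime().maxMemory() reflects the effective -Xmx:

public class HeapCheck {
    public static void main(String[] args) {
        // Print the maximum heap the current JVM was granted. This only
        // confirms that a heap-size change took effect; it says nothing
        // about how much heap Navigator actually needs.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Effective max heap: %.1f GiB%n",
                maxBytes / (1024.0 * 1024.0 * 1024.0));
    }
}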

