
Hive metastore won't start after Ambari Blueprint deployment - using PostgreSQL datastore


Explorer

I am deploying my cluster using an Ambari Blueprint. Everything works except that the Hive Metastore will not start. I get the following error:

2018-02-08 11:29:58,480 INFO  [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(163)) - Using direct SQL, underlying DB is OTHER
2018-02-08 11:29:58,485 INFO  [main]: metastore.ObjectStore (ObjectStore.java:setConf(297)) - Initialized ObjectStore
2018-02-08 11:29:58,896 WARN  [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(698)) - Failed to get database default, returning NoSuchObjectException
2018-02-08 11:29:59,351 ERROR [main]: hive.log (MetaStoreUtils.java:logAndThrowMetaException(1254)) - Got exception: java.io.IOException Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
        at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:526)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:171)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:690)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:631)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:160)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2795)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:179)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:374)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:106)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:142)
        at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:148)
        at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:161)
        at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:174)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:628)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:647)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:433)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:91)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6396)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6391)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6658)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6575)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Notice that it reports the underlying DB as "OTHER", not "POSTGRESQL". My Hive configuration is:

{
  "hive-env": {
    "properties": {
      "hive_database": "Existing PostgreSQL Database",
      "hive_database_name": "hive",
      "hive_user": "hive",
      "hive_database_type": "postgres",
      "hive_ambari_database": "PostgreSQL",
      "enable_hive_interactive": "false"
    }
  }
},
{
  "hive-site": {
    "properties": {
      "javax.jdo.option.ConnectionDriverName": "org.postgresql.Driver",
      "javax.jdo.option.ConnectionURL": "jdbc:postgresql://vicads-server.vicads5.local:5432/hive",
      "javax.jdo.option.ConnectionUserName": "hive",
      "javax.jdo.option.ConnectionPassword": "password",
      "hive.zookeeper.quorum": "%HOSTGROUP::vicads_server_1%:2181,%HOSTGROUP::vicads_server_2%:2181,%HOSTGROUP::vicads_vault_1%:2181",
      "hive.cluster.delegation.token.store.zookeeper.connectString": "%HOSTGROUP::vicads_server_1%:2181,%HOSTGROUP::vicads_server_2%:2181,%HOSTGROUP::vicads_vault_1%:2181",
      "hive.metastore.uris": "thrift://%HOSTGROUP::vicads_server_1%:9083"
    }
  }
},

Am I doing something wrong? I do see that it creates the Hive tables in the PostgreSQL database.

David

4 REPLIES

Re: Hive metastore won't start after Ambari Blueprint deployment - using PostgreSQL datastore

Hi David,

Did you solve this problem?

Best regards.

Eric.


Re: Hive metastore won't start after Ambari Blueprint deployment - using PostgreSQL datastore

New Contributor

Hello Eric,

We hit the same issue; the root cause was the absence of the HDFS client on the metastore server.

Regards,
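
For anyone hitting this via a blueprint deployment, a hedged sketch of the fix: make sure the host group that runs the Hive Metastore also lists the HDFS_CLIENT component in the blueprint's host_groups section. The host group name below (vicads_server_1) is assumed from the hive.metastore.uris setting in the original post; adjust it to your blueprint.

```json
{
  "name": "vicads_server_1",
  "components": [
    { "name": "HIVE_METASTORE" },
    { "name": "HDFS_CLIENT" }
  ]
}
```

With HDFS_CLIENT present, Ambari installs the HDFS client libraries and configs on that host, so the metastore can reach the warehouse directory on HDFS.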


Re: Hive metastore won't start after Ambari Blueprint deployment - using PostgreSQL datastore

Explorer

Yes, but I don't remember the specifics.


Re: Hive metastore won't start after Ambari Blueprint deployment - using PostgreSQL datastore

Hello Clement and David,

Yes, "Failed to get database default, returning NoSuchObjectException" is not a very clear message; it means that the default Hive warehouse on HDFS is not reachable: hdfs://clustername/user/hive/warehouse/

So this is not a PostgreSQL problem but an HDFS client problem.

Regards
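
A quick, hedged way to check for this symptom on the metastore host: see whether the standard Hadoop `hdfs` CLI is installed at all. (The warehouse path in the comment is the default one from the error above; it requires cluster access to list.)

```shell
# Diagnostic sketch: is the HDFS client installed on this host?
if command -v hdfs >/dev/null 2>&1; then
  echo "hdfs client found at $(command -v hdfs)"
  # With cluster access, also confirm the warehouse directory is reachable:
  # hdfs dfs -ls /user/hive/warehouse
else
  echo "hdfs client missing: install the HDFS client on this host"
fi
```

If the second branch fires, install the HDFS client on the metastore host (e.g. via Ambari) before restarting the metastore.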
