
Bad : The Hive Metastore canary failed to create a database.

New Contributor

1:08:30.213 PM ERROR RetryingHMSHandler [pool-9-thread-200]: HMSHandler Fatal error: javax.jdo.JDODataStoreException: Exception thrown flushing changes to datastore
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171)
at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:728)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy27.commitTransaction(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1821)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1993)
at sun.reflect.GeneratedMethodAccessor83.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy29.drop_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:11157)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:11141)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
NestedThrowablesStackTrace:
java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10500)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:230)
at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366)
at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:676)
at org.datanucleus.store.rdbms.SQLController.processStatementsForConnection(SQLController.java:644)
at org.datanucleus.store.rdbms.SQLController$1.transactionFlushed(SQLController.java:731)
at org.datanucleus.store.connection.AbstractManagedConnection.transactionFlushed(AbstractManagedConnection.java:89)
at org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionFlushed(ConnectionManagerImpl.java:450)
at org.datanucleus.TransactionImpl.flush(TransactionImpl.java:210)
at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:274)
at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107)
at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:728)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy27.commitTransaction(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1821)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1993)
at sun.reflect.GeneratedMethodAccessor83.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy29.drop_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:11157)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:11141)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
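For context, the nested ORA-01461 is the telling part: Oracle raises it when a bind value longer than 4000 bytes is sent to a VARCHAR2 column. With an Oracle-backed metastore this usually means one of the Hive schema columns is still VARCHAR2(4000) while the value being written exceeds that length. A minimal, hedged check, assuming the stock Hive metastore schema for Oracle and a 'hive' schema owner (the credentials and connect string below are placeholders; adjust for your environment):

# run on the metastore database host; TABLE_PARAMS/PARAM_VALUE is the column
# most often involved, but the other *_PARAMS tables have the same shape
sqlplus hive/<password>@<metastore_db> <<'SQL'
SELECT table_name, column_name, data_type, data_length
FROM   user_tab_columns
WHERE  table_name = 'TABLE_PARAMS'
AND    column_name = 'PARAM_VALUE';
SQL

If DATA_TYPE comes back as VARCHAR2 rather than CLOB, any parameter value over 4000 bytes will reproduce this error; newer Hive schema versions widen these columns, so check your schema version (and consult support) before altering anything.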


6 REPLIES

Master Collaborator

You can try the following:

- Increase the heap size of the HiveServer2 role by raising Hive > Configuration > 'Java Heap Size of HiveServer2 in Bytes' (for example, to 24 GiB for this role instance); a hedged sketch for checking the current heap follows this list.
- Increase the timeout for the Hive service.
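A quick, hedged way to see what heap the HiveServer2 JVM is currently running with, assuming the role runs on the host where you check:

# prints the -Xmx setting of the running HiveServer2 process, if one is set;
# the bracketed pattern keeps grep from matching itself
ps -ef | grep -i '[h]iveserver2' | grep -o -e '-Xmx[0-9]*[kKmMgG]*'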

To investigate further, please collect and provide the Service Monitor logs. On the SERVICEMONITOR host:
cd /var/log/cloudera-scm-firehose
gzip -c *SERVICEMONITOR*.out > sm.out.gz  

 

Community Manager

@kvbigdata Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks

 


Regards,

Diana Torres,
Community Moderator



New Contributor (accepted solution)

No, it did not help. Increasing the HS2 Java heap size? By how much should I increase it, and how far should I keep going?

However, the article below helped:

https://my.cloudera.com/knowledge/quotImpala-cannot-read-or-execute-the-parent-directory-of?id=91667

 

Guru

@kvbigdata Does the canary test fail when HMS is busy? Is it intermittent, and does it recover on its own?

New Contributor

Please see my response above.

Master Collaborator (accepted solution)

@kvbigdata 

What happens if you set the following under Cloudera Manager > Hive > Configuration > Service Monitor Client Config Overrides > Add?

Name: hive.metastore.client.socket.timeout
Value: 600 (seconds)
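For reference, that name/value pair corresponds to a hive-site.xml property of the following shape (illustrative only; the exact client configuration file Cloudera Manager generates for the Service Monitor can vary by version):

<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>600</value>
</property>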

We have documented the current status of the canary JIRA in the following article: https://my.cloudera.com/knowledge/quotError--The-Hive-Metastore-canary-failed-to-create-a?id=337839

 

The only workaround at present is to disable the canary test on the HMS, as you have already done; that will not harm anything on your cluster.

 

1. Access the Hive service.
2. On the Configuration tab, search for Hive Metastore Canary Health Test.
3. Uncheck the box.
4. Restart the service.
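After toggling the canary, one way to confirm the Hive service's health summary is the Cloudera Manager REST API. A minimal sketch, assuming CM on its default port, an API version your CM supports (v19 here), and placeholder host, credentials, cluster, and service names; the exact health-check names vary by version:

# lists the service's health checks; the canary check should no longer
# report Bad once the test is disabled
curl -s -u admin:<password> \
  'http://cm-host:7180/api/v19/clusters/Cluster1/services/hive?view=full' \
  | grep -i -A 2 'canary'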