
After hard crash cloudera manager not able to start


After restarting our cluster from a hard crash, I am not able to start the VM; Cloudera Manager fails with a PostgreSQL error. Part of the error is below:


2015-06-22 15:04:18,943 WARN [main:spi.SqlExceptionHelper@143] SQL Error: 0, SQLState: null
2015-06-22 15:04:18,944 ERROR [main:spi.SqlExceptionHelper@144] Connections could not be acquired from the underlying database!
2015-06-22 15:04:18,950 ERROR [main:bootstrap.EntityManagerFactoryBean@154] Unable to upgrade schema to latest version.
org.hibernate.exception.GenericJDBCException: Could not open connection



Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(
at org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider.getConnection(
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(
... 43 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(
at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(
at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(
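The stack trace shows that Cloudera Manager's schema upgrade failed only because no database connection could be acquired, so a first step is to check whether the database itself came back after the crash. A minimal diagnostic sketch, assuming the embedded PostgreSQL database that ships with Cloudera Manager (the `cloudera-scm-server-db` service name and log path are the documented defaults; skip this if you use an external database):

```shell
# Check whether the embedded PostgreSQL database is running
sudo service cloudera-scm-server-db status

# If it is stopped, start the database before the server itself
sudo service cloudera-scm-server-db start

# Inspect the database log for crash-recovery errors (default location)
sudo tail -n 50 /var/log/cloudera-scm-server/db.log

# Only once the database is up, start Cloudera Manager Server
sudo service cloudera-scm-server start
```

These commands must run on the Cloudera Manager host itself; if the database log shows corruption rather than a clean recovery, that would explain the c3p0 pool never acquiring a connection.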



So, is it possible to start this cluster? I have already tried many of the options available on the internet.


I found the fsimage and edit log on the secondary node. Can we use that image with the logs on a new cluster, so that our existing data remains intact and we can access it over Hadoop?
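For the fsimage question: HDFS can import a checkpoint from the SecondaryNameNode when the NameNode's own metadata is unusable. A hedged sketch, assuming HDFS 2.x commands and hypothetical paths (substitute your cluster's actual `dfs.namenode.name.dir` and `dfs.namenode.checkpoint.dir` values):

```shell
# Copy the checkpoint directory (fsimage + edits) from the secondary
# node to the NameNode host (both paths are assumptions for illustration)
scp -r secondary-host:/data/dfs/namesecondary /data/dfs/namesecondary

# dfs.namenode.checkpoint.dir must point at the copied checkpoint, and
# dfs.namenode.name.dir must be an empty directory; then start the
# NameNode loading the checkpoint instead of a local fsimage
hdfs namenode -importCheckpoint
```

This only recovers namespace metadata; the DataNode block data must still be intact for the files to be readable afterwards, and you lose any edits made after the last checkpoint.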


Or is there any other feasible solution so that my existing data is not lost?


Thanks in advance for your valuable suggestion.
