
Cloudera Manager not able to start after hard crash


After restarting our cluster from a hard crash, I am not able to start Cloudera Manager on the VM; it is giving a PostgreSQL error. Part of the error is below:

2015-06-22 15:04:18,943 WARN [main:spi.SqlExceptionHelper@143] SQL Error: 0, SQLState: null
2015-06-22 15:04:18,944 ERROR [main:spi.SqlExceptionHelper@144] Connections could not be acquired from the underlying database!
2015-06-22 15:04:18,950 ERROR [main:bootstrap.EntityManagerFactoryBean@154] Unable to upgrade schema to latest version.
org.hibernate.exception.GenericJDBCException: Could not open connection

..

Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:84)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:281)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:297)
... 43 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
So is it possible to start this cluster? I have already tried many of the options available on the internet.
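
For reference, the kind of thing I tried is below (a rough sketch, assuming the embedded PostgreSQL database that ships with Cloudera Manager; service names and the log path may differ on other installs):

# Check whether the embedded PostgreSQL database behind Cloudera Manager is running
sudo service cloudera-scm-server-db status

# Start the database first, then the Cloudera Manager server
sudo service cloudera-scm-server-db start
sudo service cloudera-scm-server start

# Watch the server log for the same JDBC error
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

The server still fails with the same "Connections could not be acquired" error after this.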
I also found the fsimage and edit log on the SecondaryNameNode. Can we use that image, together with the edit logs, on a new cluster so that our existing data remains intact and we can still access it through Hadoop?
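
What I had in mind is something like the following (a sketch only; the host names and directories are made-up examples, and it assumes dfs.namenode.name.dir on the new NameNode is empty):

# Copy the latest checkpoint (fsimage plus edits) from the SecondaryNameNode
# into the checkpoint directory on the new NameNode host
scp -r snn-host:/data/dfs/namesecondary/current nn-host:/data/dfs/namesecondary/

# On the new NameNode, with dfs.namenode.checkpoint.dir pointing at the
# copied checkpoint, import it into the empty dfs.namenode.name.dir
sudo -u hdfs hdfs namenode -importCheckpoint

Would that preserve the HDFS metadata, or is there a better supported way?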
Or is there any other feasible solution that would keep our existing data from being lost?
Thanks in advance for your valuable suggestions.
