Support Questions

Blueprint installed cluster failing to start Hive and Oozie using custom Postgres RDBMS


Hey guys,

I have a multi-node HDP 2.5.3 cluster installed with a second Postgres 9.2.18 server hosting the metadata for Hive and Oozie. The blueprint used to create it was an unmodified export from a working cluster of the same physical specification. I've checked connections to the RDBMS from every node with the "psql" CLI tool, org.postgresql.Driver is selected, and the JDBC jar is present on all the nodes (proven to work in the original cluster config).
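As a quick sanity check beyond psql, a small sketch like the following (a hypothetical helper, not part of any HDP tooling; the hostname is a placeholder) can confirm each node can actually open a TCP connection to the Postgres port before blaming drivers or credentials:

```python
import socket

# Hypothetical helper: verify a node can reach the Postgres listener.
def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the metadata server from this node (placeholder hostname).
# print(can_reach("db1.example.com", 5432))
```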

Two errors turn up for each service:

resource_management.core.exceptions.ExecutionFailed: Execution of '/opt/jdk1.8.0_77/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/oozie-server/libserver/postgresql-9.0-801.jdbc4.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true' oozie [PROTECTED] org.postgresql.Driver' returned 1. ERROR: Unable to connect to the DB. Please check DB connection properties.

java.sql.SQLException: No suitable driver found for jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true

The failing command, rewritten for readability:

/opt/jdk1.8.0_77/bin/java 
-cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/oozie-server/libserver/postgresql-9.0-801.jdbc4.jar
org.apache.ambari.server.DBConnectionVerification
'jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true'
oozie [PROTECTED] org.postgresql.Driver

This failed, and I wonder why there's a reference to jdbc:derby: and ${oozie.data.dir}/${oozie.db.schema.name}-db, a schema I didn't create for Oozie; mine was schema = "oozie" (or at least database = "oozie").
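That jdbc:derby URL looks like the stock Derby default rather than anything from the blueprint. If the blueprint left the defaults in place, one fix (a sketch, assuming a Postgres host named dbhost.example.com; substitute your own) would be to set the Oozie JDBC properties explicitly in oozie-site:

```xml
<!-- oozie-site.xml: point Oozie's JPA service at Postgres instead of the
     Derby default. dbhost.example.com is a placeholder for the real host. -->
<property>
  <name>oozie.service.JPAService.jdbc.driver</name>
  <value>org.postgresql.Driver</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.url</name>
  <value>jdbc:postgresql://dbhost.example.com:5432/oozie</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.username</name>
  <value>oozie</value>
</property>
```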

For Hive, schematool fails to initialize the schema (with duplicate SLF4J bindings reported along the way):

resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-37/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.3.0-37/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:postgresql://%HOSTGROUP::host_group_2%:5432/hive
Metastore Connection Driver :    org.postgresql.Driver
Metastore connection User:    hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: org.postgresql.util.PSQLException : The connection attempt failed.
SQL Error code: 0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
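Note that the "Metastore connection URL" above still contains the literal blueprint token %HOSTGROUP::host_group_2%, which Postgres cannot resolve as a hostname. Once resolved, the property (a sketch, again assuming dbhost.example.com as a placeholder) should look like:

```xml
<!-- hive-site.xml: the metastore JDBC URL must contain a real FQDN,
     not an unresolved %HOSTGROUP::...% token.
     dbhost.example.com is a placeholder for the Postgres host. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:postgresql://dbhost.example.com:5432/hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.postgresql.Driver</value>
</property>
```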

After unzipping one jar to remove the duplicated class files (Log4jLoggerFactory.class, Log4jMDCAdapter.class, StaticLoggerBinder.class, StaticMarkerBinder.class, StaticMDCBinder.class), I placed the repacked jar back at /usr/hdp/2.5.3.0-37/hive2/lib/log4j-slf4j-impl-2.6.2.jar. It then returned the same error as experienced with Oozie above.

1 ACCEPTED SOLUTION


I got one solution sorted: the exported blueprint contained %HOSTGROUP% tokens that needed to be substituted. I still have {{}} and ${} placeholders that I need to figure out how to handle before submitting it to the system...
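A minimal sketch of that substitution (the host-group-to-FQDN mapping below is illustrative; supply your own cluster's hostnames):

```python
import re

# Map blueprint host-group names to real FQDNs (illustrative values only).
HOSTS = {
    "host_group_1": "master1.example.com",
    "host_group_2": "db1.example.com",
}

TOKEN = re.compile(r"%HOSTGROUP::([^%]+)%")

def resolve_tokens(text, hosts):
    """Replace every %HOSTGROUP::name% token with the mapped FQDN.
    Raises KeyError if a token has no mapping, so nothing slips through."""
    return TOKEN.sub(lambda m: hosts[m.group(1)], text)

url = "jdbc:postgresql://%HOSTGROUP::host_group_2%:5432/hive"
print(resolve_tokens(url, HOSTS))  # jdbc:postgresql://db1.example.com:5432/hive
```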

