Support Questions

HiveMetaStore ClassNotFoundException

Explorer

Hi all,

I installed Hive using Cloudera Manager

 

Version: Cloudera Standard 4.7.3 (#163 built by jenkins on 20131030-1651 git: 73aaa4b1d5948460ab9ae67fc12426aefb84b43c)

 

Whenever I try to start the Hive Metastore through the UI, I get the following error (from stderr):

 

+ exec /opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/hive/bin/hive --config /run/cloudera-scm-agent/process/1470-hive-HIVEMETASTORE --service metastore -p 9083
Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hive.metastore.HiveMetaStore
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.util.RunJar.main(RunJar.java:201)

 

Anyone encounter this before?

thanks

dave

1 ACCEPTED SOLUTION


Hi Dave,

 

This is usually caused by a bad safety valve for the MapReduce or YARN client environment, typically because you installed LZO and made a mistake when setting the safety valve that lets MR pick up the parcel.

 

Here's the docs for using the LZO parcel:

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/4.6.2/Cloudera-Manager-Install... (link truncated)

 

The mistake people often make is forgetting to append to the existing value of HADOOP_CLASSPATH (or any other variable). Since Hive uses the MR client configs, when it sources hadoop-env.sh its classpath gets overwritten by your MR client environment safety valve.

 

So this is bad for client environment safety valves:

HADOOP_CLASSPATH=/my/new/stuff

and this is good:

HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/my/new/stuff
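You can see the difference in plain shell. The jar path below is hypothetical, just to stand in for whatever hadoop-env.sh has already put on the classpath before your safety valve runs:

```shell
# hadoop-env.sh (sourced by the Hive launcher) has already set the classpath.
# This path is a made-up example, not the real CDH layout:
HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH/lib/hive/lib/hive-metastore.jar

# BAD: an overwriting safety valve replaces the value, so the Hive jars
# (including the one with HiveMetaStore) vanish from the classpath:
BAD=/my/new/stuff

# GOOD: appending preserves the launcher's existing entries:
GOOD=$HADOOP_CLASSPATH:/my/new/stuff

echo "bad:  $BAD"
echo "good: $GOOD"
```

With the overwriting form, the JVM can no longer find org.apache.hadoop.hive.metastore.HiveMetaStore on the classpath, which is exactly the ClassNotFoundException in the stderr above.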

 

Thanks,

Darren


That was absolutely it, Darren. Someone attempted to install LZO on this dev cluster and did it improperly.

Thanks much!

dave