
hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar


New Contributor

When I execute an HQL query that includes a WHERE clause, it always fails with the following error:

Execution log at: /tmp/hadoop/hadoop_20140216223838_c3a1f6fd-4996-4f1f-9990-0771877ba276.log
java.io.FileNotFoundException: File does not exist: hdfs://hadoop-01:8020/opt/cloudera/parcels/CDH-5.0.0-0.cdh5b2.p0.27/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1116)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1301)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1298)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1298)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:733)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://hadoop-01:8020/opt/cloudera/parcels/CDH-5.0.0-0.cdh5b2.p0.27/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar)'

 

Yet /opt/cloudera/parcels/CDH-5.0.0-0.cdh5b2.p0.27/lib/hive/ really does contain hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar!

13 Replies

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

Hello,

 

Please read the log carefully: when you execute the HQL, Hive is looking for the jar at an HDFS location, while your message says you checked the path on your local machine. Either set your classpath correctly, or upload the jar to the HDFS location mentioned in the error, and then rerun your HQL.
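As a sketch of the second suggestion (upload the jar to the HDFS path the error mentions), the commands below mirror the local parcel path into HDFS. The paths are taken from the error message above and must be adjusted to your parcel version; running as the hdfs superuser is an assumption about your cluster setup.

```shell
# Workaround sketch: create the matching directory in HDFS and copy the
# jar there so the DistributedCache lookup at job submission succeeds.
# Adjust the parcel version/path to match the error on your cluster.
sudo -u hdfs hdfs dfs -mkdir -p /opt/cloudera/parcels/CDH-5.0.0-0.cdh5b2.p0.27/lib/hive/lib
sudo -u hdfs hdfs dfs -put \
  /opt/cloudera/parcels/CDH-5.0.0-0.cdh5b2.p0.27/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar \
  /opt/cloudera/parcels/CDH-5.0.0-0.cdh5b2.p0.27/lib/hive/lib/
```

Note this only papers over the problem: the jar in HDFS will go stale on upgrade, so fixing the classpath resolution is the cleaner route.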

Regards,
Chirag Patadia.

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

Hi,

 

I have a similar error. It complains about

 

Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://namenode:8020/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.0.jar)'

 

I did not configure hive-site.xml to use the jars from HDFS; those dependent jars should come from the local file system. Does anyone know how to fix this? We are running the CDH5 GA version.

 

regards,

 

james


Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

Did you get an answer for this? The Cloudera VM exhibits the same problem.

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

I am also facing the same issue.

Whenever I execute the following HQL query, I get the same error:

select count(*) from my_table;

 

Error: 

java.io.FileNotFoundException: File does not exist: hdfs://<namenodehost>:8020/var/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)

 

I am not sure why Hive is looking for this file in HDFS, whereas the jar is present on every datanode's local file system. I did not change any of the default Hive configuration. I am using CDH5 Hive installed through Cloudera Manager.

 

Can someone from cloudera help us on this?
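One thing worth checking (a sketch, not a confirmed fix): Hive resolves the auxiliary jar paths against the default filesystem unless a scheme is given, which would explain why a local parcel path gets prefixed with hdfs://. Setting hive.aux.jars.path with explicit file:// URIs forces local resolution. The property name is the standard Hive one; the jar path below is illustrative and must match your parcel layout.

```
<!-- Hypothetical hive-site.xml snippet: use explicit file:// URIs so the
     auxiliary jars are resolved on the local filesystem, not HDFS. -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar</value>
</property>
```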

 

Thanks

Sourabh

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

I think CDH 5.0.1 resolved this problem.

 

james

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

Let me add a few more details:

 

In my cluster I have added the LZO parcel and modified a few config parameters.

If I remove the LZO-related configuration and use the default values provided by Cloudera Manager, I am able to execute

select count(*) from my_table; 

 

Can you please provide any pointers on why Hive looks for hive-hbase-handler-0.12.0-cdh5.0.0.jar in HDFS when I add the LZO config?

Config changes for LZO:

YARN:

 

Service-Wide / Advanced

YARN Service MapReduce Advanced Configuration Snippet (Safety Valve)

Value:

<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,/var/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/*</value>
</property>
<property>
  <name>mapreduce.admin.user.env</name>
  <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:/var/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/native</value>
</property>

 

Gateway Default Group / Advanced

Gateway Client Environment Advanced Configuration Snippet for hadoop-env.sh (Safety Valve)

 

HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/*

JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/native

 

HDFS:

 

io.compression.codecs

Add:

com.hadoop.compression.lzo.LzopCodec

com.hadoop.compression.lzo.LzoCodec
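For reference, after adding the two LZO entries, the full io.compression.codecs property typically ends up looking like the following. This is a sketch: the exact list of default codecs ahead of the LZO ones depends on your CDH release, so compare against what Cloudera Manager shows.

```
<!-- Illustrative io.compression.codecs value after appending the LZO codecs;
     the default codec list may differ in your CDH version. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
```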

 

 

 

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

I am using CDH 5.0.1 and Hive version:

 

Hive

0.12.0+cdh5.0.1+315

 

I am trying to run a Hive query from the Hive CLI, "select id from hivedb.emp;", and it gives this error:

 

Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://cm.cloudera.com:8020/opt/cloudera/parcels/CDH-5.0.1-1.cdh5.0.1.p0.47/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.1.jar)'

 

It seems the issue is not fixed with CDH5.0.1.

 

The same query ("select id from hivedb.emp;") runs fine through the Hue Beeswax console.

 

 

Can anyone please let me know a temporary fix or permanent resolution for this issue?

This is a priority for me.

 

Thanks in advance.

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

I developed a shell script that uses the hive CLI to execute a script with 3 statements (DDL, DDL, DML). This works perfectly when run from the shell on a cluster node, but when I run it from Oozie (via Hue) the hive script fails on the DML statement.

 

java.io.FileNotFoundException: File does not exist: hdfs://nameservice/opt/cloudera/parcels/CDH-5.0.2-1.cdh5.0.2.p0.13/lib/hive/lib/hive-hbase-handler-0.12.0-cdh5.0.2.jar
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
[...]

Apparently the ClientDistributedCacheManager is confused about the filesystems. I suppose this is because the mapreduce job for the DML statement is being submitted from within a mapreduce job.

 

How should I configure the oozie workflow or the shell script to enable this?
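One thing to check (a sketch under assumptions, not a confirmed fix): make sure the Oozie shell action ships the cluster's hive-site.xml alongside your script, so the hive CLI launched inside the Oozie container resolves jars against the same configuration as a cluster node. The element names below follow the standard Oozie shell-action schema; the script and config paths are placeholders for your workflow directory layout.

```
<!-- Hypothetical workflow.xml fragment: ship the script and hive-site.xml
     into the shell action's working directory via <file> elements. -->
<shell xmlns="uri:oozie:shell-action:0.2">
  <job-tracker>${jobTracker}</job-tracker>
  <name-node>${nameNode}</name-node>
  <exec>run_hive.sh</exec>
  <file>scripts/run_hive.sh#run_hive.sh</file>
  <file>conf/hive-site.xml#hive-site.xml</file>
</shell>
```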

Re: hive-hbase-handler-0.12.0-cdh5.0.0-beta-2.jar

New Contributor

Any update on this? We are having the same issue with 5.0.2
