<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Read HBase Table by using Spark/Scala (Support Questions)</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Read-HBase-Table-by-using-Spark-Scala/m-p/164296#M126666</link>
    <description>&lt;P&gt;I found this article: &lt;STRONG&gt;How to run spark job to interact with secured HBase cluster&lt;/STRONG&gt; (https://community.hortonworks.com/articles/48988/how-to-run-spark-job-to-interact-with-secured-hbas.html), followed its instructions to set up and run the smoke test, and got this error: &lt;STRONG&gt;Exception in thread "main" java.io.FileNotFoundException: File file:/usr/hdp/current/hbase-client/lib/guava*.jar does not exist.&lt;/STRONG&gt; It gave me the command and example for my original question, but it still needs a final touch. Can anybody shed some light on it? A corrected command sketch and a minimal Scala read example follow at the end of this post.&lt;/P&gt;&lt;P&gt;I checked my VM (HDP_2.4_vmware_v3) and the jar file /usr/hdp/current/hbase-client/lib/guava-12.0.1.jar is there, so it looks as if the * wildcard in --jars is being passed through literally rather than expanded.&lt;/P&gt;&lt;PRE&gt;./bin/spark-submit --class org.apache.spark.examples.HBaseTest \
  --master yarn-cluster \
  --num-executors 2 --driver-memory 512m --executor-memory 512m --executor-cores 1 \
  --jars /usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/guava*.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/htrace-core*.jar \
  --files conf/hbase-site.xml \
  ./lib/spark-examples*.jar ambarismoketest&lt;/PRE&gt;&lt;PRE&gt;16/08/07 16:22:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/07 16:22:13 INFO TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
16/08/07 16:22:13 INFO RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/192.168.132.140:8050
16/08/07 16:22:14 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/08/07 16:22:15 INFO Client: Requesting a new application from cluster with 1 NodeManagers
16/08/07 16:22:15 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2250 MB per container)
16/08/07 16:22:15 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/08/07 16:22:15 INFO Client: Setting up container launch context for our AM
16/08/07 16:22:15 INFO Client: Setting up the launch environment for our AM container
16/08/07 16:22:15 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://sandbox.hortonworks.com:8020/hdp/apps/2.4.0.0-169/spark/spark-hdp-assembly.jar
16/08/07 16:22:15 INFO Client: Preparing resources for our AM container
16/08/07 16:22:15 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://sandbox.hortonworks.com:8020/hdp/apps/2.4.0.0-169/spark/spark-hdp-assembly.jar
16/08/07 16:22:15 INFO Client: Source and destination file systems are the same. Not copying hdfs://sandbox.hortonworks.com:8020/hdp/apps/2.4.0.0-169/spark/spark-hdp-assembly.jar
16/08/07 16:22:15 INFO Client: Uploading resource file:/usr/hdp/2.4.0.0-169/spark/lib/spark-examples-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar -&amp;gt; hdfs://sandbox.hortonworks.com:8020/user/root/.sparkStaging/application_1470585857897_0001/spark-examples-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar
16/08/07 16:22:18 INFO Client: Uploading resource file:/usr/hdp/current/hbase-client/lib/hbase-client.jar -&amp;gt; hdfs://sandbox.hortonworks.com:8020/user/root/.sparkStaging/application_1470585857897_0001/hbase-client.jar
16/08/07 16:22:18 INFO Client: Uploading resource file:/usr/hdp/current/hbase-client/lib/hbase-common.jar -&amp;gt; hdfs://sandbox.hortonworks.com:8020/user/root/.sparkStaging/application_1470585857897_0001/hbase-common.jar
16/08/07 16:22:18 INFO Client: Uploading resource file:/usr/hdp/current/hbase-client/lib/hbase-server.jar -&amp;gt; hdfs://sandbox.hortonworks.com:8020/user/root/.sparkStaging/application_1470585857897_0001/hbase-server.jar
16/08/07 16:22:18 INFO Client: Uploading resource file:/usr/hdp/current/hbase-client/lib/guava*.jar -&amp;gt; hdfs://sandbox.hortonworks.com:8020/user/root/.sparkStaging/application_1470585857897_0001/guava*.jar
16/08/07 16:22:18 INFO Client: Deleting staging directory .sparkStaging/application_1470585857897_0001
Exception in thread "main" java.io.FileNotFoundException: File file:/usr/hdp/current/hbase-client/lib/guava*.jar does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
        at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:317)
        at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:407)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6$$anonfun$apply$3.apply(Client.scala:471)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6$$anonfun$apply$3.apply(Client.scala:470)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6.apply(Client.scala:470)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6.apply(Client.scala:468)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:468)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:722)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:142)
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1065)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1125)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)&lt;/PRE&gt;
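&lt;P&gt;For reference, here is a sketch of the same command with the wildcards in --jars replaced by explicit file names. Because the --jars value is a single comma-separated shell word, the shell cannot glob-expand the embedded * patterns (no file matches the word as a whole), so the literal path guava*.jar reaches the YARN client and fails; the standalone ./lib/spark-examples*.jar word, by contrast, does expand. The guava-12.0.1.jar name is the one I confirmed on my sandbox; the htrace-core-3.1.0-incubating.jar name is an assumption for HDP 2.4, so verify it first with ls /usr/hdp/current/hbase-client/lib/htrace-core*.jar.&lt;/P&gt;&lt;PRE&gt;./bin/spark-submit --class org.apache.spark.examples.HBaseTest \
  --master yarn-cluster \
  --num-executors 2 --driver-memory 512m --executor-memory 512m --executor-cores 1 \
  --jars /usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar \
  --files conf/hbase-site.xml \
  ./lib/spark-examples*.jar ambarismoketest&lt;/PRE&gt;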
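&lt;P&gt;And since the underlying goal is reading an HBase table from Spark/Scala, below is a minimal read sketch along the lines of the bundled org.apache.spark.examples.HBaseTest example, using TableInputFormat with newAPIHadoopRDD on the Spark 1.6-era RDD API. The object name HBaseReadSketch and the table-name argument are illustrative, not from the article; hbase-site.xml must be visible on the classpath (hence --files conf/hbase-site.xml above).&lt;/P&gt;&lt;PRE&gt;import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: count the rows of an HBase table from Spark.
object HBaseReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HBaseReadSketch"))
    // Picks up hbase-site.xml from the classpath (shipped via --files).
    val hbaseConf = HBaseConfiguration.create()
    // Table name comes in as the first argument, e.g. "ambarismoketest".
    hbaseConf.set(TableInputFormat.INPUT_TABLE, args(0))
    // Each record is (row key, Result) as produced by TableInputFormat.
    val rdd = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])
    println("Rows in table: " + rdd.count())
    sc.stop()
  }
}&lt;/PRE&gt;</description>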
    <pubDate>Mon, 08 Aug 2016 00:39:05 GMT</pubDate>
    <dc:creator>Howchoy</dc:creator>
    <dc:date>2016-08-08T00:39:05Z</dc:date>
  </channel>
</rss>

