<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Unable to run Spark Job in cluster mode in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153144#M115609</link>
    <description>&lt;P&gt;I'm able to run a Spark job in client mode, but the same job fails in cluster mode with "Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient" (Spark 1.4 on YARN). The full error log is in the question below.&lt;/P&gt;</description>
    <pubDate>Thu, 23 Jun 2016 10:03:09 GMT</pubDate>
    <dc:creator>bandarusridhar1</dc:creator>
    <dc:date>2016-06-23T10:03:09Z</dc:date>
    <item>
      <title>Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153144#M115609</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm able to run the job in client mode but unable to run the same job in cluster mode. Can someone please help me? Below is the error message. &lt;/P&gt;&lt;PRE&gt;16/06/22 21:57:10 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
        at org.apache.spark.sql.hive.client.ClientWrapper.&amp;lt;init&amp;gt;(ClientWrapper.scala:117)
        at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:165)
        at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:163)
        at org.apache.spark.sql.hive.HiveContext.&amp;lt;init&amp;gt;(HiveContext.scala:170)
        at DisplayAnalysisForecast$.main(DisplayAnalysisForecast.scala:35)
        at DisplayAnalysisForecast.main(DisplayAnalysisForecast.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:486)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.&amp;lt;init&amp;gt;(RetryingMetaStoreClient.java:62)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
        ... 11 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
        ... 16 more
Caused by: javax.jdo.JDOFatalUserException: Class org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
NestedThrowables:
java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1175)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:310)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:339)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:248)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:223)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.&amp;lt;init&amp;gt;(RawStoreProxy.java:58)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.&amp;lt;init&amp;gt;(HiveMetaStore.java:356)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.&amp;lt;init&amp;gt;(RetryingHMSHandler.java:54)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.&amp;lt;init&amp;gt;(HiveMetaStoreClient.java:171)
        ... 21 more
Caused by: java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:270)
        at javax.jdo.JDOHelper$18.run(JDOHelper.java:2018)
        at javax.jdo.JDOHelper$18.run(JDOHelper.java:2016)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.forName(JDOHelper.java:2015)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1162)
        ... 40 more
16/06/22 21:57:10 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient)
16/06/22 21:57:18 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
16/06/22 21:57:18 INFO spark.SparkContext: Invoking stop() from shutdown hook


&lt;/PRE&gt;&lt;P&gt;I'm using Spark 1.4 on a Hadoop/YARN cluster. Any help is highly appreciated; thanks in advance.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 10:03:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153144#M115609</guid>
      <dc:creator>bandarusridhar1</dc:creator>
      <dc:date>2016-06-23T10:03:09Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153145#M115610</link>
      <description>&lt;DIV&gt;&lt;A rel="user" href="https://community.cloudera.com/users/5746/bandarusridhar1.html" nodeid="5746"&gt;@Sri  Bandaru&lt;/A&gt; &lt;/DIV&gt;&lt;OL&gt;
&lt;LI&gt;Check the contents of hive-site.xml; for Spark it should look like the example below.&lt;/LI&gt;&lt;LI&gt;Add hive-site.xml via --files so that Spark can read the Hive configuration. Make sure --files comes before your .jar file.&lt;/LI&gt;&lt;LI&gt;Add the DataNucleus jars with the --jars option when you submit.&lt;/LI&gt;
&lt;LI&gt;&lt;PRE&gt; &amp;lt;configuration&amp;gt;
    &amp;lt;property&amp;gt;
      &amp;lt;name&amp;gt;hive.metastore.uris&amp;lt;/name&amp;gt;
      &amp;lt;value&amp;gt;thrift://sandbox.hortonworks.com:9083&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
  &amp;lt;/configuration&amp;gt;&lt;/PRE&gt;
&lt;/LI&gt;&lt;LI&gt;The full command sequence:&lt;/LI&gt;&lt;/OL&gt;&lt;PRE&gt;spark-submit \
  --class &amp;lt;Your.class.name&amp;gt; \
  --master yarn-cluster \
  --num-executors 1 \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  --files /usr/hdp/current/spark-client/conf/hive-site.xml \
  --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
  target/YOUR_JAR-1.0.0-SNAPSHOT.jar "show tables" "select * from your_table"&lt;/PRE&gt;</description>
      <pubDate>Thu, 23 Jun 2016 10:31:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153145#M115610</guid>
      <dc:creator>GeeKay2015</dc:creator>
      <dc:date>2016-06-23T10:31:20Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153146#M115611</link>
      <description>&lt;P&gt;&lt;A rel="user" href="#"&gt;@Gangadhar Kadam&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Thanks for the quick response. HA is enabled for HiveServer2, and hive.metastore.uris points to two Thrift servers.&lt;/P&gt;&lt;P&gt;I have already &lt;A href="https://community.hortonworks.com/questions/5798/spark-hive-tables-not-found-when-running-in-yarn-c.html"&gt;followed&lt;/A&gt; all of those steps, but that doesn't help in my scenario; the same job runs fine in client mode. &lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 11:14:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153146#M115611</guid>
      <dc:creator>bandarusridhar1</dc:creator>
      <dc:date>2016-06-23T11:14:30Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153147#M115612</link>
      <description>&lt;P&gt;Can you share your code?&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 11:18:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153147#M115612</guid>
      <dc:creator>GeeKay2015</dc:creator>
      <dc:date>2016-06-23T11:18:28Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153148#M115613</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/971/gkadam2011.html" nodeid="971"&gt;@Gangadhar Kadam&lt;/A&gt; &lt;/P&gt;&lt;PRE&gt;spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 2G --executor-memory 2G --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar  --files /usr/hdp/current/spark-client/conf/hive-site.xml /usr/hdp/current/spark-client/lib/spark-examples-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar 10&lt;/PRE&gt;</description>
      <pubDate>Thu, 23 Jun 2016 11:36:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153148#M115613</guid>
      <dc:creator>bandarusridhar1</dc:creator>
      <dc:date>2016-06-23T11:36:11Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153149#M115614</link>
      <description>&lt;P&gt;Did you try putting --files before --jars?&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 12:17:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153149#M115614</guid>
      <dc:creator>GeeKay2015</dc:creator>
      <dc:date>2016-06-23T12:17:56Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153150#M115615</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/971/gkadam2011.html" nodeid="971"&gt;@Gangadhar Kadam&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Yes. This time I got a different error:&lt;/P&gt;&lt;PRE&gt;ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.&lt;/PRE&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:55:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153150#M115615</guid>
      <dc:creator>bandarusridhar1</dc:creator>
      <dc:date>2016-06-23T21:55:46Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153151#M115616</link>
      <description>&lt;P&gt;Suggested solution:&lt;/P&gt;&lt;P&gt;1. As discovered, we are hitting a bug with HDP 2.3.2 and Ambari 2.2.1 (https://hortonworks.jira.com/browse/BUG-56393): starting from Ambari 2.2.1, Ambari does not manage the Spark version if the HDP stack is older than HDP 2.3.4, so these properties are left unresolved:&lt;/P&gt;&lt;PRE&gt;spark.driver.extraJavaOptions=-Dhdp.version={{hdp_full_version}}
spark.yarn.am.extraJavaOptions=-Dhdp.version={{hdp_full_version}}&lt;/PRE&gt;&lt;P&gt;2. As a workaround, we modified the property values and hard-coded the correct HDP version:&lt;/P&gt;&lt;PRE&gt;spark.driver.extraJavaOptions=-Dhdp.version=2.3.2.0-2950
spark.yarn.am.extraJavaOptions=-Dhdp.version=2.3.2.0-2950&lt;/PRE&gt;&lt;P&gt;Spark Pi jobs are now running fine.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Jul 2016 22:19:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153151#M115616</guid>
      <dc:creator>bandarusridhar1</dc:creator>
      <dc:date>2016-07-29T22:19:32Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to run Spark Job in cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153152#M115617</link>
      <description>&lt;P&gt;Thank you, &lt;A href="https://community.hortonworks.com/users/971/gkadam2011.html"&gt;@Gangadhar Kadam&lt;/A&gt;. The steps you provided helped us resolve the issue.&lt;/P&gt;</description>
      <pubDate>Thu, 11 Jan 2018 01:19:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Unable-to-run-Spark-Job-in-clutser-mode/m-p/153152#M115617</guid>
      <dc:creator>gurjeet_maini</dc:creator>
      <dc:date>2018-01-11T01:19:39Z</dc:date>
    </item>
  </channel>
</rss>

