<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Spark program in Eclipse in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38087#M21221</link>
    <description>Thanks srowen!</description>
    <pubDate>Mon, 29 Feb 2016 12:49:30 GMT</pubDate>
    <dc:creator>Orson</dc:creator>
    <dc:date>2016-02-29T12:49:30Z</dc:date>
    <item>
      <title>Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38085#M21219</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm trying to create a simple Spark program in Eclipse. Unfortunately, I'm getting an out-of-memory error (&lt;FONT face="courier new,courier"&gt;Exception in thread "main" java.lang.OutOfMemoryError: PermGen space&lt;/FONT&gt;)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here's the configuration of my &lt;FONT face="courier new,courier"&gt;eclipse.ini&lt;/FONT&gt; file:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;--launcher.XXMaxPermSize&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;256m&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;--launcher.defaultAction&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;openFile&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-vmargs&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-Xms512m&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-Xmx1024m&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-XX:+UseParallelGC&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-XX:PermSize=8g&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier" size="2"&gt;-XX:MaxPermSize=10g&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" size="3"&gt;&lt;SPAN&gt;Run configurations &amp;gt; Arguments:&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;&lt;SPAN&gt;-Xmx10g&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" size="3"&gt;&lt;SPAN&gt;Scala code:&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;&lt;SPAN&gt;...&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;&lt;SPAN&gt;val sqlContext = new HiveContext(spark)&lt;BR /&gt;sqlContext.sql("SELECT * from sample_csv limit 1")&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;&lt;SPAN&gt;...&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" size="3"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" size="3"&gt;&lt;SPAN&gt;Logs:&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier" size="2"&gt;&lt;SPAN&gt;Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties&lt;BR /&gt;16/02/29 20:11:46 INFO SparkContext: Running Spark version 1.6.0&lt;BR /&gt;16/02/29 20:11:56 INFO SecurityManager: Changing view acls to: Orson&lt;BR /&gt;16/02/29 20:11:56 INFO SecurityManager: Changing modify acls to: Orson&lt;BR /&gt;16/02/29 20:11:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Orson); users with modify permissions: Set(Orson)&lt;BR /&gt;16/02/29 20:11:57 INFO Utils: Successfully started service 'sparkDriver' on port 57135.&lt;BR /&gt;16/02/29 20:11:57 INFO Slf4jLogger: Slf4jLogger started&lt;BR /&gt;16/02/29 20:11:58 INFO Remoting: Starting remoting&lt;BR /&gt;16/02/29 20:11:58 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.181.1:57148]&lt;BR /&gt;16/02/29 20:11:58 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 57148.&lt;BR /&gt;16/02/29 20:11:58 INFO SparkEnv: 
Registering MapOutputTracker&lt;BR /&gt;16/02/29 20:11:58 INFO SparkEnv: Registering BlockManagerMaster&lt;BR /&gt;16/02/29 20:11:58 INFO DiskBlockManager: Created local directory at C:\Users\Orson\AppData\Local\Temp\blockmgr-be56133f-c657-4146-9e19-cfae46545b70&lt;BR /&gt;16/02/29 20:11:58 INFO MemoryStore: MemoryStore started with capacity 6.4 GB&lt;BR /&gt;16/02/29 20:11:58 INFO SparkEnv: Registering OutputCommitCoordinator&lt;BR /&gt;16/02/29 20:11:58 INFO Utils: Successfully started service 'SparkUI' on port 4040.&lt;BR /&gt;16/02/29 20:11:58 INFO SparkUI: Started SparkUI at &lt;A href="http://192.168.181.1:4040" target="_blank"&gt;http://192.168.181.1:4040&lt;/A&gt;&lt;BR /&gt;16/02/29 20:11:58 INFO Executor: Starting executor ID driver on host localhost&lt;BR /&gt;16/02/29 20:11:58 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57155.&lt;BR /&gt;16/02/29 20:11:58 INFO NettyBlockTransferService: Server created on 57155&lt;BR /&gt;16/02/29 20:11:58 INFO BlockManagerMaster: Trying to register BlockManager&lt;BR /&gt;16/02/29 20:11:58 INFO BlockManagerMasterEndpoint: Registering block manager localhost:57155 with 6.4 GB RAM, BlockManagerId(driver, localhost, 57155)&lt;BR /&gt;16/02/29 20:11:58 INFO BlockManagerMaster: Registered BlockManager&lt;BR /&gt;16/02/29 20:12:00 INFO HiveContext: Initializing execution hive, version 1.2.1&lt;BR /&gt;16/02/29 20:12:00 INFO ClientWrapper: Inspected Hadoop version: 2.2.0&lt;BR /&gt;16/02/29 20:12:00 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.2.0&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize&lt;BR /&gt;16/02/29 20:12:00 INFO deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. 
Instead, use mapreduce.job.committer.setup.cleanup.needed&lt;BR /&gt;16/02/29 20:12:00 WARN HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist&lt;BR /&gt;16/02/29 20:12:00 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore&lt;BR /&gt;16/02/29 20:12:00 INFO ObjectStore: ObjectStore, initialize called&lt;BR /&gt;16/02/29 20:12:01 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored&lt;BR /&gt;16/02/29 20:12:01 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored&lt;BR /&gt;16/02/29 20:12:11 WARN HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist&lt;BR /&gt;16/02/29 20:12:11 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"&lt;BR /&gt;16/02/29 20:12:13 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.&lt;BR /&gt;16/02/29 20:12:13 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.&lt;BR /&gt;16/02/29 20:12:19 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.&lt;BR /&gt;16/02/29 20:12:19 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.&lt;BR /&gt;16/02/29 20:12:21 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY&lt;BR /&gt;16/02/29 20:12:21 INFO ObjectStore: Initialized ObjectStore&lt;BR /&gt;16/02/29 20:12:21 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0&lt;BR /&gt;16/02/29 20:12:22 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException&lt;BR /&gt;16/02/29 20:12:24 WARN : Your hostname, solvento-orson resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:c0a8:4801%42, but we couldn't find any external IP address!&lt;BR /&gt;16/02/29 20:12:25 INFO HiveMetaStore: Added admin role in metastore&lt;BR /&gt;16/02/29 20:12:25 INFO HiveMetaStore: Added public role in metastore&lt;BR /&gt;16/02/29 20:12:26 INFO HiveMetaStore: No user is added in admin role, since config is empty&lt;BR /&gt;16/02/29 20:12:26 INFO HiveMetaStore: 0: get_all_databases&lt;BR /&gt;16/02/29 20:12:26 INFO audit: ugi=Orson ip=unknown-ip-addr cmd=get_all_databases&lt;BR /&gt;16/02/29 20:12:26 INFO HiveMetaStore: 0: get_functions: db=default pat=*&lt;BR /&gt;16/02/29 20:12:26 INFO audit: ugi=Orson ip=unknown-ip-addr cmd=get_functions: db=default pat=*&lt;BR /&gt;16/02/29 20:12:26 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.&lt;BR /&gt;16/02/29 20:12:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable&lt;BR /&gt;16/02/29 20:12:28 INFO SessionState: Created local directory: C:/Users/Orson/AppData/Local/Temp/0c1b1e0d-5e6c-47b8-a8d5-5398e262c874_resources&lt;BR /&gt;16/02/29 20:12:28 INFO SessionState: Created HDFS directory: /tmp/hive/Orson/0c1b1e0d-5e6c-47b8-a8d5-5398e262c874&lt;BR /&gt;16/02/29 20:12:28 INFO SessionState: Created local directory: C:/Users/Orson/AppData/Local/Temp/Orson/0c1b1e0d-5e6c-47b8-a8d5-5398e262c874&lt;BR /&gt;16/02/29 20:12:28 INFO SessionState: Created HDFS directory: /tmp/hive/Orson/0c1b1e0d-5e6c-47b8-a8d5-5398e262c874/_tmp_space.db&lt;BR /&gt;16/02/29 20:12:28 WARN HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist&lt;BR /&gt;16/02/29 20:12:28 INFO HiveContext: default warehouse location is /user/hive/warehouse&lt;BR /&gt;16/02/29 20:12:28 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.&lt;BR /&gt;16/02/29 20:12:28 INFO ClientWrapper: Inspected Hadoop version: 2.2.0&lt;BR /&gt;16/02/29 20:12:28 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.2.0&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize&lt;BR /&gt;16/02/29 20:12:29 INFO deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. 
Instead, use mapreduce.job.committer.setup.cleanup.needed&lt;BR /&gt;16/02/29 20:12:29 WARN HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist&lt;BR /&gt;16/02/29 20:12:29 INFO metastore: Trying to connect to metastore with URI thrift://quickstart.cloudera:9083&lt;BR /&gt;16/02/29 20:12:29 INFO metastore: Connected to metastore.&lt;BR /&gt;16/02/29 20:12:29 INFO SessionState: Created local directory: C:/Users/Orson/AppData/Local/Temp/ccfc9462-2c5a-49ce-a811-503694353c1a_resources&lt;BR /&gt;16/02/29 20:12:30 INFO SessionState: Created HDFS directory: /tmp/hive/Orson/ccfc9462-2c5a-49ce-a811-503694353c1a&lt;BR /&gt;16/02/29 20:12:30 INFO SessionState: Created local directory: C:/Users/Orson/AppData/Local/Temp/Orson/ccfc9462-2c5a-49ce-a811-503694353c1a&lt;BR /&gt;16/02/29 20:12:30 INFO SessionState: Created HDFS directory: /tmp/hive/Orson/ccfc9462-2c5a-49ce-a811-503694353c1a/_tmp_space.db&lt;BR /&gt;16/02/29 20:12:30 INFO ParseDriver: Parsing command: SELECT * from sample_csv limit 1&lt;BR /&gt;Exception in thread "main" java.lang.OutOfMemoryError: PermGen space&lt;BR /&gt;at java.lang.ClassLoader.defineClass1(Native Method)&lt;BR /&gt;at java.lang.ClassLoader.defineClass(ClassLoader.java:800)&lt;BR /&gt;at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)&lt;BR /&gt;at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)&lt;BR /&gt;at java.net.URLClassLoader.access$100(URLClassLoader.java:71)&lt;BR /&gt;at java.net.URLClassLoader$1.run(URLClassLoader.java:361)&lt;BR /&gt;at java.net.URLClassLoader$1.run(URLClassLoader.java:355)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at java.net.URLClassLoader.findClass(URLClassLoader.java:354)&lt;BR /&gt;at java.lang.ClassLoader.loadClass(ClassLoader.java:425)&lt;BR /&gt;at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)&lt;BR /&gt;at java.lang.ClassLoader.loadClass(ClassLoader.java:358)&lt;BR /&gt;at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.&amp;lt;init&amp;gt;(HiveParser_IdentifiersParser.java:12377)&lt;BR /&gt;at org.apache.hadoop.hive.ql.parse.HiveParser.&amp;lt;init&amp;gt;(HiveParser.java:706)&lt;BR /&gt;at org.apache.hadoop.hive.ql.parse.HiveParser.&amp;lt;init&amp;gt;(HiveParser.java:700)&lt;BR /&gt;at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:195)&lt;BR /&gt;at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)&lt;BR /&gt;at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:276)&lt;BR /&gt;at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)&lt;BR /&gt;at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)&lt;BR /&gt;at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:137)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:237)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:237)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:217)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:249)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:249)&lt;BR 
/&gt;at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:197)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:249)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:249)&lt;BR /&gt;at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:217)&lt;BR /&gt;16/02/29 20:12:32 INFO SparkContext: Invoking stop() from shutdown hook&lt;BR /&gt;16/02/29 20:12:32 INFO SparkUI: Stopped Spark web UI at &lt;A href="http://192.168.181.1:4040" target="_blank"&gt;http://192.168.181.1:4040&lt;/A&gt;&lt;BR /&gt;16/02/29 20:12:32 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!&lt;BR /&gt;16/02/29 20:12:32 INFO MemoryStore: MemoryStore cleared&lt;BR /&gt;16/02/29 20:12:32 INFO BlockManager: BlockManager stopped&lt;BR /&gt;16/02/29 20:12:32 INFO BlockManagerMaster: BlockManagerMaster stopped&lt;BR /&gt;16/02/29 20:12:32 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!&lt;BR /&gt;16/02/29 20:12:32 INFO SparkContext: Successfully stopped SparkContext&lt;BR /&gt;16/02/29 20:12:32 INFO ShutdownHookManager: Shutdown hook called&lt;BR /&gt;16/02/29 20:12:32 INFO ShutdownHookManager: Deleting directory C:\Users\Orson\AppData\Local\Temp\spark-7efe7a3c-e47c-41a0-8e94-a1fd19ca7197&lt;BR /&gt;16/02/29 20:12:32 INFO ShutdownHookManager: Deleting directory C:\Users\Orson\AppData\Local\Temp\spark-b20f169a-0a3b-426e-985e-6641b3be3fd6&lt;BR /&gt;16/02/29 20:12:32 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.&lt;BR /&gt;16/02/29 20:12:32 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.&lt;BR /&gt;16/02/29 20:12:32 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.&lt;BR /&gt;16/02/29 20:12:32 ERROR ShutdownHookManager: Exception while deleting Spark temp dir: C:\Users\Orson\AppData\Local\Temp\spark-b20f169a-0a3b-426e-985e-6641b3be3fd6&lt;BR /&gt;java.io.IOException: Failed to delete: C:\Users\Orson\AppData\Local\Temp\spark-b20f169a-0a3b-426e-985e-6641b3be3fd6&lt;BR /&gt;at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:928)&lt;BR /&gt;at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:65)&lt;BR /&gt;at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:62)&lt;BR /&gt;at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)&lt;BR /&gt;at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)&lt;BR /&gt;at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:62)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)&lt;BR /&gt;at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1741)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)&lt;BR /&gt;at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)&lt;BR /&gt;at scala.util.Try$.apply(Try.scala:191)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)&lt;BR /&gt;at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)&lt;BR /&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" size="3"&gt;&lt;SPAN&gt;Thanks!&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:06:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38085#M21219</guid>
      <dc:creator>Orson</dc:creator>
      <dc:date>2022-09-16T10:06:16Z</dc:date>
    </item>
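    <!--
      A note on the configuration above: the -vmargs section of eclipse.ini applies to the
      JVM running the Eclipse IDE itself, not to the JVM Eclipse launches for the
      application, and PermGen is a separate region from the heap, so raising -Xmx alone
      cannot fix a PermGen OOM. A minimal sketch of what the Run Configuration's
      "VM arguments" might look like for a pre-Java-8 JVM (the sizes are illustrative
      assumptions, not values from the thread; on Java 8 and later PermGen no longer exists
      and MaxPermSize is ignored):

        -Xmx2g
        -XX:MaxPermSize=512m
    -->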
    <item>
      <title>Re: Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38086#M21220</link>
      <description>The error points to the problem -- you may have plenty of heap memory&lt;BR /&gt;but not enough PermGen space in the JVM. Try something like&lt;BR /&gt;-XX:MaxPermSize=2g in the JVM options for your executors.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Feb 2016 12:40:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38086#M21220</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2016-02-29T12:40:14Z</dc:date>
    </item>
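    <!--
      A minimal Scala sketch of how this suggestion could be applied when the application
      builds its own SparkConf. spark.executor.extraJavaOptions is a standard Spark 1.x
      property, but note that in local mode everything runs inside the driver JVM, so for
      the Eclipse scenario above the flag belongs in the Run Configuration's VM arguments
      instead (the app name below is a hypothetical placeholder):

        import org.apache.spark.{SparkConf, SparkContext}

        val conf = new SparkConf()
          .setAppName("PermGenExample")
          // Applies to executor JVMs launched on a cluster; it cannot resize
          // the already-running driver JVM.
          .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=2g")
        val sc = new SparkContext(conf)
    -->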
    <item>
      <title>Re: Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38087#M21221</link>
      <description>Thanks srowen!</description>
      <pubDate>Mon, 29 Feb 2016 12:49:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/38087#M21221</guid>
      <dc:creator>Orson</dc:creator>
      <dc:date>2016-02-29T12:49:30Z</dc:date>
    </item>
    <item>
      <title>Re: Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/43490#M21222</link>
      <description>&lt;P&gt;Hello guys,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When we build a Spark application, we usually export it as a JAR and run it on the cluster. Is there a way to run the application on the cluster directly from Eclipse (with some setting)? That would be very efficient for testing/debugging. So just wondering if there is anything out there.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Wed, 03 Aug 2016 14:37:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/43490#M21222</guid>
      <dc:creator>uzi</dc:creator>
      <dc:date>2016-08-03T14:37:26Z</dc:date>
    </item>
    <item>
      <title>Re: Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/43491#M21223</link>
      <description>&lt;P&gt;You don't need to export a JAR for unit testing. You can do:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;new SparkConf().setMaster("local[2]")&lt;/FONT&gt; and run the program as a usual Java application in the IDE. Also make sure that you have all the dependent libraries on the classpath.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Aug 2016 15:08:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/43491#M21223</guid>
      <dc:creator>_Umesh</dc:creator>
      <dc:date>2016-08-03T15:08:32Z</dc:date>
    </item>
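    <!--
      A self-contained Scala sketch of the approach described above, runnable as a plain
      Java/Scala application from the IDE against Spark 1.x (the object name and the sample
      data are illustrative):

        import org.apache.spark.{SparkConf, SparkContext}

        object LocalSparkApp {
          def main(args: Array[String]): Unit = {
            // local[2] runs the driver and two worker threads inside this one JVM,
            // so no cluster and no exported JAR are needed for a quick test.
            val conf = new SparkConf().setAppName("LocalSparkApp").setMaster("local[2]")
            val sc = new SparkContext(conf)
            val counts = sc.parallelize(Seq("a", "b", "a")).map(w => (w, 1)).reduceByKey(_ + _)
            counts.collect().foreach(println)
            sc.stop()
          }
        }
    -->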
    <item>
      <title>Re: Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/44788#M21224</link>
      <description>&lt;P&gt;When we use a Hive context (Hive tables) or Phoenix tables within our Spark application, it is very difficult (as a matter of fact, I think it is impossible without going through pointless installations on the local machine) to run the application locally through Eclipse.&lt;/P&gt;&lt;P&gt;Anyway, I was looking for something like this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="http://www.dbengineering.info/2016/09/debug-spark-application-running-on-cloud.html" target="_blank"&gt;http://www.dbengineering.info/2016/09/debug-spark-application-running-on-cloud.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;where we are able to run it in debug mode. For the moment I am happy with this. Just sharing in case someone else has the same question I had a few months ago.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 07 Sep 2016 01:41:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/44788#M21224</guid>
      <dc:creator>uzi</dc:creator>
      <dc:date>2016-09-07T01:41:19Z</dc:date>
    </item>
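    <!--
      The linked post is about remote debugging; a common way to get there (a sketch using
      standard JDWP options, not necessarily the blog's exact steps) is to start the driver
      JVM with a debug agent and then attach Eclipse to it via Run &gt; Debug Configurations
      &gt; Remote Java Application, pointing at the same host and port:

        # Suspends the driver until a debugger attaches on port 5005.
        SPARK_SUBMIT_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005" \
          spark-submit myapp.jar

      Here myapp.jar is a placeholder for the application JAR.
    -->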
    <item>
      <title>Re: Spark program in Eclipse</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/54304#M21225</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am facing the issue below. Please help me solve it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;17/05/02 11:07:13 ERROR ShutdownHookManager: Exception while deleting Spark temp dir: C:\Users\arpitbh\AppData\Local\Temp\spark-07d9637a-2eb8-4a32-8490-01e106a80d6b&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;java.io.IOException: Failed to delete: C:\Users\arpitbh\AppData\Local\Temp\spark-07d9637a-2eb8-4a32-8490-01e106a80d6b&lt;/STRONG&gt;&lt;BR /&gt;at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)&lt;BR /&gt;at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:65)&lt;BR /&gt;at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:62)&lt;BR /&gt;at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)&lt;BR /&gt;at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)&lt;BR /&gt;at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:62)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)&lt;BR /&gt;at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)&lt;BR /&gt;at scala.util.Try$.apply(Try.scala:192)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)&lt;BR /&gt;at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)&lt;BR /&gt;at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2017 06:11:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-program-in-eclipse/m-p/54304#M21225</guid>
      <dc:creator>arpitbh</dc:creator>
      <dc:date>2017-05-02T06:11:43Z</dc:date>
    </item>
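    <!--
      This "Failed to delete" error on Windows is a long-standing and mostly harmless issue
      (tracked upstream as SPARK-12216): the shutdown hook cannot remove temp files that are
      still held open, but it fires after the job has finished, so results are unaffected.
      If the message is noisy, one common workaround is to silence that one logger in
      log4j.properties (the logger name matches the class shown in the stack trace):

        log4j.logger.org.apache.spark.util.ShutdownHookManager=OFF
    -->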
  </channel>
</rss>