Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3174 | 09-16-2016 11:56 AM
| 1354 | 09-13-2016 08:47 PM
| 5344 | 09-06-2016 11:00 AM
| 3093 | 08-05-2016 11:51 AM
| 5169 | 08-03-2016 02:58 PM
04-25-2016
06:56 PM
1 Kudo
@fnu rasool You are missing the "hadoop-yarn-api.jar" file; below is the location where you can find the jar: /usr/hdp/<version>/hadoop-yarn/hadoop-yarn-api-<version>.jar
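If you are not sure which version directory applies on your node, a quick way to locate the jar (a sketch, assuming the standard /usr/hdp install layout):
Bash# find /usr/hdp/ -name "hadoop-yarn-api*.jar"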
04-25-2016
08:07 AM
@Amit Dass Hi Amit, just checking whether you are still facing this issue. In case my answer was useful to you, please click the Accept button on my answer.
04-24-2016
02:06 PM
@JR Cao Hi, it means that either your cluster doesn't have sufficient resources or, in some cases, none of the NodeManagers are connected to the ResourceManager. Please cross-check your ResourceManager UI and look at the "Memory Used", "Memory Total" and "Active Nodes" sections: http://<RM IP>:8088 (see attached screenshot).
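You can also verify this from the command line instead of the UI; a quick check, assuming the yarn client is installed on the node:
Bash# yarn node -list
If no NodeManagers show up in the output, they are not registered with the ResourceManager.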
04-24-2016
01:30 PM
1 Kudo
@wayne2chicago Here is the simplest way to find a configuration parameter's value from the command line:
Bash# hdfs getconf -confKey yarn.resourcemanager.work-preserving-recovery.enabled
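The same approach works for any key the client configuration loads; for example (dfs.replication here is just an illustration):
Bash# hdfs getconf -confKey dfs.replication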
04-22-2016
07:03 PM
@AKILA VEL Thanks for confirming. Can you please click the Accept button on my answer? Regarding building the jar, I have provided my answer on your other question here: https://community.hortonworks.com/questions/28962/how-to-create-jar-file-from-spark-scala-file.html#answer-28966
04-22-2016
06:03 PM
@Amit Dass Please accept my answer if this is now resolved after my suggestion.
04-22-2016
02:56 PM
@AKILA VEL
Here is a sample program:

import org.apache.spark._
import org.apache.spark.SparkContext._

object WordCount {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("wordCount")
    // Create a Scala Spark context.
    val sc = new SparkContext(conf)
    // Load our input data.
    val input = sc.textFile("/user/test/input/data.txt")
    // Split each line into words.
    val words = input.flatMap(line => line.split(" "))
    // Transform into (word, count) pairs.
    val counts = words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
    // Save the word counts back out to a text file, triggering evaluation.
    counts.saveAsTextFile("/user/test/output")
  }
}
To run it, start the shell, load the file, and invoke main:
spark-shell --master yarn-client
scala> :load <file path>
scala> WordCount.main(Array.empty)
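If you later package the class into a jar, the same job can be launched non-interactively; a sketch, assuming a hypothetical jar name wordcount.jar:
spark-submit --class WordCount --master yarn-client wordcount.jar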
04-22-2016
02:35 PM
@AKILA VEL You can run the .scala file directly in spark-shell: :load PATH_TO_FILE
04-22-2016
02:15 PM
1 Kudo
Try to execute the below command, and once you get the spark-shell prompt with the sc context available, run sparkwordcount.scala on it:
bash# spark-shell --master yarn-client
04-22-2016
02:11 PM
1 Kudo
@AKILA VEL Though there are many ways to do this, you can use the sbt tool to build your application jar. Below is a good example doc on how to build a jar and run it on Spark: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/spark-first-app.html
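As a starting point, a minimal build.sbt might look like this (a sketch; the Scala and Spark versions below are assumptions and should match your cluster):

name := "SparkWordCount"

version := "1.0"

scalaVersion := "2.10.5"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1" % "provided"

Then run "sbt package" and pass the resulting jar to spark-submit.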