Member since: 03-06-2016
49 Posts
38 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5736 | 03-10-2016 01:55 PM
04-20-2016
11:36 AM
1 Kudo
Is it possible to submit multiple jobs with spark-submit? I have created multiple jobs, and I have to submit them in cluster deploy mode. Is it possible to assign each job to specific cores in the cluster — for example, job 1 to core 1, job 2 to cores 2 and 3, and so on? In other words, can jobs be scheduled onto cores chosen manually by the user?
Labels:
- Apache Spark
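Spark has no documented way to pin a job to specific physical core numbers; the scheduler only lets you cap how many cores each application may use, and the OS decides which physical cores actually run the threads. A hedged sketch of the closest standard knobs in standalone cluster deploy mode (the master URL and jar names below are placeholders, not from the post):

```shell
# Cap each application's total core count on a standalone cluster.
# spark://master:7077 and the jar paths are placeholder values.
./spark-submit --class "Job1" --master spark://master:7077 \
  --deploy-mode cluster --total-executor-cores 1 job1.jar

./spark-submit --class "Job2" --master spark://master:7077 \
  --deploy-mode cluster --total-executor-cores 2 job2.jar
```

With these caps, job 1 is limited to one core's worth of parallelism and job 2 to two, which approximates the desired split without true core pinning.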
04-13-2016
02:14 AM
1 Kudo
I have a txt file, say elec.txt, in HDFS (Hadoop Distributed File System). I need to update this txt file using Hadoop MapReduce and a Hive query. Is it possible to do so? Is it possible to write a MapReduce program in Hadoop that implements a Hive query?
Labels:
- Apache Hadoop
- Apache Hive
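Files in HDFS are write-once, so a classic (pre-ACID) Hive query cannot edit elec.txt in place; the usual pattern is to rewrite the data with INSERT OVERWRITE. A rough sketch, assuming a table already mapped onto the file — the table and column names here are hypothetical, for illustration only:

```sql
-- Rewrite the table's backing files with updated rows.
-- 'elec', 'meter_id', and 'reading' are made-up names.
INSERT OVERWRITE TABLE elec
SELECT meter_id,
       CASE WHEN meter_id = 42 THEN 999 ELSE reading END AS reading
FROM elec;
```

As for the second question: Hive itself compiles each query into MapReduce jobs, so in that sense every Hive query is already executed as a MapReduce program.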
04-01-2016
05:11 PM
1 Kudo
I am trying to submit a job packaged as a jar file in target/main/scala. I submitted it from the spark_home/bin directory with the following command:
./spark-submit --class "SimpleApp" --master local[4] /proj/target/main/scala/SimpleProj.jar
All goes well, except for the following error:
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at com.databricks.spark.csv.util.CompressionCodecs$.<init>(CompressionCodecs.scala:29)
at com.databricks.spark.csv.util.CompressionCodecs$.<clinit>(CompressionCodecs.scala)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:189)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
at projupq$.main(projupq.scala:35)
at projupq.main(projupq.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I am unable to resolve this error; any help would be appreciated.
Thanks,
Sridhar
Labels:
- Apache Spark
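A NoSuchMethodError on scala.Predef$.$conforms is the classic signature of a Scala binary-version mismatch: the spark-csv jar was compiled against a different Scala minor version than the one the Spark build is running on. The usual fix is to keep the Scala suffix (_2.10 vs _2.11) of every dependency consistent with the cluster's Spark build. A hedged build.sbt sketch — the version numbers here are illustrative assumptions, not taken from the post:

```scala
// build.sbt — keep one Scala version across Spark and all dependencies;
// the %% operator appends the matching _2.10 / _2.11 suffix automatically.
scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "1.6.1" % "provided",
  "com.databricks"   %% "spark-csv"  % "1.3.0"
)
```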
03-31-2016
08:27 AM
@Vadim It works fine if I execute the command below, keeping the version at 1.3.0 rather than 1.4.0:
spark-shell --packages com.databricks:spark-csv_2.10:1.3.0
03-31-2016
12:38 AM
@Vadim I am getting the following error after executing this statement:
agedPeopleDF.select("name", "age").write.format("com.databricks.spark.csv").mode(SaveMode.Overwrite).save("agedPeople")
Output:
java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at com.databricks.spark.csv.util.CompressionCodecs$.<init>(CompressionCodecs.scala:29)
at com.databricks.spark.csv.util.CompressionCodecs$.<clinit>(CompressionCodecs.scala)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:189)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139) ..............
03-30-2016
01:29 PM
Yup, this code runs fine, but after executing the lines above, /user/spark/people.txt still shows Justin's age as 19 ("justin, 19"). The value is not modified.
03-30-2016
01:09 PM
@azeltov Thank you, sir! I hope the exact program above can be written and executed in Scala; I will try doing so.
03-30-2016
01:08 PM
@Vadim It is working fine, but the value is not changed in the people.txt file; it still remains 19. One more small doubt: will we be able to see how the job is distributed across the cluster using the Ambari UI? And does the Spark client come by default with the Spark bundle? How do I view the Spark client in a web UI, like localhost:port?
03-25-2016
03:20 PM
2 Kudos
I have a txt file with the following data:
Michael, 29
Andy, 30
Justin, 19
These are people's names along with their ages. I want to change Justin's age from 19 to 21. How can I change the value 19 in the spark-shell using a Spark SQL query? Which methods, such as map and reduce, should be incorporated to modify the value?
Thanks,
Sridhar
Labels:
- Apache Spark
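HDFS text files are immutable, so the usual approach is to transform the lines and write a new file rather than editing in place. The per-line update can be sketched in plain Scala; in spark-shell the same function would be passed to an RDD map before saveAsTextFile. The paths and the assumption that names are unique keys are mine, not from the post:

```scala
// Update one person's age in "name, age" lines.
// In spark-shell this would be roughly:
//   sc.textFile("people.txt").map(updateAge(_, "Justin", 21))
//     .saveAsTextFile("people_updated")
def updateAge(line: String, name: String, newAge: Int): String = {
  val Array(n, a) = line.split(",").map(_.trim)
  if (n == name) s"$n, $newAge" else s"$n, $a"
}

val people = Seq("Michael, 29", "Andy, 30", "Justin, 19")
val updated = people.map(updateAge(_, "Justin", 21))
// updated: Seq("Michael, 29", "Andy, 30", "Justin, 21")
```

Because the output goes to a new directory, the original people.txt is left untouched; to "replace" it you would delete the old path and rename the new one.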
03-25-2016
03:15 PM
The backend by default is Spark SQL; I will be executing Spark SQL queries in the spark-shell. I have a people.txt file, which holds names along with ages. I want to change the age of a particular person to some value. Is it possible to change a value in a txt file using a Spark SQL query? Is it possible to modify the value during map and reduce operations in Spark? Note: I do not have Hive installed.