- Member since: 04-05-2016
- Posts: 37
- Kudos Received: 8
- Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2847 | 07-30-2019 11:52 PM |
| | 5270 | 06-07-2019 01:01 AM |
| | 9163 | 04-14-2017 08:31 PM |
| | 5604 | 08-03-2016 12:52 AM |
| | 2892 | 06-22-2016 02:10 AM |
06-21-2016 11:35 PM

This looks strange. Your console output listed the lines below:

com.databricks#spark-avro_2.10 added as a dependency
org.apache.avro#avro-mapred added as a dependency

Can you try once with: --packages com.databricks:spark-avro_2.10:1.0.0,org.apache.avro:avro-mapred:1.6.3

I suspect a version compatibility issue between avro-mapred and spark-avro.
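As a sketch, the full launch command would look like the following (the exact artifact versions are from the suggestion above and may need adjusting; check the spark-avro README for the pair documented for your Spark release):

```shell
# Launch spark-shell with an explicitly pinned spark-avro and a matching
# avro-mapred, so a transitively resolved Avro version cannot conflict.
spark-shell \
  --packages com.databricks:spark-avro_2.10:1.0.0,org.apache.avro:avro-mapred:1.6.3
```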
06-15-2016 04:55 AM

Try starting spark-shell with the following packages: --packages com.databricks:spark-avro_2.10:2.0.1,org.apache.avro:avro-mapred:1.7.7
06-10-2016 03:46 AM

Which Python version are you using? You may want to refer to: http://www.cloudera.com/documentation/enterprise/5-5-x/topics/spark_ipython.html
05-30-2016 03:48 AM · 1 Kudo

See the Environment tab of the Job History UI and locate "spark.local.dir". Yes, that is the expected behaviour, as the JAR is required by the executors.
05-30-2016 01:05 AM

This looks weird. Can you confirm that http://192.168.88.28:55310/jars/phoenix-1.2.0-client.jar is still not present? Spark keeps all JARs specified by the --jars option in the job's temp directory on each executor node [1]. There must be some OS setting that is deleting the existing phoenix JAR from temp; when the Spark context cannot find the JAR at its usual location, it tries to download it again from the given URL. However, this should not happen while the temp directory is being actively accessed by the job or process. You can try bundling that JAR into your application JAR and then referring to it in spark-submit. I suspect you will again need 20-odd days to test this workaround 🙂
05-11-2016 03:47 AM · 1 Kudo

You are misusing the createPollingStream method. Pass 198.168.1.31 as the sink address, as below, and it should work:

FlumeUtils.createPollingStream(ssc, "198.168.1.31", 8020)
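For context, a minimal pull-based receiver sketch (the app name, batch interval, and output operation are illustrative assumptions; the Flume agent must be running a Spark sink on the given host and port):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

val conf = new SparkConf().setAppName("FlumePolling")
val ssc = new StreamingContext(conf, Seconds(10))

// Poll the Spark sink exposed by the Flume agent. The address here must be
// the Flume agent's SparkSink host/port, not the driver's own address.
val stream = FlumeUtils.createPollingStream(ssc, "198.168.1.31", 8020)

// Decode each Flume event body and print a sample per batch.
stream.map(e => new String(e.event.getBody.array())).print()

ssc.start()
ssc.awaitTermination()
```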
05-11-2016 01:52 AM

Add the below dependency as well:

groupId = org.apache.spark
artifactId = spark-streaming-flume_2.10
version = 1.6.1

See here for the pull-based configuration.
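In pom.xml form this would be (assuming a Maven build; for sbt the equivalent is `"org.apache.spark" %% "spark-streaming-flume" % "1.6.1"`):

```xml
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-flume_2.10</artifactId>
  <version>1.6.1</version>
</dependency>
```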
04-20-2016 03:29 AM · 1 Kudo

CM supports a single version of Spark on YARN and a single version for a Standalone installation (a single version is the common requirement). To support multiple versions of Spark, you need to install each extra version manually on a single node and copy the YARN and Hive config files into its conf directory. When you submit through that version's spark-submit, it will distribute the Spark core binaries to the YARN nodes that execute your code, so you don't need to install Spark on every YARN node.
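A rough sketch of those manual steps (the paths, Spark version, and class name are assumptions; adjust to your layout):

```shell
# Unpack the extra Spark version on one gateway node only.
tar -xzf spark-1.6.1-bin-hadoop2.6.tgz -C /opt

# Point it at the cluster's existing YARN and Hive client configs.
cp /etc/hadoop/conf/*.xml        /opt/spark-1.6.1-bin-hadoop2.6/conf/
cp /etc/hive/conf/hive-site.xml  /opt/spark-1.6.1-bin-hadoop2.6/conf/

# Submitting through this version's spark-submit ships its runtime to the
# YARN containers; no per-node Spark install is needed.
/opt/spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
  --master yarn --class com.example.MyApp app.jar
```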
04-19-2016 04:31 AM

Yes, YARN provides this flexibility; here you can find the detailed answer. In CDH there is a "Spark" service, which is meant for YARN, and a "Spark Standalone" service, which runs its own daemons on the specified nodes. YARN will do the work for you if you want to test multiple versions simultaneously: keep your multiple versions on the Gateway host, and launch Spark applications from there.
04-07-2016 04:26 AM · 1 Kudo

That's because no new files are arriving in the directory after the streaming application starts. You can use "cp" to drop files into the directory once the streaming application is running.
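A minimal sketch of the file-source behaviour (the directory path and batch interval are assumptions): textFileStream only processes files that appear in the monitored directory after the context starts, so files already present at launch are ignored.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("FileStreamDemo")
val ssc = new StreamingContext(conf, Seconds(5))

// Only files created in this directory AFTER ssc.start() are picked up;
// pre-existing files are skipped, which is why the stream looks empty.
val lines = ssc.textFileStream("hdfs:///tmp/stream-input")
lines.count().print()

ssc.start()
// While this runs, drop files in from another shell, e.g.:
//   hdfs dfs -cp /tmp/sample.txt /tmp/stream-input/
ssc.awaitTermination()
```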