Member since: 07-29-2013
Posts: 366
Kudos Received: 69
Solutions: 71
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5089 | 03-09-2016 01:21 AM
 | 4304 | 03-07-2016 01:52 AM
 | 13548 | 02-29-2016 04:40 AM
 | 4024 | 02-22-2016 03:08 PM
 | 5022 | 01-19-2016 02:13 PM
08-05-2014
03:42 AM
2 Kudos
Why? In a Kerberized environment, you need to integrate with Kerberos to access resources, and the Spark project hasn't implemented anything like that itself. YARN does work with Kerberos, so Spark can run in a Kerberized environment by leveraging YARN. Maybe part of the answer is: why would it be necessary, if it already works through YARN?
08-04-2014
12:16 PM
Thanks Sean. I'm currently computing unique visitors per page, running a count distinct using Spark SQL. We also run non-Spark jobs on the cluster, so if we allocate the 2 GB I'm assuming we can't run any other jobs simultaneously. I'm also looking into how to set the storage levels in CM.
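For reference, the count distinct described above boils down to grouping page views by page and counting distinct visitor IDs. A minimal plain-Python sketch of that logic (the sample data and field names are invented for illustration; the actual job expresses the same thing as a Spark SQL COUNT(DISTINCT ...) query):

```python
# Sketch of the unique-visitors-per-page aggregation described above.
# Sample events and field names are made up; the real job runs the
# equivalent logic as a Spark SQL count-distinct over the cluster.
from collections import defaultdict

def unique_visitors_per_page(events):
    """events: iterable of (page, visitor_id) pairs."""
    visitors = defaultdict(set)          # page -> set of distinct visitor IDs
    for page, visitor_id in events:
        visitors[page].add(visitor_id)   # sets deduplicate repeat visits
    return {page: len(ids) for page, ids in visitors.items()}

events = [
    ("/home", "u1"), ("/home", "u2"), ("/home", "u1"),
    ("/pricing", "u2"),
]
print(unique_visitors_per_page(events))  # {'/home': 2, '/pricing': 1}
```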
08-03-2014
02:54 AM
1 Kudo
The method is "textFile", not "textfile": https://spark.apache.org/docs/1.0.0/api/scala/index.html#org.apache.spark.SparkContext
07-29-2014
03:59 AM
1 Kudo
Bad news: not directly. The design goal here is real-time scoring. You could write a process that queries an embedded Serving Layer, or calls one via HTTP. It's a bit more overhead, but it certainly works. The bulk-recommend function is a holdover from the older code base, really, and there wasn't an equivalent for classification. Good news: since the output is a PMML model, and libraries like Openscoring exist, you could fairly easily wire up a Mapper that loads a model and scores data.
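As a rough illustration of the "call the Serving Layer over HTTP" option, here is a minimal Python sketch. The host, port, /recommend/&lt;user&gt; path, and howMany parameter are assumptions for illustration, not a documented API; check your Serving Layer's actual endpoints before using anything like this.

```python
# Sketch: querying a Serving Layer over HTTP for recommendations.
# The endpoint path and query parameter below are HYPOTHETICAL,
# chosen only to show the shape of such a caller.
from urllib.parse import quote
from urllib.request import urlopen  # used only in the commented call below

def recommend_url(host: str, port: int, user_id: str, how_many: int = 10) -> str:
    """Build the request URL for a hypothetical recommend endpoint."""
    return f"http://{host}:{port}/recommend/{quote(user_id)}?howMany={how_many}"

url = recommend_url("localhost", 8080, "user 1", how_many=5)
print(url)
# A real caller would then do something like:
# with urlopen(url) as resp:
#     print(resp.read().decode())
```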
07-03-2014
03:56 AM
Thanks maestro.
06-30-2014
07:44 AM
Perfect. Hadoop home was pointing to the wrong place, and that was what was being picked up. I am able to submit applications just fine now. Thanks.
05-21-2014
07:53 AM
Thank you for your effort. No, this file is not empty: you can check it here: part-r-00000. I would like to see all the vectors, with the cluster each one belongs to; it would also be nice to see the centers of the clusters. I changed

    IntWritable key = new IntWritable();
    WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();

to this:

    Text key = new Text();
    ClusterWritable value = new ClusterWritable();

I did not get any exception, but the output was:

    org.apache.mahout.clustering.iterator.ClusterWritable@572c4a12 belongs to cluster C-0
    org.apache.mahout.clustering.iterator.ClusterWritable@572c4a12 belongs to cluster C-1

EDIT: I changed value.toString() to value.getValue(), and now I get this output:

    C-0: {0:0.07,1:0.9499999999999998} belongs to cluster C-0
    C-1: {0:12.25,1:12.9} belongs to cluster C-1

Thank you very much!
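For what it's worth, the {index:value,...} strings in that output can be parsed back into index/value pairs if you need them downstream. A minimal Python sketch, assuming the format stays exactly as shown above (real Mahout vector output can vary, e.g. with named properties, so treat this as an illustration only):

```python
# Parse a vector string like "{0:0.07,1:0.95}" into a dict mapping
# index -> value. ASSUMES the exact comma-separated index:value format
# shown in the post; other Mahout output formats are not handled.
def parse_vector(s: str) -> dict:
    s = s.strip().strip("{}")
    if not s:
        return {}
    result = {}
    for pair in s.split(","):
        idx, val = pair.split(":")
        result[int(idx)] = float(val)
    return result

print(parse_vector("{0:0.07,1:0.9499999999999998}"))
```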
04-28-2014
10:47 PM
You are so kind. Thanks for your help.