Member since: 07-29-2013 | Posts: 366 | Kudos Received: 69 | Solutions: 71
My Accepted Solutions

Title | Views | Posted
---|---|---
 | 4998 | 03-09-2016 01:21 AM
 | 4260 | 03-07-2016 01:52 AM
 | 13382 | 02-29-2016 04:40 AM
 | 3973 | 02-22-2016 03:08 PM
 | 4966 | 01-19-2016 02:13 PM
06-30-2014
06:43 AM
Can you share the exact command you are running? The link you supplied goes through a redirector we can't access, and the important part got cut off. Can you also clarify which page of the installation guide you are looking at?
06-18-2014
11:14 PM
master is the host:port where the Spark master is running. That depends on your cluster configuration, and I don't know your machine name, but the default port is 18080. masterHostname does not appear to be used in your code. sparkHome may not need to be set, but if it does, it refers to the /opt/parcels/.../lib/spark directory where CDH is installed.
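As a minimal sketch of wiring these values together, assuming a master host named "master-host", an app jar at "target/myapp.jar", and a standard parcel layout (all placeholders):

    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkMasterExample {
      public static void main(String[] args) {
        // "master-host" is a placeholder; substitute the host actually
        // running your Spark master. 18080 is the default port noted above.
        String master = "spark://master-host:18080";
        // Assumed parcel path; adjust to wherever CDH is installed
        String sparkHome = "/opt/cloudera/parcels/CDH/lib/spark";
        JavaSparkContext sc = new JavaSparkContext(
            master, "MyApp", sparkHome, new String[] { "target/myapp.jar" });
        System.out.println("Default parallelism: " + sc.defaultParallelism());
        sc.stop();
      }
    }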
05-28-2014
04:57 AM
The proximate problem is that some host name is configured as "user", which does not sound like a host name. I would first look through all the host-related settings in your Hadoop configuration to see where "user" appears and whether any of them should be something else.
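As a rough sketch of one way to scan the resolved configuration for that value (it will also match harmless entries such as paths containing "user", so treat the output as leads, not answers):

    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    public class FindUserInConf {
      public static void main(String[] args) {
        // Loads core-site.xml etc. from the classpath and prints any
        // property whose value contains "user"
        Configuration conf = new Configuration();
        for (Map.Entry<String, String> entry : conf) {
          if (entry.getValue().contains("user")) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
          }
        }
      }
    }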
05-21-2014
07:46 AM
1 Kudo
OK, is the file nonempty? Then I think the data is just not in the format you expect. From skimming the code, it looks like the output is Text + ClusterWritable, not IntWritable + WeightedPropertyVectorWritable. You are trying to print the cluster centroids, right?
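If so, a minimal sketch of reading the file with those key/value types might look like this; the path is a placeholder for wherever your final clusters were written:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.mahout.clustering.iterator.ClusterWritable;

    public class PrintCentroids {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder path; point at your actual clusters-*-final output
        Path path = new Path("output/clusters-2-final/part-r-00000");
        SequenceFile.Reader reader =
            new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
        Text key = new Text();
        ClusterWritable value = new ClusterWritable();
        while (reader.next(key, value)) {
          // getValue() returns the Cluster; its center is the centroid
          System.out.println(key + " -> " + value.getValue().getCenter());
        }
        reader.close();
      }
    }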
05-21-2014
07:06 AM
1 Kudo
I think you have found the right file then, but it is saying that it did not generate any cluster centers. Maybe the data is too small. Exactly what that means might be a better question for the Mahout mailing list.
05-21-2014
06:13 AM
1 Kudo
Yeah, that's why I'm confused here. Isn't it the part-r-00000 file that likely has the data? The format is a binary serialization; you can't open it as if it were a text file.
05-21-2014
04:27 AM
1 Kudo
Do the files have data in them? I would double-check that they are not 0-length, though I doubt that is the issue. Which directory do you find the files in? I suspect the file name is like "part-m-00000", but your code appears to be listing "part-m-0".
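As a quick sanity check, a sketch along these lines lists the names and sizes of the output files; the directory is a placeholder for your job's actual output:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListOutputFiles {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Placeholder directory; use the path your job wrote to
        for (FileStatus status : fs.listStatus(new Path("output"))) {
          System.out.println(status.getPath().getName()
              + " : " + status.getLen() + " bytes");
        }
      }
    }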
04-28-2014
10:16 PM
(If you encounter an error, you should state what the error is, but I assume it is "artifact not found" here.) Add this to your pom.xml file at the end so that Maven knows to look in the Cloudera repo:

    <repositories>
      <repository>
        <id>cloudera.repo</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        <name>Cloudera Repositories</name>
        <releases>
          <enabled>true</enabled>
        </releases>
        <snapshots>
          <enabled>false</enabled>
        </snapshots>
      </repository>
    </repositories>
04-28-2014
10:05 PM
The example you see right on the project page is a good simple way to get started: https://github.com/cloudera/oryx#collaborative-filtering--recommender-example
04-25-2014
06:46 AM
If I'm not mistaken, you have built your own application that embeds code from Mahout 0.8. That is not compatible with Hadoop 2, and CDH 5 is based on Hadoop 2.3+. CDH 5 also includes a distribution of Mahout 0.8 that has been modified to work on Hadoop 2. To make this work, try depending on the Mahout 0.8 distribution from CDH 5 in your project instead, since it contains the necessary modifications. For example, if building with Maven, instead of specifying version 0.8 for the artifact org.apache.mahout:mahout-core, you would specify 0.8-cdh5.0.0. You would also need to reference the Cloudera repo in your build. You could also recompile Mahout locally for Hadoop 2, but I think that is more trouble.
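For example, the dependency in your pom.xml might look like the following sketch; 0.8-cdh5.0.0 is the version mentioned above, so adjust it to match your CDH release:

    <dependency>
      <groupId>org.apache.mahout</groupId>
      <artifactId>mahout-core</artifactId>
      <version>0.8-cdh5.0.0</version>
    </dependency>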