I'll try the local build on one of the datanodes, that shouldn't be a problem for what I'm testing.
It's the full dataset, but the original data was actually userID/prodDesc/weight ... I was informed by our security team that I could send the data if I changed the prodDesc to a prodID, since it's pretty meaningless without lookup tables. So the Item variable went from a string when I was testing it to a numeric; perhaps that's why I didn't see the same error.
So I'm wondering if the problem only appears when the Item variable is a string ... an easy way to test it would be to hash the prodID, which would give an alphanumeric string similar in format to the original prodDesc.
I can hash the data and re-upload it, or you can run this little bit of python:
#!/usr/bin/python
# Hash the prodID (second column) of each row into an uppercase
# alphanumeric string, leaving userID and weight untouched.
import csv
import hashlib

with open("cloudera_data.csv", "rb") as infile, \
     open("output.csv", "wb") as outfile:
    reader = csv.reader(infile, delimiter=",")
    writer = csv.writer(outfile, delimiter=",")
    for row in reader:
        row[1] = hashlib.sha1(row[1]).hexdigest().upper()
        writer.writerow(row)
Yes, that explains why you didn't see the same initial problem. Well, good that it was fixed anyhow.
Text vs numeric shouldn't matter at all. Underneath, they are both hashed. It looks like the amount of data and its nature are the same if it's just that the IDs were hashed. I can't imagine collisions are an issue.
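In sketch form, here's why both kinds of ID end up in the same representation. This is purely illustrative, not Oryx's actual internal mapping: any ID, numeric or text, is reduced to its string form and hashed to a fixed-width value, so the original type is irrelevant.

```python
import hashlib

def id_to_long(raw_id):
    # Map any ID, numeric or text, to a 64-bit value by hashing its
    # string form. Illustrative only; not Oryx's actual scheme.
    digest = hashlib.md5(str(raw_id).encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")

# A numeric ID and its string form take exactly the same path:
print(id_to_long(12345) == id_to_long("12345"))  # True
```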
I tried converting these 1-1 to an ID that is alphanumeric, and it worked for me.
You are using CDH 4.x vs 5, right? That could be a difference, but I still don't quite expect a problem of this form.
Anything else of interest in the logs? You're welcome to send me all of it.
You're starting from scratch when you run the test?
I'm using CDH5 Beta 1, with Oryx compiled against the hadoop22 profile. Speaking of which, you may want to update the build documentation on GitHub: it says to use the profile name "cdh5", but the pom.xml actually names the profile hadoop22.
I'll try running the test again tonight and see how it works out. If I see anything else, I'll send you the log output, but I'm hoping for the best!
And yes, every test is started from scratch, just in case!
I'm able to replicate this issue as well.
I've run through various combinations of lambda/feature pairs. No luck.
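For reference, the sweep I ran was along these lines. The values here are illustrative rather than the exact ones I used, and the model.features/model.lambda names are my reading of the config, so treat both as assumptions:

```python
from itertools import product

# Small grid of (features, lambda) pairs to retry the build with.
features = [10, 25, 50]
lambdas = [1e-4, 1e-3, 1e-2]
grid = list(product(features, lambdas))
for f, lam in grid:
    print("model.features=%d, model.lambda=%g" % (f, lam))
```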
I'm running the latest CDH4 binaries.
Sean, would you like my data set?
Good news: I recompiled and gave it a whirl with 10 features and a lambda of 0.0001 as a first pass. Nothing abnormal or unusual jumps out at me in the output, so I believe the commits you made did the trick.
Generation 0 was successfully built, and it has passed the X/Y sufficient rank test. At first glance, the recommendations seem valid, if slightly skewed toward the most popular items (which is expected). I obviously need to work on a rescorer to minimize the over-represented items. Next on the list is to see if the model can be fit better, so I'll need to come up with an automated test.
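The rescoring idea, in rough form: damp each candidate's score by the log of its popularity so over-represented items fall down the ranking. A minimal sketch in Python for brevity (the real thing would plug into the serving side, and the log-damping formula is just one choice of penalty, not anything Oryx prescribes):

```python
import math

def rescore(recs, popularity):
    # recs: list of (item_id, score); popularity: item_id -> interaction count.
    # Divide each score by log(popularity) so very popular items are demoted.
    rescored = ((item, score / math.log(popularity.get(item, 1) + math.e))
                for item, score in recs)
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```

With a penalty like this, a niche item scored 0.9 can outrank a hugely popular item scored 1.0.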
Have you built the Optimizer into the Oryx source code, by chance, or is it just in Myrrix?