07-31-2013 09:07 PM
I'm trying to use my local installation of Cloudera Quickstart VM to do a small mapreduce job in Python.
My test script works when I explicitly add python to the script:
# cat inputfile.txt | python mymapper.py | sort | python myreducer.py
I need to add python to the path in the vm. What's the best way to do this so it finds python from the command line and in Hadoop? I haven't been successful trying to find and modify the right files in the Cloudera VM.
(I was able to run this on AWS. I tried from the hadoop command line also:
hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/h
-input inputfile.txt \
-output output010 \
-mapper mymapper.py \
-file mymapper.py \
-combiner myreducer.py \
-reducer myreducer.py \
... and it fails)
Any help to get this right would be appreciated.
08-01-2013 10:07 AM
Try inserting the header "#!/usr/bin/env python" as the first line in your scripts. This signals to the operating system that your scripts are executable through Python. If you do this in your local example (and do "chmod +x *.py"), it works without having to add python to the script:
cat inputfile.txt | ./mymapper.py | sort | ./myreducer.py
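For example, a minimal mapper built that way would look like the sketch below (the word-count logic is just an illustrative stand-in; your real mymapper.py logic will differ):

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming mapper sketch. The shebang line above is
# what lets the OS run the script without an explicit "python" prefix.
import sys

def map_lines(lines):
    # Emit "word<TAB>1" for each word -- the usual streaming
    # key/value convention (tab-separated, one pair per line).
    out = []
    for line in lines:
        for word in line.split():
            out.append("%s\t1" % word)
    return out

if __name__ == "__main__":
    for record in map_lines(sys.stdin):
        print(record)
```

After `chmod +x`, the script runs directly as `./mymapper.py` in the pipeline above.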
Copy the modified files back into HDFS and MapReduce will now be able to execute your mappers and reducers.
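For reference, the full streaming invocation would look roughly like this (a sketch only: the jar name after "streaming/" was cut off in the post above, so check the exact filename on your VM; also note that the reducer script needs its own -file flag, or it never gets shipped to the task nodes):

```shell
# Check the actual jar name first:
#   ls /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/
hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-*.jar \
    -input inputfile.txt \
    -output output010 \
    -mapper mymapper.py \
    -reducer myreducer.py \
    -file mymapper.py \
    -file myreducer.py
```

(The -combiner flag is omitted here to keep the sketch minimal; if you use one, ship that script with -file as well.)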
08-01-2013 01:43 PM
Thanks. I rebooted, reconstructed new files, and again tried both #!/usr/bin/env python and #!/usr/bin/python, and changed permissions to add execute (+x).
I'm making it through the file, mymapper, and sort, but I'm getting "no such file or directory" when I pipe it to ./myreducer.py
But when I explicitly add "python" as the executable it works.
I'm guessing this is some obvious newbie issue (new to linux) but I should have this in the bag by now.
08-01-2013 09:08 PM
One other thought, which may be off track (since I can't see the command-line data Sean mentioned, I'm just guessing): check the permissions on the reducer.py script. For it to accept the pipe and run against the sorted data as input, it must be executable. You can make sure of that by issuing a "chmod 755 reducer.py" on the file.
08-01-2013 09:22 PM
I renamed my mapper and reducer to jpm.py and jpr.py to make sure my spelling is right. The reducer stage of the "cat" pipeline doesn't work unless it's preceded by "python". With that, it completes successfully.
In hadoop map-reduce, from the command line, I've gotten the process to complete, but it yields no results. I reduced the reducer functionality to just pass on what comes from the mapper. It completes, but doesn't yield any results in the output (file size = 0). I removed the reducer completely and I get what I expect from the mapper.
I'd like to progress to the gui's and get a taste of pig and hive in cloudera by the end of the month. I think I'm going to try all over again with a fresh vm.
08-01-2013 09:30 PM
Thanks - I did this through the properties screen of the file browser, but I tried it again with the command you supplied. Still no luck - the process completes but outputs nothing, even with a plain vanilla reducer (echoing the mapper output).
08-01-2013 09:52 PM
Odd. I take it you're doing something in your reducer that's smart about reading the "standard input" that's being piped to it? Something like:
for line in sys.stdin:
Also, as Sean indicated, if we could get pastes of your source code and also the actual command-line output/errors you are seeing, that would round out the picture for us.
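For instance, a bare pass-through reducer that just echoes its standard input would look like this sketch (identity logic only, useful for isolating whether the problem is in the mapper or the reducer):

```python
#!/usr/bin/env python
# Pass-through ("identity") reducer sketch for Hadoop Streaming:
# echoes whatever the sort phase feeds it, unchanged.
import sys

def reduce_passthrough(lines):
    # Strip only the trailing newline; the tab-separated
    # key/value content passes through as-is.
    return [line.rstrip("\n") for line in lines]

if __name__ == "__main__":
    for line in reduce_passthrough(sys.stdin):
        print(line)
```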
08-06-2013 05:36 PM
It took me a while to figure out. I just got it a minute ago.
I was running scripts that I developed in Windows (where end-of-line = CR+LF). I needed to strip out the CR so the shebang line on Linux wouldn't be asking for an interpreter named "python\r" (python plus a trailing carriage return) instead of plain "python" via /usr/bin/env.
Now I can move on.
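In case it helps anyone hitting the same wall, here is a small Python sketch of the CRLF fix (fix_line_endings is just an illustrative name; the dos2unix utility or sed 's/\r$//' does the same job on the command line):

```python
# Strip Windows line endings so the shebang line reads
# "#!/usr/bin/env python" rather than "...python\r".
def fix_line_endings(text):
    # Turn Windows CRLF (and any stray lone CR) into plain LF.
    return text.replace("\r\n", "\n").replace("\r", "\n")
```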