How to process a word count on zipped files in Spark

Rising Star

I am working on an AWS dataset (the Enron email dataset). I just want to do a word count on all of the emails and find out the average. The files are zipped (please see the attached screen shot, screen-shot-2016-10-07-at-090457.png, which shows what the actual data set looks like). Could someone help me, based on the screen shot, with how I can do the word count processing using Spark (Scala preferably)? I would really appreciate it.

Note: the actual data size is 210 GB. I am planning to run an EMR cluster and then perform the processing.
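
A minimal sketch of one possible approach, assuming the emails are individually gzip-compressed text files on S3; the bucket path s3://my-bucket/enron/ is a placeholder, and the "average" here is total words divided by the number of email files. Spark/Hadoop decompress .gz input transparently; plain .zip archives are not handled automatically and would instead need sc.binaryFiles plus java.util.zip.ZipInputStream.

import org.apache.spark.{SparkConf, SparkContext}

object EnronWordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("EnronWordCount"))

    // Placeholder path: replace with the real S3 location of the data set.
    val input = "s3://my-bucket/enron/*.gz"

    // (file path, full file contents) pairs -- one element per email file.
    // Gzip is not splittable, so each file is read by a single task, which is
    // fine here because individual emails are small.
    val emails = sc.wholeTextFiles(input)

    // Number of words in each email (split on whitespace, drop empty tokens).
    val wordsPerEmail = emails.mapValues(body => body.split("""\s+""").count(_.nonEmpty))
    wordsPerEmail.cache()

    val emailCount = wordsPerEmail.count()
    val totalWords = wordsPerEmail.values.map(_.toLong).reduce(_ + _)

    println(s"emails = $emailCount, total words = $totalWords, " +
      s"average words per email = ${totalWords.toDouble / emailCount}")

    sc.stop()
  }
}

On EMR this could be packaged as a jar and launched with spark-submit; since each compressed file becomes its own task, the 210 GB should spread naturally across the cluster's executors.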
