Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

How to process a word count on zipped files in spark

Rising Star

I am working with an AWS dataset (the Enron email dataset). I want to do a word count on all of the emails and find the average. The files are zipped (please see the attached screenshot, screen-shot-2016-10-07-at-090457.png, which shows what the actual dataset looks like). Could someone help me, based on the screenshot, with how I can do the word count processing using Spark (Scala preferably)? I would really appreciate it.

Note: The actual dataset size is 210 GB. I am planning to spin up an EMR cluster and then perform the processing there.
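Since the original reply is not preserved in this archive, here is a minimal sketch of one common approach. If the files are ZIP archives (which `sc.textFile` cannot read directly, unlike `.gz` files), you can load them with `binaryFiles` and decompress each archive with `java.util.zip.ZipInputStream`. The S3 path and application name below are placeholders, not values from the original thread:

```scala
import java.io.ByteArrayOutputStream
import java.util.zip.ZipInputStream
import org.apache.spark.sql.SparkSession

object EnronWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("EnronWordCount").getOrCreate()
    val sc = spark.sparkContext

    // Placeholder path: point this at the bucket holding the zipped emails.
    val emails = sc.binaryFiles("s3://your-bucket/enron/*.zip")
      .flatMap { case (_, portableStream) =>
        val zis = new ZipInputStream(portableStream.open())
        // Iterate over every entry (email file) in the archive.
        Iterator.continually(zis.getNextEntry)
          .takeWhile(_ != null)
          .filterNot(_.isDirectory)
          .map { _ =>
            // Read the bytes of the current entry; zis.read stops at the
            // entry boundary, so each iteration yields one email's text.
            val buf = new Array[Byte](4096)
            val out = new ByteArrayOutputStream()
            var n = zis.read(buf)
            while (n > 0) { out.write(buf, 0, n); n = zis.read(buf) }
            new String(out.toByteArray, "UTF-8")
          }
          .toList // materialize before the underlying stream is closed
      }

    // Word count per email, then the average across all emails.
    val counts = emails.map(_.split("\\s+").count(_.nonEmpty))
    counts.cache()
    val avgWords = counts.sum / counts.count()
    println(s"Average words per email: $avgWords")

    spark.stop()
  }
}
```

Two caveats worth noting: ZIP archives are not splittable, so each archive is decompressed by a single task (many medium-sized archives parallelize better than a few huge ones), and if the data is actually gzip-compressed text rather than ZIP archives, `sc.textFile("s3://your-bucket/enron/*.gz")` will decompress it transparently via Hadoop's codecs.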

1 ACCEPTED SOLUTION
1 REPLY