
Best way to analyze and transform big data in Hadoop

Rising Star
I'm planning to analyze some data using Hadoop. I have 200 text files to analyze. My current plan is:
  • Use Spark to load the data into HDFS (or are Pig or Sqoop better for this?)
  • Create the structure in Hive by creating the tables (this first data model will have 200 tables, one table per text file)
  • Load all the files into Hive
  • Do some data cleansing with Spark (reading from Hive) and try to reduce the amount of data
  • Create the new data model in Hive (now with a smaller amount of data after the cleansing in the previous step)
  • Use an analytical tool (SAS, Tableau, etc.) to do the analytical operations (feeding it all the data returned by the previous step)

I suspect this is not the best way to analyze big data. My goal is to end up, at the end of the Hadoop process, with a smaller data set that I can successfully integrate with SAS, for example (a rough sketch of what I have in mind is below). What is your opinion? Many thanks!
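Roughly, this is what I had in mind for the loading and cleansing steps. It is just a sketch, assuming PySpark with Hive support enabled; the paths, table names and cleansing rules are placeholders, not my real data:

    from pyspark.sql import SparkSession

    # Assumes a cluster where Spark can read and write Hive tables.
    spark = (SparkSession.builder
             .appName("load-and-clean")
             .enableHiveSupport()
             .getOrCreate())

    # Read one raw text file (already copied to HDFS, e.g. with "hdfs dfs -put")
    # and register it as a Hive table; path and table names are placeholders.
    raw = spark.read.text("hdfs:///data/raw/2015-08-01.txt")
    raw.write.mode("overwrite").saveAsTable("raw_comms_20150801")

    # Cleanse with Spark reading from Hive, then save the reduced model back to Hive.
    cleaned = (spark.table("raw_comms_20150801")
               .filter("value is not null and value != ''")
               .dropDuplicates())
    cleaned.write.mode("overwrite").saveAsTable("clean_comms_20150801")

The idea is that only the final, smaller table would be handed over to SAS or Tableau.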

1 ACCEPTED SOLUTION

Master Guru

I see a few different issues here:

a) Split up a line with multiple records

If you have multiple communications per line you will need some preprocessing. Hive provides maps and arrays, but they are hard to work with in normal SQL.

There are tons of ways to do this, but my suggestion would be to write a Pig UDF that splits one line into multiple records, potentially adding a column with the line information in case you need to group the records back together somehow.

http://stackoverflow.com/questions/11287362/splitting-a-tuple-into-multiple-tuples-in-pig
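If the preprocessing ends up in Spark anyway, a flatMap can do the same splitting as the Pig UDF. A minimal sketch; the fixed-width record layout, column names and path below are assumptions, not the actual file format:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("split-lines").getOrCreate()

    # ASSUMPTION: each communication record is exactly 4 whitespace-separated tokens
    # (comm_id, time, department, code); adjust to the real layout of the files.
    def split_line(row):
        tokens = row.value.split()
        out = []
        for i in range(0, len(tokens) - 3, 4):
            comm_id, time, dept, code = tokens[i:i + 4]
            out.append((int(comm_id), time, int(dept), code))
        return out

    raw = spark.read.text("hdfs:///data/raw/2015-08-01.txt")   # placeholder path
    exploded = raw.rdd.flatMap(split_line).toDF(["comm_id", "time", "department", "code"])
    exploded.show(5)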

b) Get date from Filename

There are some ways to get at the filename in MapReduce, but it's awkward; MapReduce by design abstracts filenames away. You have a few options there:

1) Use a small Python/Java/shell preprocessing script OUTSIDE Hadoop that adds a field with the date (taken from the filename) to each row of each file. Easy, but not that scalable.

2) Write your own RecordReader.

3) Pig seems to provide an option called tagsource that can do the same.

http://stackoverflow.com/questions/9751480/how-can-i-incorporate-the-current-input-filename-into-my-...
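If you do the cleansing in Spark, there is also a built-in way: input_file_name() gives you the source path of each row, and the date can be pulled out of it with a regex. A small sketch; the filename pattern is an assumption:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import input_file_name, regexp_extract, to_date

    spark = SparkSession.builder.appName("tag-filename").getOrCreate()

    # ASSUMPTION: filenames look like .../comms_2015-08-01.txt; adjust the regex.
    raw = spark.read.text("hdfs:///data/raw/*.txt")
    with_date = (raw
                 .withColumn("source_file", input_file_name())
                 .withColumn("file_date",
                             to_date(regexp_extract("source_file", r"(\d{4}-\d{2}-\d{2})", 1))))
    with_date.select("source_file", "file_date").show(5, truncate=False)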

c) Do Graph analysis

You can use Hive/Pig/Spark for the preprocessing, and Spark provides a nice graph API (GraphX). There are tons of examples out there.

http://spark.apache.org/graphx/
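From Python the usual route is the GraphFrames package (a DataFrame-based graph API on top of Spark) rather than GraphX directly; it is a separate package you have to add, e.g. via --packages graphframes:graphframes:<version matching your Spark>. A minimal sketch, where the Hive table and column names are made up for illustration:

    from pyspark.sql import SparkSession
    from graphframes import GraphFrame   # separate package, not bundled with Spark

    spark = SparkSession.builder.appName("graph-analysis").enableHiveSupport().getOrCreate()

    # ASSUMPTION: a cleansed Hive table with source/destination department columns.
    comms = spark.table("clean_comms")
    edges = comms.selectExpr("src_department as src",
                             "dst_department as dst",
                             "comm_id as relationship")
    vertices = edges.selectExpr("src as id").union(edges.selectExpr("dst as id")).distinct()

    g = GraphFrame(vertices, edges)
    g.degrees.show()   # how connected each node is
    g.pageRank(resetProbability=0.15, maxIter=10).vertices.select("id", "pagerank").show()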

Good luck.


4 REPLIES 4

Contributor

Can you say a little bit more about the text files? Are they all the same kind of data and format, or different? How big are the text files in terms of GB and number of rows/columns?

Rising Star
Hi Paul, thanks for your attention. My goal is to do some social analysis (find patterns, etc.), which is why I want SAS as well. The subject is the relationships within a company. I have the emails, telephone calls, etc. What I have: 5 months of collected data (Aug, Sep, Oct, Nov and Dec)
  • Each text file corresponds to one day
  • Each type of communication has a specific ID (for example, email has ID 1, phone has ID 2, etc.)
  • Each line corresponds to an aggregation of multiple communications (grouped by department and by 30-minute interval)
  • The attributes are:
    • Communication ID
    • Time
    • Department
    • Email Code
    • Phone Code
    • Phone Duration

One possible line of the text file would be: 1 10:30:87 3 12 1 10:30:22 1 10:45:21 3 12 2 10:30:22 2 12 2 10:30:22 1 12 10:30:22. So, as you can see, I can have multiple Communication IDs per line (that's one of my doubts about how to create the Hive tables). The total size of the text files is 6GB. Many thanks for your help, Paul 🙂 Hope you can understand the problem. Thanks!


Master Guru

NiFi will do that very easily; then you can trigger some Spark jobs to do the final processing.