Support Questions


Which storage format is optimal for training machine learning models and running iterative processes?

New Contributor

Assuming a data pipeline will be loading Hive tables as Spark DataFrames, which storage format is optimal for training machine learning models and running iterative processes: row-based files (text, Avro) or column-based files (ORC, Parquet)?

1 ACCEPTED SOLUTION

Master Guru

ORC and Parquet are optimized for OLAP queries, since those typically read only a subset of the columns from the source tables. Avro and other row-based formats perform better when you have to read the entire record, which is the usual access pattern for ML training and iterative processing. Converting the data from one format to another (the multi-Hive-table approach) is a common practice for determining which format performs best for your use case. My recommendation is to performance-test all three types; there is no one-size-fits-all answer.
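
For anyone who wants a starting point, below is a rough PySpark sketch of that multi-table test. The table name source_table and the scratch tables it creates are assumptions for illustration; substitute your own pipeline's tables, and note that Avro support may require the external spark-avro package on some Spark versions.

```python
# A rough sketch of the multi-table performance test described above.
# Assumptions (placeholders, not from the original post): a Hive table
# named "source_table", and permission to create copies of it.
import time

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("storage-format-benchmark")
    .enableHiveSupport()   # needed to read/write Hive tables
    .getOrCreate()
)

df = spark.table("source_table")

# Materialize the same data once per candidate format.
formats = ["parquet", "orc", "avro"]
for fmt in formats:
    df.write.mode("overwrite").format(fmt).saveAsTable(f"source_table_{fmt}")

# Time a full-record scan, the access pattern typical of ML training
# and iterative jobs (as opposed to a few-column OLAP query).
for fmt in formats:
    start = time.time()
    # .rdd.count() forces every row to be fully deserialized, so the
    # timing is not satisfied from file metadata alone.
    spark.table(f"source_table_{fmt}").rdd.count()
    print(f"{fmt}: full scan took {time.time() - start:.1f}s")
```

The .rdd.count() call is deliberate: a plain DataFrame count() on Parquet or ORC can be answered largely from file metadata, which would understate the cost of actually reading full records.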
