Support Questions
Find answers, ask questions, and share your expertise

Which storage format is optimum for training machine learning models and running iterative processes?

Solved

New Contributor

Assuming a data pipeline will be loading Hive tables as Spark DataFrames, which storage format is optimal for training machine learning models and running iterative processes: row-based (text, Avro) or column-based (ORC, Parquet) files?

1 ACCEPTED SOLUTION

Re: Which storage format is optimum for training machine learning models and running iterative processes?

Super Guru

ORC and Parquet are optimized for OLAP queries, where only a subset of the columns from the source tables is read. Avro and other row-based formats perform better when you have to look at the entire record. Copying the data from one format to another (a multi-Hive-table approach) is a common practice for determining which format performs best for your use case. My recommendation is to performance-test all three types; there is no one-size-fits-all answer.
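Since the trade-off above comes down to column-subset access versus full-record access, here is a minimal sketch of that difference in plain Python (not Spark; the field names and data are made up for illustration, with in-memory layouts standing in for on-disk row vs. column storage):

```python
# Row-oriented layout: one record per entry, all fields together (Avro-like).
rows = [{"id": i, "feature": i * 0.5, "label": i % 2, "notes": "x" * 100}
        for i in range(1000)]

# Column-oriented layout: one list per field (ORC/Parquet-like).
columns = {
    "id":      [r["id"] for r in rows],
    "feature": [r["feature"] for r in rows],
    "label":   [r["label"] for r in rows],
    "notes":   [r["notes"] for r in rows],
}

# OLAP-style aggregate over a single column: the row layout must visit every
# full record (including the wide "notes" field), while the column layout
# touches only the "feature" list.
avg_from_rows = sum(r["feature"] for r in rows) / len(rows)
avg_from_columns = sum(columns["feature"]) / len(columns["feature"])
assert avg_from_rows == avg_from_columns  # same answer, different scan cost

# Full-record access (e.g. feeding whole training examples to a model):
# the row layout yields a record in one lookup; the column layout must
# reassemble it field by field.
record_from_rows = rows[42]
record_from_columns = {k: v[42] for k, v in columns.items()}
assert record_from_rows == record_from_columns
```

The same logic drives the recommendation to benchmark: if your ML training reads whole records on every iteration, the reassembly cost of columnar files can outweigh their scan savings, and only a test on your own tables will show which effect dominates.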


