Which storage format is optimum for training machine learning models and running iterative processes?
- Labels: Apache Hive, Apache Spark
Created 08-13-2018 02:15 PM
Assuming a data pipeline will be loading Hive tables as Spark DataFrames, which storage format is optimal for training machine learning models and running iterative processes: row-based files (text, Avro) or column-based files (ORC, Parquet)?
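For context, a minimal sketch of the setup being assumed here, loading a Hive table as a Spark DataFrame; the table name `sales.transactions` is hypothetical:

```python
from pyspark.sql import SparkSession

# Hive-enabled Spark session (assumes a configured Hive metastore)
spark = (SparkSession.builder
         .appName("hive-to-dataframe")
         .enableHiveSupport()
         .getOrCreate())

# Load an existing Hive table as a Spark DataFrame
# ("sales.transactions" is a hypothetical table name)
df = spark.table("sales.transactions")
df.printSchema()
```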
Created 08-13-2018 02:20 PM
ORC and Parquet are optimized for OLAP queries, since typically only a subset of the columns from the source tables is read. Avro and other row-based formats perform better if you have to look at the entire record. Copying the data from one format to another (the multi-Hive-table approach) is a common way to determine which format performs best for your use case. My recommendation is to performance-test each candidate format; there is no one-size-fits-all.
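A minimal PySpark sketch of that multi-table performance test, assuming a Hive-enabled session, a hypothetical source table `sales.transactions`, and a hypothetical `customer_id` column; it writes the same data once per format, then times a single-column scan (where columnar formats should shine) against a full-record scan (where row-based formats can win):

```python
import time

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("storage-format-benchmark")
         .enableHiveSupport()
         .getOrCreate())

src = spark.table("sales.transactions")   # hypothetical source table
spark.sql("CREATE DATABASE IF NOT EXISTS bench")

# Materialize the same data once per storage format.
# ("avro" is a built-in source in Spark 2.4+; older versions need the
# spark-avro package on the classpath.)
formats = ("csv", "avro", "orc", "parquet")
for fmt in formats:
    (src.write
        .mode("overwrite")
        .format(fmt)
        .saveAsTable("bench.transactions_{}".format(fmt)))

def time_scan(df):
    """Force a full evaluation of df and return the elapsed seconds."""
    start = time.time()
    df.foreach(lambda row: None)   # touches every row
    return time.time() - start

for fmt in formats:
    df = spark.table("bench.transactions_{}".format(fmt))
    narrow = time_scan(df.select("customer_id"))   # columnar-friendly read
    full = time_scan(df)                           # full-record scan
    print("{}: one column {:.1f}s, full record {:.1f}s"
          .format(fmt, narrow, full))
```

One caveat worth noting for iterative ML workloads: the DataFrame is usually cached after the first pass, so the storage format mostly affects the initial load. It is worth timing the candidates both with and without `.cache()`.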
