Hi,
I am working with Spark 1.6.1 and Hive 1.2.0.
I have a Hive managed table stored in ORC format.
I want to query this table and make sure predicates are pushed down, and I am trying two approaches:
Option #1: Spark SQL, through the Spark Thrift server.
When I look at the explain plan, I always see a HiveTableScan, which I believe means the entire table is being read. How can I make sure predicates are pushed down?
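For reference, this is roughly the check I have in mind through Beeline against the Spark Thrift server. The table and column names below are placeholders, and spark.sql.orc.filterPushdown is the setting I understand is supposed to control ORC pushdown in Spark SQL:

-- enable ORC predicate pushdown (as far as I know it is off by default in 1.6)
SET spark.sql.orc.filterPushdown=true;

-- inspect the plan for a simple filtered query
EXPLAIN EXTENDED
SELECT col_a, col_b
FROM my_orc_table
WHERE col_a = 'some_value';

What I cannot tell from the resulting plan is whether the WHERE clause is evaluated inside the ORC reader or only after the full scan.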
Option #2: Hive, through the Hive Thrift server (HiveServer2).
Here a MapReduce job is triggered every time. How can I validate that predicate pushdown is happening?
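Similarly, on the Hive side, this is the kind of check I have been running through Beeline against HiveServer2. Again the table and column names are placeholders; hive.optimize.ppd and hive.optimize.index.filter are the settings I understand affect pushdown into ORC:

-- predicate pushdown is on by default, but setting it explicitly to be sure
SET hive.optimize.ppd=true;
-- as I understand it, this is what lets the ORC reader skip stripes/row groups
SET hive.optimize.index.filter=true;

EXPLAIN
SELECT col_a, col_b
FROM my_orc_table
WHERE col_a = 'some_value';

Is it enough to look for a filterExpr: entry under the TableScan operator in the EXPLAIN output, or is there a better way to confirm that the ORC reader is actually skipping data?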
In both cases I want to make sure the data is read in an optimized way.
Thanks for your help.
Regards