We are doing a PoC with Kudu and Impala. For testing purposes we are also using Spark to read Parquet files from the local disk, which is straightforward:
val df_parquet1 = spark.read.format("parquet").load("file:///work/testParquetGZ")
df_parquet1.createOrReplaceTempView("test_parquet1")
and then we are able to query it directly within Spark:
%sql select * from test_parquet1 limit 100
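For completeness, the same query can also be run programmatically instead of through the `%sql` notebook magic. This is just a sketch assuming an active `SparkSession` named `spark` and the file path from above:

```scala
// Read the Parquet files from local disk and register a temp view,
// exactly as in the snippet above.
val df_parquet1 = spark.read.format("parquet").load("file:///work/testParquetGZ")
df_parquet1.createOrReplaceTempView("test_parquet1")

// Run the same query via the SparkSession API and print the first rows.
spark.sql("select * from test_parquet1 limit 100").show()
```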
I'm looking for a similar approach for Impala. Is it really necessary to load the Parquet files into HDFS storage? In our case that makes little sense: we mainly use Kudu, so the HDFS part is only there to get Impala running at all. Our idea is to store the Parquet files on a big file share, but without HDFS, since HDFS would only add overhead.
So my question is: how can I access Parquet files with Impala from (local) disk without HDFS?