12-03-2015 02:26 PM
The answer is "yes" (hat tip to John Russell) because HDFS is capable of locating data blocks on any data node, even with a replication factor of 1.
However, you need to be careful: if you are too fine-grained about distributing your partitions/Parquet files across the cluster, performance can suffer. Performance will be better and more predictable with fewer blocks for your query to find.
12-03-2015 07:49 PM
Thanks a lot for the reply.
Is there some argument/parameter I can specify with create table in impala to ensure HDFS distributes data blocks across multiple data nodes? If not, how do I do this?
p.s. just getting started with hdfs/impala/hadoop/kudu..
12-10-2015 05:31 AM
Sorry for the late response.
I am not looking at any particular use case. I'm just trying to see how an Impala query is executed when the data is distributed across multiple HDFS data nodes. It's an experimental setup, so performance is currently irrelevant; it's more about getting an in-depth understanding.
In the query execution plan I want to observe SCAN_HDFS and AGGREGATION.
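To inspect those plan nodes without running the query, one option is EXPLAIN via impala-shell; a minimal sketch, assuming a running cluster and hypothetical table/column names (`my_table`, `some_col`):

```shell
# Print the query plan without executing the query. The output lists plan
# nodes such as SCAN HDFS and AGGREGATE, along with estimated scan sizes.
# "my_table" and "some_col" are placeholders for your own schema.
impala-shell -q "EXPLAIN SELECT some_col, count(*) FROM my_table GROUP BY some_col"
```

After actually running the query in an interactive impala-shell session, the PROFILE command prints per-host runtime details, which shows which data nodes executed the scan fragments.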
12-29-2015 03:34 PM
Impala does not have control of the physical locations of the HDFS blocks underlying Impala tables.
The tables in Impala are backed by files on HDFS, and those files are chopped into blocks and distributed according to your HDFS configuration; for all practical purposes the blocks are distributed round-robin among the data nodes (grossly simplified). Impala queries typically run on all data nodes that store data relevant to answering a particular query, so given a fixed amount of data, you can indirectly control Impala's degree of (inter-node) parallelism by changing the HDFS block size. More blocks == more parallelism.
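The block-size experiment above can be sketched at the HDFS command line; the paths and file name here are hypothetical, and this assumes you have an HDFS cluster to run against:

```shell
# Write a file with a smaller-than-default block size (64 MB here) so it is
# split into more blocks, and hence more scan ranges for Impala to parallelize.
hdfs dfs -D dfs.blocksize=67108864 -put local_data.parquet /user/hive/warehouse/my_table/

# List each block of the table's files and the data node(s) holding its
# replicas, to verify how the blocks ended up distributed across the cluster.
hdfs fsck /user/hive/warehouse/my_table/ -files -blocks -locations
```

With replication factor 1, each block listed by fsck appears on exactly one data node, which is the situation discussed earlier in this thread.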
If you are interested in learning about Impala, you may also find the Impala CIDR paper useful.