Explorer
Posts: 21
Registered: ‎10-07-2015
Accepted Solution

How to distribute impala table partitions

Is there a way to distribute Impala table partitions across multiple HDFS data nodes without replication?

 

Regards,

Bhaskar

Posts: 354
Topics: 162
Kudos: 60
Solutions: 27
Registered: ‎06-26-2013

Re: How to distribute impala table partitions

Bhaskar,

 

The answer is "yes" (hat tip to John Russell), because HDFS is capable of placing data blocks on any data node, even with a replication factor of 1.

 

However, you need to be careful: if you're too fine-grained about distributing your partitions/Parquet files across the cluster, performance can suffer. Performance will be better and more predictable with fewer blocks for your query to find.
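A rough way to picture the trade-off: with a replication factor of 1, each HDFS block lives on exactly one data node, so a file can span at most as many nodes as it has blocks. A minimal back-of-the-envelope sketch (plain Python; the cluster size is a hypothetical number, not anything from this thread):

```python
import math

def block_count(file_size_bytes, block_size_bytes):
    """Number of HDFS blocks a file occupies (the last block may be partial)."""
    return math.ceil(file_size_bytes / block_size_bytes)

# A 1 GiB Parquet file with the default 128 MiB HDFS block size:
blocks = block_count(1 * 1024**3, 128 * 1024**2)  # 8 blocks

# With replication factor 1, each block sits on exactly one node,
# so the file can be spread over at most min(blocks, cluster_size) nodes.
cluster_size = 5  # hypothetical cluster
max_nodes = min(blocks, cluster_size)  # 5
```

The flip side is the warning above: every one of those blocks is something the query has to locate and open, so slicing the data ever finer eventually costs more than the extra spread gains.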

Explorer
Posts: 21
Registered: ‎10-07-2015

Re: How to distribute impala table partitions

Thanks a lot for the reply.

 

Is there some argument/parameter I can specify with create table in impala to ensure HDFS distributes data blocks across multiple data nodes? If not, how do I do this?

 

Regards,

Bhaskar

P.S. I'm just getting started with HDFS/Impala/Hadoop/Kudu...

Posts: 354
Topics: 162
Kudos: 60
Solutions: 27
Registered: ‎06-26-2013

Re: How to distribute impala table partitions

It may help if you describe your use case here, or your goal with this operation. There may be several ways to reach that goal.

Explorer
Posts: 21
Registered: ‎10-07-2015

Re: How to distribute impala table partitions

Sorry for the late response.

 

I am not looking at any particular use case. I'm just trying to see how an Impala query is executed when the data is distributed across multiple HDFS data nodes. It's an experimental setup, so performance is currently irrelevant; it's more about getting an in-depth understanding.

 

In the query execution plan I want to observe SCAN_HDFS and AGGREGATION.

 

Regards,

Bhaskar

Cloudera Employee
Posts: 307
Registered: ‎10-16-2013

Re: How to distribute impala table partitions

Impala does not have control of the physical locations of the HDFS blocks underlying Impala tables.

The tables in Impala are backed by files on HDFS. Those files are chopped into blocks and distributed according to your HDFS configuration; for all practical purposes, the blocks are distributed round-robin among the data nodes (grossly simplified). Impala queries typically run on all data nodes that store data relevant to answering a particular query, so for a fixed amount of data you can indirectly control Impala's degree of (inter-node) parallelism by changing the HDFS block size. More blocks == more parallelism.
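The block-size lever can be sketched numerically. This is plain Python using the grossly simplified round-robin model from this reply, not HDFS's real placement policy, and the node names are hypothetical:

```python
import math

def round_robin_placement(n_blocks, data_nodes):
    """Grossly simplified stand-in for HDFS block placement at
    replication factor 1: deal blocks out to the nodes in turn."""
    return {i: data_nodes[i % len(data_nodes)] for i in range(n_blocks)}

def scan_parallelism(file_size, block_size, data_nodes):
    """Impala can scan on every node holding at least one relevant block,
    so more (smaller) blocks means more nodes can participate."""
    n_blocks = math.ceil(file_size / block_size)
    placement = round_robin_placement(n_blocks, data_nodes)
    return len(set(placement.values()))

nodes = ["dn1", "dn2", "dn3", "dn4"]  # hypothetical 4-node cluster
one_gib = 1024**3

# 256 MiB blocks -> 4 blocks -> all 4 nodes can scan in parallel
wide = scan_parallelism(one_gib, 256 * 1024**2, nodes)   # 4
# 512 MiB blocks -> 2 blocks -> only 2 nodes participate
narrow = scan_parallelism(one_gib, 512 * 1024**2, nodes)  # 2
```

Shrinking the block size raises the parallelism ceiling, but as noted earlier in the thread, more blocks also means more per-block overhead per query.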

If you are interested in learning about Impala, you may also find the CIDR paper useful:
http://www.cidrdb.org/cidr2015/Papers/CIDR15_Paper28.pdf
