
How to distribute impala table partitions

SOLVED

Explorer

Is there a way to distribute impala table partitions onto multiple hdfs data nodes without replication?

 

Regards,

Bhaskar

1 ACCEPTED SOLUTION


Re: How to distribute impala table partitions

Master Collaborator

Impala does not have control of the physical locations of the HDFS blocks underlying Impala tables.

The tables in Impala are backed by files on HDFS; those files are split into blocks and distributed according to your HDFS configuration, but for all practical purposes the blocks are distributed round-robin among the data nodes (grossly simplified). Impala queries typically run on all data nodes that store data relevant to answering a particular query, so for a fixed amount of data you can indirectly control Impala's degree of (inter-node) parallelism by changing the HDFS block size. More blocks == more parallelism.

If you are interested in learning about Impala, you may also find the CIDR paper useful:
http://www.cidrdb.org/cidr2015/Papers/CIDR15_Paper28.pdf
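As an illustration of the point above, here is a hedged sketch of how you might observe and influence block distribution from the command line. The HDFS path and file name are hypothetical; substitute the location of your own table. These are standard `hdfs` commands, but exact output depends on your cluster.

```shell
# Inspect which data nodes hold the blocks backing a table's files.
# (Path is hypothetical; point this at your table's HDFS directory.)
hdfs fsck /user/hive/warehouse/my_table -files -blocks -locations

# Write a file with a smaller block size (64 MB here) to produce
# more blocks, and therefore more potential scan parallelism.
hdfs dfs -D dfs.blocksize=67108864 -put local_data.parquet /user/hive/warehouse/my_table/
```

Note that `dfs.blocksize` applies per write; files already in HDFS keep the block size they were written with.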

5 REPLIES

Re: How to distribute impala table partitions

Master Collaborator

Bhaskar,

 

The answer is "yes" (hat tip to John Russell) because HDFS is capable of locating data blocks on any data node, even with a replication factor of 1.

 

However, you need to be careful: if you're too fine-grained about distributing your partitions/Parquet files across the cluster, performance can suffer. Performance will be better and more predictable with fewer blocks for your query to find.
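To make the replication-factor-of-1 point concrete, here is a minimal sketch using standard `hdfs` commands. The path is hypothetical; with a single replica, each block still lives on whichever data node HDFS chose for it, so the data remains spread across nodes without duplication.

```shell
# Set replication to 1 for an existing directory and wait for
# HDFS to drop the extra replicas (-w blocks until done).
# (Path is hypothetical; adjust to your table's location.)
hdfs dfs -setrep -w 1 /user/hive/warehouse/my_table

# Verify the new replication factor and where each block landed.
hdfs fsck /user/hive/warehouse/my_table -blocks -locations
```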

Re: How to distribute impala table partitions

Explorer

Thanks a lot for the reply.

 

Is there some argument/parameter I can specify with create table in impala to ensure HDFS distributes data blocks across multiple data nodes? If not, how do I do this?

 

Regards,

Bhaskar

p.s. just getting started with hdfs/impala/hadoop/kudu..

Re: How to distribute impala table partitions

Master Collaborator

It may help if you describe your use case or the goal behind this operation. There may be several ways to reach that goal.

Re: How to distribute impala table partitions

Explorer

Sorry for the late response.

 

I am not looking at any particular use case. I am just trying to see how an Impala query is executed when the data is distributed across multiple HDFS data nodes. It's an experimental setup, so performance is currently irrelevant; it's more about getting an in-depth understanding.

 

In the query execution plan I want to observe SCAN_HDFS and AGGREGATION.

 

Regards,

Bhaskar
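For inspecting the plan operators mentioned above, one option is Impala's EXPLAIN statement, which prints the plan tree (including the HDFS scan and aggregation operators) without running the query. A hedged sketch, with a hypothetical table name:

```shell
# Print the query plan without executing it; the plan tree shows
# the SCAN HDFS and AGGREGATE operators. (Table name is hypothetical.)
impala-shell -q "EXPLAIN SELECT count(*) FROM my_table"

# Raising EXPLAIN_LEVEL adds per-operator detail, such as the
# estimated number of hosts, i.e. the degree of parallelism.
impala-shell -q "SET EXPLAIN_LEVEL=2; EXPLAIN SELECT count(*) FROM my_table"
```

After actually running a query in an interactive impala-shell session, the PROFILE command shows per-node runtime details of how the scan was distributed.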
