
Maximum hive.exec.max.dynamic.partitions allowed & recommended

Contributor

Hi Team, any suggestions? What would be the impact of adding the setting hive.exec.max.dynamic.partitions to the whitelist, which would allow any user to create any number of partitions for a table at run time?

1 ACCEPTED SOLUTION

@Dhiraj

There is no hard maximum as far as I know; the practical limit depends on the back-end metastore database you are using.

I have tested up to 500,000 in production with Oracle as the back-end.

hive.exec.max.dynamic.partitions=500000
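
For example, these are the session-level settings that usually go together for a large dynamic-partition load (the table and column names below are just illustrative):

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=500000;
SET hive.exec.max.dynamic.partitions.pernode=100000;

-- dynamic-partition insert; the partition column (dt) must come last in the SELECT
INSERT OVERWRITE TABLE sales PARTITION (dt)
SELECT id, amount, dt
FROM staging_sales;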

Adding it to the whitelist has no direct impact by itself, but it is always advisable to keep a reasonable limit so that it does not hurt the cluster in the long term.
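
On HDP the whitelist is usually extended through hive.security.authorization.sqlstd.confwhitelist.append in hiveserver2-site; something like the following (the exact regex pattern is illustrative):

hive.security.authorization.sqlstd.confwhitelist.append=hive\.exec\.max\.dynamic\.partitions.*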

Example: if one user keeps increasing the number of partitions, and each partition is a very small file, this increases the NameNode metadata, which can proportionally affect the cluster.
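
To keep that in check, you can watch partition counts and merge small files from time to time; a sketch, assuming an ORC-backed table named sales (CONCATENATE merges the small files inside a partition):

-- inspect how many partitions the table has accumulated
SHOW PARTITIONS sales;

-- merge the small ORC files within one partition
ALTER TABLE sales PARTITION (dt='2016-01-01') CONCATENATE;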


2 REPLIES


Contributor

@Sridhar Reddy

We are using Oracle as the back-end, and we plan to move this into production in the near future. As you said, small files will increase metadata, which proportionally affects the cluster. So, are there any precautions or suggestions to follow during or after implementation?