HIVE Best Practice

Rising Star

Hi Team,

Can you help me understand Hive best practices on Hortonworks HDP 2.3, so we can support it better?

HIVE Best Practices

  • 1. What is the maximum number of joins we can use in Hive for best performance? What are the limitations of using joins? What happens if we use multiple joins (will it affect performance, or will the job fail)?
  • 2. When querying, what kind of fields should be used as join keys?
  • 3. How do you make use of partitioning and bucketing?
  • 4. How critical is type casting? What about converting data types on the fly in queries?
  • 5. Will using multiple casts affect Hive job performance?
  • 6. How do we avoid using multiple inner joins? Is there an alternative to using multiple joins?
  • 7. What is the best way of doing splitting?
  • 8. When should we use a left outer join or a right outer join to avoid a full table scan?
  • 9. What is the best way to write a select query that avoids scanning the full table?
  • 10. Map join optimization: when should we use map joins?
  • 11. Skew join optimization: when should we use skew joins?
  • 12. SMB join optimization: when should we go for SMB joins?
  • 13. When processing huge amounts of data, what needs to be done to prevent job failures? What are the best practices in that scenario?
  • 14. What are the advantages of denormalization, and where should I use it in Hive?
1 ACCEPTED SOLUTION

Master Guru
  • 1. What is the maximum number of joins we can use in Hive for best performance? What are the limitations of using joins? What happens if we use multiple joins (will it affect performance, or will the job fail)?

There is no maximum number of joins. Hive now has a good cost-based optimizer (CBO) that works with statistics, so as long as you keep statistics on your tables up to date, complex queries are fine too. That said, storage is cheap, so denormalized tables make more sense here than in traditional databases. But as Sourygna said, this is a very general question.
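For illustration, keeping statistics current looks something like this (the table name is made up):

    -- enable the cost-based optimizer and let it use statistics
    SET hive.cbo.enable=true;
    SET hive.compute.query.using.stats=true;
    SET hive.stats.fetch.column.stats=true;

    -- gather table-level and column-level statistics
    ANALYZE TABLE sales COMPUTE STATISTICS;
    ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS;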

  • 2. When querying, what kind of fields should be used as join keys?

As in any database, integer keys are best. Strings work but may require more memory. If you use floats, you get what you deserve :-).

  • 3. How do you make use of partitioning and bucketing?

http://www.slideshare.net/BenjaminLeonhardi/hive-loading-data
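As a minimal sketch of both techniques (the schema is made up): partition by a low-cardinality column you filter on, and bucket by a key you join or sample on.

    CREATE TABLE sales (
      customer_id INT,
      amount      DECIMAL(10,2)
    )
    PARTITIONED BY (sale_date STRING)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC;

    -- on Hive 1.x, set this before inserting into bucketed tables
    SET hive.enforce.bucketing=true;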

  • 4. How critical is type casting? What about converting data types on the fly in queries?

Better if you don't. ORC files are optimized for each data type, so storing values as strings and casting them on demand will hurt performance. For delimited files it is much less important.
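For example (hypothetical column names), compare filtering a string column with an on-the-fly cast against filtering a properly typed column:

    -- slower: the cast defeats ORC's type-specific encoding and indexes
    SELECT * FROM sales WHERE CAST(amount_str AS DECIMAL(10,2)) > 100;

    -- better: declare the column as DECIMAL(10,2) up front, so the
    -- predicate can be pushed down into the ORC reader
    SELECT * FROM sales WHERE amount > 100;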

  • 5. Will using multiple casts affect Hive job performance?

See 4. Yes, at least as long as you use ORC.

  • 6. How do we avoid using multiple inner joins? Is there an alternative to using multiple joins?

Denormalization?

  • 7. What is the best way of doing splitting?

Not sure I understand the question. If you use ORC, you get 256 MB blocks by default, each containing 64 MB stripes. That is a good default, but if you want more map tasks you can reduce the block size.
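If you do want more map tasks, another knob is the split size (the values below are just examples; on Tez the grouping parameters play the equivalent role):

    -- MapReduce: cap each input split at 64 MB
    SET mapreduce.input.fileinputformat.split.maxsize=67108864;

    -- Tez: control how splits are grouped into tasks
    SET tez.grouping.min-size=16777216;
    SET tez.grouping.max-size=67108864;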

  • 8. When should we use a left outer join or a right outer join to avoid a full table scan?

Very generic question.

  • 9. What is the best way to write a select query that avoids scanning the full table?

Very generic question. Look at the presentation I linked for details on predicate pushdown. Sort your data properly during insert.
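A sketch of the idea (table and column names are made up): sorting on a commonly filtered column at insert time keeps each ORC stripe to a narrow key range, so predicate pushdown can skip most of the file.

    SET hive.optimize.ppd=true;  -- predicate pushdown, on by default

    INSERT OVERWRITE TABLE sales_sorted
    SELECT * FROM sales
    DISTRIBUTE BY customer_id SORT BY customer_id;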

  • 10. Map join optimization: when should we use map joins?

When the small table fits easily into the memory of a map task.
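The usual settings look like this (the size threshold is an example value, and the MAPJOIN hint with the hypothetical dim table is the legacy way to force the conversion):

    -- let Hive convert joins to map joins automatically
    SET hive.auto.convert.join=true;
    -- small-table size threshold, in bytes
    SET hive.auto.convert.join.noconditionaltask.size=100000000;

    -- or force it with a hint
    SELECT /*+ MAPJOIN(d) */ f.*, d.name
    FROM fact f JOIN dim d ON f.dim_id = d.id;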

  • 11. Skew join optimization: when should we use skew joins?

https://cwiki.apache.org/confluence/display/Hive/Skewed+Join+Optimization has details on when it is useful.
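In short, skew joins help when a few join keys carry a disproportionate share of the rows. A sketch of the two mechanisms (the threshold and the table are examples):

    -- runtime handling of hot keys
    SET hive.optimize.skewjoin=true;
    -- rows per key beyond which a key counts as skewed
    SET hive.skewjoin.key=100000;

    -- compile-time variant: declare known hot values up front
    SET hive.optimize.skewjoin.compiletime=true;
    CREATE TABLE clicks (user_id STRING, url STRING)
    SKEWED BY (user_id) ON ('anonymous')
    STORED AS ORC;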

  • 12. SMB join optimization: when should we go for SMB joins?

Seriously, you should read the Hive Confluence page. In general I would trust the CBO.
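For completeness: a sort-merge-bucket join only applies when both tables are bucketed and sorted on the join key with the same number of buckets; then it is enabled with settings like these:

    SET hive.auto.convert.sortmerge.join=true;
    SET hive.optimize.bucketmapjoin=true;
    SET hive.optimize.bucketmapjoin.sortedmerge=true;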

  • 13. When processing huge amounts of data, what needs to be done to prevent job failures? What are the best practices in that scenario?

The problems I have seen were mostly WAY too many partitions with small files in each partition; too many splits lead to trouble. So make sure you load data into Hive properly (see my presentation) and that the file sizes in your Hive tables are reasonable. Also keep an eye on mapper and reducer counts to make sure they are in a healthy range. If they aren't, there is no fixed rule on why.
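To merge small output files, something like this is common (the size values are examples, and the partition in the CONCATENATE statement is hypothetical):

    -- merge small files at the end of MapReduce and Tez jobs
    SET hive.merge.mapfiles=true;
    SET hive.merge.mapredfiles=true;
    SET hive.merge.tezfiles=true;
    SET hive.merge.smallfiles.avgsize=134217728;
    SET hive.merge.size.per.task=268435456;

    -- compact existing ORC partitions in place
    ALTER TABLE sales PARTITION (sale_date='2016-01-01') CONCATENATE;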

  • 14. What are the advantages of denormalization, and where should I use it in Hive?

Fewer joins, but more storage space.
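For example, you can materialize the join result once and query the flat table afterwards (names are hypothetical):

    CREATE TABLE sales_denorm STORED AS ORC AS
    SELECT s.*, c.name, c.region
    FROM sales s JOIN customer c ON s.customer_id = c.id;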

As Sourygna said, these are some very generic questions. You might have to drill down a bit into what you concretely want.


3 REPLIES

Super Collaborator

Those are a lot of (broad) questions!

I would recommend that you first look at the "Hive performance tuning" documentation on our website: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_performance_tuning/content/ch_hive_archit...

I guess you could also find some answers on this forum https://community.hortonworks.com/topics/Hive.html

However, given the number of questions you have, I would recommend contacting Hortonworks' professional services and having a consultant help you with your specific implementation (there is no universal "holy grail" tuning; in the end, configurations and queries are optimized for specific use cases).

Rising Star

Thanks for the information. At least, can you tell me the advantage of skew joins and where to use them? And instead of using multiple joins, what is the best way to run the query?
