Created 05-13-2016 09:06 AM
Hi Team
Can you help me understand Hive best practices on Hortonworks HDP 2.3, to support it better?
HIVE Best Practices
Created 05-13-2016 10:54 AM
There is no maximum number of joins. Hive now has a good cost-based optimizer (CBO) that uses statistics, so as long as you properly compute statistics on your tables you can run complex queries as well. However, denormalized tables are cheaper (storage is cheap), so they make more sense here than in traditional databases. But as Sourygna said, this is a very general question.
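As a sketch, the statistics the CBO needs can be gathered like this (the table name is a placeholder; the settings are standard Hive configuration properties):

```sql
-- Gather table-level statistics (hypothetical table "sales")
ANALYZE TABLE sales COMPUTE STATISTICS;
-- Gather column statistics so the CBO can estimate join cardinalities
ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS;

-- Make sure the optimizer actually uses them
SET hive.cbo.enable=true;
SET hive.compute.query.using.stats=true;
SET hive.stats.fetch.column.stats=true;
```

For partitioned tables you would run `ANALYZE TABLE ... PARTITION (...)` per partition, or enable `hive.stats.autogather` so inserts collect statistics as they run.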
As in any database, integer keys are best. Strings work but may require more memory. If you use floats as keys, you get what you deserve :-).
http://www.slideshare.net/BenjaminLeonhardi/hive-loading-data
Better if you don't. ORC files are optimized per datatype, so storing everything as strings and casting on demand will hurt performance. For delimited files it matters much less.
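In other words, declare native types in the table definition rather than casting strings at query time. A minimal illustration (the schema is made up):

```sql
-- Native types let ORC use type-specific encodings and min/max indexes
CREATE TABLE events (
  event_id   BIGINT,
  event_time TIMESTAMP,
  amount     DECIMAL(10,2)
)
STORED AS ORC;
```

With `STRING` columns, ORC would fall back to generic string encoding and every query would pay for the cast.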
See 4. Yes, as long as you use ORC.
Denormalization?
Not sure I understand the question. If you use ORC you get 256 MB blocks with 64 MB stripes per default, which is a good default. But if you want more map tasks you can reduce the block size.
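If you do want to change the defaults, there are session-level properties for it (names per Hive 0.14+; the sizes below are illustrative, not a recommendation):

```sql
-- Smaller stripes/blocks => more splits => more map tasks
SET hive.exec.orc.default.stripe.size=33554432;   -- 32 MB stripes
SET hive.exec.orc.default.block.size=134217728;   -- 128 MB HDFS blocks
```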
Very generic question.
Very generic question. Look at the presentation I linked for details on predicate pushdown. Sort your data properly during insert.
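The "sort during insert" point can be sketched like this (table and column names are placeholders): sorting on the column you usually filter on lets ORC skip whole stripes via its min/max indexes.

```sql
-- Predicate pushdown is on by default, but it can be set explicitly
SET hive.optimize.ppd=true;

-- Physically sort the data on the common filter column at load time
INSERT OVERWRITE TABLE sales_sorted
SELECT * FROM sales
SORT BY customer_id;
```

Afterwards, a query like `WHERE customer_id = 42` only reads the stripes whose min/max range contains 42.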
When the small table fits easily into the memory of a map task.
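A rough sketch of how that is configured (threshold value illustrative; table names made up):

```sql
-- Let Hive convert a join to a map join automatically when one side
-- is below the size threshold (in bytes)
SET hive.auto.convert.join=true;
SET hive.mapjoin.smalltable.filesize=25000000;  -- ~25 MB

-- The small dimension table is then broadcast to every map task
SELECT f.*, d.name
FROM fact f JOIN dim d ON f.dim_id = d.id;
```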
https://cwiki.apache.org/confluence/display/Hive/Skewed+Join+Optimization has details on when it's good.
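For reference, the two ways to handle skew look roughly like this (the row threshold and the skewed value `0` are illustrative):

```sql
-- 1) Runtime skew join handling: keys exceeding the threshold are
--    processed in a follow-up map join
SET hive.optimize.skewjoin=true;
SET hive.skewjoin.key=100000;  -- rows per key before it counts as skewed

-- 2) Declare known skewed values in the DDL so the optimizer plans for them
CREATE TABLE clicks (user_id BIGINT, url STRING)
SKEWED BY (user_id) ON (0)
STORED AS ORC;
```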
Seriously, you should read the Hive Confluence page. In general I would trust the CBO.
Problems I have seen were mostly WAY too many partitions and small files in each partition; too many splits result in problems. So you should make sure to properly load data into Hive (see my presentation). Make sure the file sizes in your Hive tables are proper. Also keep an eye on reducer and mapper numbers to make sure they are in a healthy range. If they aren't, there is no fixed rule on why.
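To keep file sizes healthy, Hive's merge settings and ORC compaction can help; a sketch (the target size, table, and partition are placeholders):

```sql
-- Merge small output files produced by inserts
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET hive.merge.smallfiles.avgsize=134217728;  -- aim for ~128 MB files

-- For existing ORC tables, small files can be compacted in place
ALTER TABLE sales PARTITION (dt='2016-05-13') CONCATENATE;
```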
Fewer joins, but more storage space.
As Sourygna said, these are some veeery generic questions. You might have to drill down a bit into what you actually concretely want.
Created 05-13-2016 09:42 AM
Those are a lot of (broad) questions!
I would recommend you in the first place to look at the "Hive performance tuning" documentation on our website: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_performance_tuning/content/ch_hive_archit...
I guess you could also find some answers on this forum https://community.hortonworks.com/topics/Hive.html
However, due to the number of questions you have, I would recommend you contact Hortonworks professional services to have a consultant help you with your specific implementation (there is no "universal holy grail" tuning; in the end, configurations and queries are optimised for specific use cases).
Created 05-13-2016 09:46 AM
Thanks for your information. At least can you tell me the advantage of skew joins and where to use them? And instead of using multiple joins, what is the best way to run the query?