I have a Hadoop cluster in Google Cloud, but in recent days I have been suffering from slow queries through both Beeline and Hive (Beeline internally and Hive externally). Below are some screenshots showing the errors. I would like to know how to change the memory allocated to the JVM. Today it is -Xmx512m -Xms512m, and I want to switch to -Xmx1024m -Xms1024m to see whether this improves performance.
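For reference, on a typical install those JVM flags live in the environment scripts rather than in a single place (the file locations and variable names below are assumptions; on an Ambari-managed cluster, edit the equivalent fields in the service configs instead of the files directly):

```sh
# hive-env.sh -- heap for the Hive/Beeline client JVM (location varies; often /etc/hive/conf)
export HADOOP_CLIENT_OPTS="-Xmx1024m -Xms1024m $HADOOP_CLIENT_OPTS"

# hadoop-env.sh -- default heap for Hadoop daemons, in MB
export HADOOP_HEAPSIZE=1024
```

After changing these, the affected services need a restart for the new heap to take effect.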
@Hugo Cosme Editing the JVM memory for a YARN container or application is usually done by raising your YARN/MapReduce memory settings above 512 MB via Ambari. The various HiveServer2 memory settings can be adjusted the same way.
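As a sketch, the corresponding mapred-site.xml properties look like the following (these are standard Hadoop property names, but the values are illustrative, not a recommendation for your cluster):

```xml
<!-- mapred-site.xml: raise container memory above the 512 MB default -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <!-- JVM heap is conventionally set to roughly 80% of the container size -->
  <value>-Xmx819m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx819m</value>
</property>
```

Note that the java.opts heap must stay below the container memory, or YARN will kill the container for exceeding its allocation.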
If you are after Hive performance (the original post's subject), I would recommend Hive LLAP if you are not already using it. You can search HCC for many posts about LLAP configuration and setup.
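Once LLAP is up, routing queries to it from a Beeline session can be sketched like this (these are standard Hive configuration properties, but verify the values against your stack version):

```sql
-- Beeline session sketch: send query execution to the LLAP daemons
SET hive.execution.mode=llap;
-- "all" attempts to run everything in LLAP; "auto" lets Hive decide per query
SET hive.llap.execution.mode=all;
```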
If it is helpful, here are some of my settings for Hive, YARN, and MapReduce:
Metastore Heap Size: 8 GB
Client Heap Size: 4 GB
Map Join per Map Memory Threshold: 4 GB
Data per Reducer: 4 GB
Tez Container Size: 12 GB
YARN Memory: 52 GB
Max Container Size: 45056 MB
Map Memory: 12 GB
Reduce Memory: 22 GB
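For anyone applying these outside Ambari, the settings above map roughly to the following properties (treat this mapping as my best guess, to be verified against your stack version; values are MB-to-bytes conversions of the list above):

```xml
<!-- yarn-site.xml -->
<property><name>yarn.nodemanager.resource.memory-mb</name><value>53248</value></property>  <!-- YARN Memory: 52 GB -->
<property><name>yarn.scheduler.maximum-allocation-mb</name><value>45056</value></property> <!-- Max Container Size -->

<!-- mapred-site.xml -->
<property><name>mapreduce.map.memory.mb</name><value>12288</value></property>    <!-- Map Memory: 12 GB -->
<property><name>mapreduce.reduce.memory.mb</name><value>22528</value></property> <!-- Reduce Memory: 22 GB -->

<!-- hive-site.xml -->
<property><name>hive.tez.container.size</name><value>12288</value></property>                            <!-- Tez Container Size -->
<property><name>hive.auto.convert.join.noconditionaltask.size</name><value>4294967296</value></property> <!-- map-join threshold: 4 GB -->
<property><name>hive.exec.reducers.bytes.per.reducer</name><value>4294967296</value></property>          <!-- Data per Reducer: 4 GB -->
```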
If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your Use Case please create separate topic and feel free to tag me in your post.