Member since: 04-02-2018 · Posts: 3 · Kudos Received: 0 · Solutions: 0
11-11-2018 10:05 PM
You may need to increase yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb; the defaults can be too small to launch even a default Spark executor container (1024 MB + 512 MB overhead). You may also want to enable INFO logging for spark-shell in /etc/spark/conf/log4j.properties to see the exact error/warning it hits; see the sketch below.
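A minimal sketch of both changes, assuming you can edit yarn-site.xml and restart YARN; the 8192 values are placeholders to size against your NodeManager hosts, not recommendations:

<!-- yarn-site.xml: total memory a NodeManager may hand out,
     and the largest single container the scheduler will grant -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

And in /etc/spark/conf/log4j.properties, raising the root logger to INFO surfaces the container-allocation messages in the shell:

# log INFO and above to the console
log4j.rootCategory=INFO, console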
04-05-2018 02:15 AM
Hi Jaimie, I did a little Googling and found the settings below, which may resolve the issue:

set hive.tez.container.size=2048;
set hive.tez.java.opts=-Xmx1700m; -- set this to roughly 80% of hive.tez.container.size

Please give it a try if you feel confident about it. Thanks, snm1502
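If the fix works and you want it to apply beyond a single session, the same two properties can also be set cluster-wide in hive-site.xml; a sketch reusing the 2048 MB sizing above (adjust to your cluster):

<!-- hive-site.xml: Tez container size in MB, with the JVM heap at ~80% of it -->
<property>
  <name>hive.tez.container.size</name>
  <value>2048</value>
</property>
<property>
  <name>hive.tez.java.opts</name>
  <value>-Xmx1700m</value>
</property>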