If you say that increasing the heap doesn't help, are we talking about decent sizes like 8 GB+? Also, did you increase both the Java opts and the container size?
set hive.tez.container.size=8192;          -- example value in MB; tune to your cluster
set hive.tez.java.opts=-Xmx6554m;          -- heap should be roughly 80% of the container size
If yes, then you most likely have a different problem, for example loading data into a partitioned table. ORC writers keep one buffer open for every output file, so a badly organized load into a table with many partitions means each task holds many writer buffers, and therefore a lot of memory, open at once. There are ways around this, such as an optimized sorted load or the DISTRIBUTE BY keyword.
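A minimal sketch of the DISTRIBUTE BY approach, assuming a target table `sales` partitioned by `day` and a staging table `staging_sales` (both names hypothetical):

```sql
-- Route all rows for a given day to the same reducer, so each reducer
-- writes only a few partitions and holds only a few ORC writer buffers.
INSERT OVERWRITE TABLE sales PARTITION (day)
SELECT id, amount, day
FROM staging_sales
DISTRIBUTE BY day;
```

Alternatively, `SET hive.optimize.sort.dynamic.partition=true;` makes Hive sort rows by the dynamic partition key before writing, so each task keeps only one ORC writer open at a time.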
If, however, your tasks use significantly less than 4-8 GB, then you should increase the container size and heap first.