I'm using a Hadoop/Spark cluster.
I get errors when I try to run Spark applications on YARN; the error says "SparkContext was shut down".
When I increase spark.driver.memory to 3 or 4 GB there is no error, but for my Hive queries in Hue I don't know the equivalent parameter to increase, so no query returns results; they all seem to freeze.
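For reference, this is how I increased the driver memory when submitting (my_app.py is just a placeholder for my application):

    spark-submit \
      --master yarn \
      --conf spark.driver.memory=4g \
      my_app.py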
I think there is a global parameter on the YARN side that would fix all of this, but I don't know which one it is.
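I was looking at the container memory limits in yarn-site.xml, but I'm only guessing that these are the right settings (the values below are just examples, not recommendations):

    <!-- yarn-site.xml -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>8192</value>
    </property>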
Also, I can't launch Hive from the command line: I get an error that says something like there is a problem with the application master.
Could anyone help me, please?
Could you please share the entire error stack trace? It would give an idea of the full error and would be useful for analysis.
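If the console output is truncated, you can pull the complete logs for the failed application from YARN (replace the id below with the id of your failed run):

    yarn logs -applicationId application_1234567890123_0001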