Not sure I posted this in the right place, so posting again here.
We are using Cloudera Express 5.4.8.
Lately we have been facing an issue with passing heap size arguments for the map and reduce tasks, and I suspect that it's related to the client override configuration.
We run our MapReduce job with "hadoop jar" and pass JVM arguments on the command line:
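For reference, the invocation was along these lines (the jar, class, and path names here are placeholders, not our actual job):

```shell
# Placeholder jar/class/paths -- the -D heap settings are the relevant part.
# Note: -D properties are only picked up if the driver goes through
# ToolRunner / GenericOptionsParser.
hadoop jar our-job.jar com.example.OurJob \
  -Dmapred.map.child.java.opts=-Xmx2g \
  -Dmapred.reduce.child.java.opts=-Xmx2g \
  /input /output
```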
However, it seems that these are not passed to the child JVMs; instead they use the default Java heap size.
I think the reason for this is the "Map Task Maximum Heap Size (Client Override)" and "Reduce Task Maximum Heap Size (Client Override)" settings. I tried setting both to 0, but then the Cloudera Manager web UI shows the error "0 GiB is less than the minimum allowed value 32 MiB" and the JT fails to start. How do I disable this configuration?
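My understanding is that CM writes these overrides into the client's mapred-site.xml as "final" properties, which would explain why the command-line settings are ignored; the property name and value below are illustrative, not copied from our cluster:

```xml
<!-- Illustrative fragment: a property marked final in the client
     configuration cannot be overridden at job submission time. -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx787m</value>
  <final>true</final>
</property>
```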
Any directions will be greatly appreciated.
Thanks for the reply.
I tried these, but when I set the value to 0 a validation error is thrown and the Save button is disabled. See below.
This is the message beside the "Save" button:
Please fix the validation error(s) to enable save
This is the message by the field (Map Task Maximum Heap Size (Client Override)):
Meanwhile, we ran a few tests on our lab cluster and found the "final" entries in mapred-site.xml, so here is what we did:
1. Logged into the embedded Postgres
2. Searched for entries with "Client Override":
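The search was a query along these lines (the table and column names here are from memory and may differ between CM versions, so treat this as a sketch):

```sql
-- Hypothetical sketch: look up the client-override heap entries
-- in Cloudera Manager's embedded database.
SELECT attr, value
FROM configs
WHERE attr ILIKE '%heap%';
```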