We are using Cloudera Express 5.4.8.
Lately we have been facing an issue with passing heap size arguments to the map and reduce tasks, and I suspect it's related to the client override configuration.
We are running our MapReduce job with "hadoop jar" and passing JVM arguments on the command line:
However, it seems these arguments are not passed to the child JVMs, which instead use the default Java heap size.
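For reference, this is roughly what such an invocation looks like; the jar name, driver class, and paths here are placeholders, not the actual job from the post. Generic -D options must appear before job-specific arguments:

```shell
# Hypothetical MRv1 invocation: jar, class, and paths are illustrative.
# -D properties set per-task JVM options via the generic options parser.
hadoop jar my-job.jar com.example.MyDriver \
  -Dmapred.map.child.java.opts=-Xmx1024m \
  -Dmapred.reduce.child.java.opts=-Xmx2048m \
  /user/me/input /user/me/output
```

Note that these -D settings are silently ignored if the cluster marks the same properties as final, which is the issue described below.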
I think the reason for this is the "Map Task Maximum Heap Size (Client Override)" and "Reduce Task Maximum Heap Size (Client Override)" settings. I tried setting those to 0, but then Cloudera Manager's web UI shows the error "0 GiB is less than the minimum allowed value 32 MiB" and the JT fails to start. How do I disable this configuration?
Any directions will be greatly appreciated.
I think I made some progress here.
Viewing the mapred-site.xml of one of the TaskTrackers (from Cloudera Manager > Cluster > HDFS > Instances > tt01 > Processes), I see that the java.opts entries have <final>true</final> set (see snippet below).
I can't seem to get rid of those through Cloudera manager.
Any help will be very much appreciated.
<property>
  <name>mapred.child.java.opts</name>
  <value></value>
  <final>true</final>
</property>
<property>
  <name>mapred.map.child.java.opts</name>
  <value> -Xmx2147483648</value>
  <final>true</final>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value> -Xmx3221225472</value>
  <final>true</final>
</property>
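As a sanity check, a small script (my own sketch, not a Cloudera tool) can list which properties in a generated mapred-site.xml are marked final, so you can confirm which client-side settings will be ignored:

```python
import xml.etree.ElementTree as ET

def final_properties(xml_text):
    """Return {name: value} for every property marked <final>true</final>."""
    root = ET.fromstring(xml_text)
    result = {}
    for prop in root.iter("property"):
        name = prop.findtext("name")
        final = (prop.findtext("final") or "").strip().lower()
        if name and final == "true":
            result[name] = (prop.findtext("value") or "").strip()
    return result

# Minimal example mirroring the snippet above.
snippet = """<configuration>
  <property>
    <name>mapred.map.child.java.opts</name>
    <value> -Xmx2147483648</value>
    <final>true</final>
  </property>
</configuration>"""

print(final_properties(snippet))  # -> {'mapred.map.child.java.opts': '-Xmx2147483648'}
```

In practice you would read the file from the TaskTracker's process directory instead of an inline string.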
You might try these options:
Just change "map" to "reduce" to get the corresponding reduce-side options.
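Assuming the options referred to are the MRv1 per-task JVM settings seen in the snippet above, the map/reduce pairing would look like this (heap values are illustrative only):

```xml
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```

For these to take effect from the client side, the server-generated copies must not carry <final>true</final>.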