We did not tune the following parameters, yet according to the log they are not at their recommended default values:
dfs.datanode.du.reserved
Log: "Value is less than the recommended default of 6813825536 (Reserved space in bytes per volume. Always leave this much space free for non-DFS use)."
dfs.datanode.max.transfer.threads = 1024
Log: "HAWQ requires this property to be set to the recommended value of 40960 (Specifies the maximum number of threads to use for transferring data in and out of the datanode)."
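For reference, setting the HAWQ-recommended value by hand would look like the fragment below in hdfs-site.xml (on an Ambari-managed cluster the change should be made through Ambari instead, and DataNodes restarted afterwards). The property name and value are from the log above; the file location and management workflow are assumptions about a typical deployment:

```xml
<!-- hdfs-site.xml: raise the DataNode transfer-thread cap to the HAWQ-recommended value -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>40960</value>
</property>
```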
mapreduce.map.java.opts = "-Xmx410m"
Log: "Value is less than the recommended default of -Xmx3276m (Larger heap size for child JVMs of maps)."
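The gap between the logged -Xmx410m and the recommended -Xmx3276m is consistent with the common rule of thumb that management tools such as Ambari apply: set the map JVM heap to roughly 80% of the mapreduce.map.memory.mb container size. A minimal sketch of that calculation; the 0.8 fraction and the example container sizes are assumptions, not values read from the log:

```python
def map_java_opts(container_mb: int, heap_fraction: float = 0.8) -> str:
    """Suggest a -Xmx value as a fraction of the YARN container size.

    The 0.8 fraction leaves ~20% headroom for non-heap JVM memory
    (stacks, metaspace, direct buffers); it is a rule of thumb,
    not a fixed requirement.
    """
    return f"-Xmx{int(container_mb * heap_fraction)}m"

# A 512 MB map container yields -Xmx409m (close to the logged -Xmx410m),
# while a 4096 MB container yields the recommended -Xmx3276m.
print(map_java_opts(512))   # -Xmx409m
print(map_java_opts(4096))  # -Xmx3276m
```

This suggests the small logged heap simply reflects a small mapreduce.map.memory.mb, so the related memory parameters listed below likely need to be reviewed together rather than one at a time.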
The following parameters were also flagged with similar warnings:
mapreduce.map.memory.mb
mapreduce.reduce.java.opts
mapreduce.reduce.memory.mb
mapreduce.task.io.sort.mb
yarn.app.mapreduce.am.resource.mb
tez.am.resource.memory.mb
tez.runtime.io.sort.mb
tez.runtime.unordered.output.buffer.size-mb
tez.task.resource.memory.mb
hive.auto.convert.join.noconditionaltask.size
hive.tez.container.size
output.replace-datanode-on-failure
Why were these settings not set to the recommended defaults at install time?