04-20-2017 06:47 AM
Can you translate your issue into English? Also, if it is not related to Java heap space, I would recommend creating a new thread instead, so that it is easier to track and for others to contribute as well.
04-21-2017 12:29 AM
Thanks for replying.
I updated the file directly instead of via Cloudera Manager, and my problem is resolved now :) Thank you so much. But I have another question: I am running Cloudera with the default configuration on a one-node cluster, and I would like to find where HDFS stores files locally. I created a file in HDFS with Hue, but when I look at /dfs/nn it is empty; I can't find the file that I already created.
04-21-2017 07:14 AM
The default path is /opt/hadoop/dfs/nn.
You can confirm this in Cloudera Manager -> HDFS -> Configuration -> search for "dfs.namenode.name.dir".
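You can also query the effective value from the command line with hdfs getconf (a standard HDFS client command). One thing to keep in mind: dfs.namenode.name.dir holds only the NameNode's metadata (fsimage and edit logs); the contents of files you create are stored as blocks under the DataNode data directories, so you would not expect to see your Hue-created file under the NameNode directory.
# Where the NameNode keeps its metadata (fsimage/edits)
hdfs getconf -confKey dfs.namenode.name.dir
# Where the DataNode stores the actual blocks of files you create
hdfs getconf -confKey dfs.datanode.data.dir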
04-21-2017 07:24 AM
The path /opt/hadoop/dfs/nn does not exist,
and when I look for the file that I already created, I can't find it at that path.
04-21-2017 07:28 AM
As mentioned already, please create a new topic for a new issue, as it may mislead others.
Also, please read the full answer before replying, so that you will get the desired answer.
09-26-2017 06:42 AM
The last reducer of my MapReduce job fails with the error below.
2017-09-20 16:23:23,732 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.regex.Matcher.<init>(Matcher.java:224)
    at java.util.regex.Pattern.matcher(Pattern.java:1088)
    at java.lang.String.replaceAll(String.java:2162)
    at com.sas.ci.acs.extract.CXAService$myReduce.parseEvent(CXAService.java:1612)
    at com.sas.ci.acs.extract.CXAService$myReduce.reduce(CXAService.java:919)
    at com.sas.ci.acs.extract.CXAService$myReduce.reduce(CXAService.java:237)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-09-20 16:23:23,834 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ReduceTask metrics system...
2017-09-20 16:23:23,834 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system stopped.
2017-09-20 16:23:23,834 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system shutdown complete.
Current settings:
mapreduce.map.java.opts = -Djava.net.preferIPv4Stack=true -Xmx3865051136
mapreduce.reduce.java.opts = -Djava.net.preferIPv4Stack=true -Xmx6144067296
1) Do you recommend increasing the following properties to the values below?
"mapreduce.map.java.opts","-Xmx4g"
"mapreduce.reduce.java.opts","-Xmx8g"
2) These are my current map and reduce memory settings. Do I also need to bump up my reduce memory to 10240m?
mapreduce.map.memory.mb = 8192
mapreduce.reduce.memory.mb = 8192
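(For reference, these values can be read from the deployed client configuration on a gateway host; the path below is the usual Cloudera Manager client-config location and may differ on other setups.)
# Print each property's <name> line plus the <value> line that follows it
grep -A1 -E 'mapreduce\.(map|reduce)\.(memory\.mb|java\.opts)' /etc/hadoop/conf/mapred-site.xml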
09-26-2017 12:07 PM
I would not recommend changing your cluster settings; instead, you can pass the memory and Java opts when you execute your jar.
Ex: below are some sample values; change them as needed.
hadoop jar ${JAR_PATH} ${CONFIG_PATH}/filename.xml ${ENV} ${ODATE} mapMem=12288 mapJavaOpts=Xmx9830 redurMem=12288 redurJavaOpts=Xmx9830
Note:
mapJavaOpts = mapMem * 0.8
redurJavaOpts = redurMem * 0.8
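Note that mapMem, redurMem, and the *JavaOpts arguments in the command above are parsed by that particular application jar; they are not standard Hadoop options. If your driver goes through ToolRunner, the equivalent per-job override uses the standard -D generic options, which must come before the application arguments. A minimal sketch with the same sample values, where 9830 follows the 0.8 heuristic (12288 * 0.8 ≈ 9830):
# Per-job override via generic options (assumes the driver uses ToolRunner)
hadoop jar ${JAR_PATH} \
    -Dmapreduce.map.memory.mb=12288 \
    -Dmapreduce.map.java.opts=-Xmx9830m \
    -Dmapreduce.reduce.memory.mb=12288 \
    -Dmapreduce.reduce.java.opts=-Xmx9830m \
    ${CONFIG_PATH}/filename.xml ${ENV} ${ODATE}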
10-05-2017 12:31 PM
What are the implications of increasing mapreduce.map/reduce.memory.mb and mapreduce.reduce.java.opts to a higher value in the cluster itself?
One of them would be that jobs that do not need the additional memory would still get it, which is of no use.
Other jobs running at the same time may be impacted.
Anything else?
10-06-2017 09:11 AM
To add to your point, a cluster-wide setting applies to every MapReduce job, and the memory it reserves may also impact other, non-MapReduce workloads on the cluster.
That said, I am not against setting a higher value in the cluster itself, but base that decision on how many jobs actually require the higher values, their performance needs, and so on.