Member since: 08-08-2013
Posts: 35
Kudos Received: 4
Solutions: 5

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 10827 | 09-17-2014 10:12 AM |
| | 9596 | 08-12-2014 11:38 AM |
| | 3689 | 04-03-2014 10:44 AM |
| | 11040 | 03-19-2014 02:18 PM |
12-25-2017
10:54 AM
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.6.0-cdh5.8.0.jar \
  -input /user/root/in/purchases.txt \
  -output /foo2 \
  -mapper 'python mapper.py' -file mapper.py \
  -numReduceTasks 1 \
  -reducer 'python reducer.py' -file reducer.py
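For anyone reusing this command, here is a rough sketch of what mapper.py and reducer.py might contain. The column layout of purchases.txt (tab-separated, store name in field 3, sale amount in field 5) is an assumption, so adjust the parsing and key choice to your actual data:

```python
#!/usr/bin/env python
# mapper.py -- hypothetical sketch; assumes purchases.txt is tab-separated
# with the store name in column 3 and the sale amount in column 5.
import sys

for line in sys.stdin:
    fields = line.strip().split("\t")
    if len(fields) == 6:
        store, cost = fields[2], fields[4]
        print("%s\t%s" % (store, cost))
```

```python
#!/usr/bin/env python
# reducer.py -- hypothetical sketch; sums the values for each key emitted
# by mapper.py. Hadoop Streaming delivers the input sorted by key.
import sys

current_key = None
total = 0.0

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    key, value = line.split("\t", 1)
    if key != current_key:
        if current_key is not None:
            print("%s\t%s" % (current_key, total))
        current_key = key
        total = 0.0
    total += float(value)

if current_key is not None:
    print("%s\t%s" % (current_key, total))
```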
03-07-2016
10:58 AM
1 Kudo
I ran into this error, and it was caused by the NodeManager running out of heap. I increased the NodeManager heap size, and YARN came up without errors.
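For anyone hitting the same symptom, one quick way to see how close the NodeManager heap is to its limit is the daemon's /jmx servlet. A minimal diagnostic sketch, assuming the NodeManager web UI is on its default port 8042 (the host name below is a placeholder):

```python
#!/usr/bin/env python3
# Hypothetical diagnostic: read the NodeManager's JVM memory bean via the
# /jmx servlet and print heap usage. NM_HOST is a placeholder; 8042 is the
# default NodeManager web UI port.
import json
from urllib.request import urlopen

NM_HOST = "nodemanager.example.com"   # replace with a real NodeManager host
NM_PORT = 8042

url = "http://%s:%d/jmx?qry=java.lang:type=Memory" % (NM_HOST, NM_PORT)
beans = json.load(urlopen(url))["beans"]
heap = beans[0]["HeapMemoryUsage"]

print("heap used: %d MB" % (heap["used"] // (1024 * 1024)))
print("heap max:  %d MB" % (heap["max"] // (1024 * 1024)))
```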
11-05-2014
06:29 AM
I have resolved this issue. To be clear about the hdfs, yarn, and mapred users: I know they are blocked from submitting jobs by default, but as you also know, min.user.id and the allowed-user list exist for exactly that case, so the issue was not about the user or the job.

I monitored it many times: when just one container started, it died automatically after a few seconds, whereas in the normal state the job launches 3-4 containers in my environment. So I was sure the issue was that containers could not run normally. But why? Since only one container had started, I checked that container's log, but I could not find anything beyond the errors I showed above. I also explained that when Sqoop executes normally it creates a directory under the usercache directory, but when the Sqoop job fails it does not, so I guessed that directory had some problem, though of course I did not know the exact reason.

I then removed NameNode HA, leaving one NameNode and one Secondary NameNode as the default, and ran Sqoop again. It failed too, but this time the log was more readable and showed a "NOT INITIALIZE CONTAINER" error, which made me more confident that the job really could not launch containers.

Finally, I stopped the whole cluster, deleted /yarn/* on the DataNodes and the NameNode, and started the cluster again. It works fine now. I still don't know why HDFS or YARN could not launch the containers, but the problem has been resolved.
09-17-2014
11:25 AM
You pointed out the problem and I removed the -Xmx825955249 from where I had entered it in Cloudera Manager. I was using the wrong field to update the value. Thank you so much for sticking with me and helping me resolve this issue! The jobs now succeed! Kevin Verhoeven
08-18-2014
03:29 PM
We are still experiencing periodic problems with applications hanging when a number of jobs are submitted in parallel. We have reduced 'maxRunningApps', increased the virtual core count, and also increased 'oozie.service.callablequeueservice.threads' to 40. In many cases the applications do not hang; however, this is not consistent. Regarding YARN-1913 (https://issues.apache.org/jira/browse/YARN-1913), is this patch incorporated in CDH 5.1.0, the version we are using? YARN-1913 lists the affected version as 2.3.0 and the fix version as 2.5.0. Our Hadoop version in CDH 5.1.0 is 2.3.0. Thank you, Michael Reynolds
04-03-2014
10:44 AM
1 Kudo
Can you go into CM and add "/etc/hadoop/conf" as the *very first* entry in yarn.application.classpath? (You'll need to click the plus sign and move $HADOOP_CONF_DIR to the next slot. A YARN restart is required.)
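After the restart, one way to confirm the new ordering took effect is to pull the live value back from the ResourceManager. A minimal sketch, assuming the ResourceManager web UI is on its default port 8088 and its /conf servlet is reachable (the host name below is a placeholder):

```python
#!/usr/bin/env python3
# Hypothetical check: fetch the ResourceManager's live configuration from
# its /conf servlet and print yarn.application.classpath, so you can see
# whether /etc/hadoop/conf really ended up first. RM_HOST is a placeholder.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

RM_HOST = "resourcemanager.example.com"  # replace with your RM host
URL = "http://%s:8088/conf" % RM_HOST

root = ET.parse(urlopen(URL)).getroot()
for prop in root.findall("property"):
    if prop.findtext("name") == "yarn.application.classpath":
        for entry in prop.findtext("value").split(","):
            print(entry.strip())
```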
03-31-2014
01:14 AM
I have set "yarn.nodemanager.delete.debug-delay-sec" to 6000, and the container log dirs are:

  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/hadoop/hadoop-2.0.0-cdh4.5.0/yarn/containers</value>
  </property>
  <property>
    <description>Where to aggregate logs</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/var/log/hadoop-yarn/app</value>
  </property>

The directory /hadoop/hadoop-2.0.0-cdh4.5.0/yarn/containers contains nothing after running the task. I found that the configuration in yarn-site.xml never takes effect.
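In case it helps with debugging this kind of "setting never takes effect" problem, here is a small sketch that prints the values actually present in a given yarn-site.xml. The path below is an assumption; adjust it to whichever configuration directory your NodeManagers actually read:

```python
#!/usr/bin/env python3
# Hypothetical helper: print selected properties from a yarn-site.xml to
# confirm which file the NodeManager is really reading. The path is an
# assumption; point it at the config directory your daemons use.
import xml.etree.ElementTree as ET

CONF_FILE = "/etc/hadoop/conf/yarn-site.xml"
WANTED = {
    "yarn.nodemanager.delete.debug-delay-sec",
    "yarn.nodemanager.log-dirs",
    "yarn.nodemanager.remote-app-log-dir",
}

root = ET.parse(CONF_FILE).getroot()
for prop in root.findall("property"):
    name = prop.findtext("name")
    if name in WANTED:
        print("%s = %s" % (name, prop.findtext("value")))
```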