Member since: 01-16-2014
Posts: 336
Kudos Received: 43
Solutions: 31
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3393 | 12-20-2017 08:26 PM |
| | 3371 | 03-09-2017 03:47 PM |
| | 2841 | 11-18-2016 09:00 AM |
| | 5007 | 05-18-2016 08:29 PM |
| | 3850 | 02-29-2016 01:14 AM |
05-15-2015
12:52 AM
Please provide the log from the RM, showing the error and the information before and after it. This works for me without an issue in all of my test clusters. If it contains private information, please send it through a private message. Wilfred
05-13-2015
06:38 PM
1 Kudo
There have been a number of issues in the RM related to ZooKeeper connections. At least a couple of issues are fixed in CDH 5.3.3 (YARN-3242, YARN-2992). I am not sure whether your case is fully covered by these fixes, since we are still working on one or two more in this area, but upgrading to CDH 5.3.3 will help with a number of these ZK issues in the RM. Wilfred
05-13-2015
06:22 PM
2 Kudos
The comment on the setting in CM should have explained it for you: "Maximum size in bytes for the Java Process heap memory. Passed to Java -Xmx." You cannot set that in a configuration file, because the JVM is started before the configuration file is read; the heap size must be specified on startup. CM passes the value to the agent, and the agent then builds the java command line. The only place the values are stored is in CM. Under normal circumstances in a small cluster, let's say 10 nodes, 2 GB should be enough. In an average-size cluster of around 50 nodes, an RM should not need more than 4 GB. In a large cluster, a hundred or more NMs, you need to increase that further. For an NM you can normally leave it at 1 GB on small nodes or 2-4 GB on large nodes. The number of containers you can run on a node depends on the size of the node. Wilfred
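As a rough sketch of how the CM value ends up on the command line (the agent's real invocation carries many more flags; the heap variable name here is illustrative):

```shell
# The CM agent injects the configured heap size as -Xmx when it builds
# the ResourceManager's java command line; conceptually:
RM_HEAP="2g"   # the value configured in Cloudera Manager

java -Xmx${RM_HEAP} \
     org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
```

This is why editing yarn-site.xml or any other config file has no effect on the heap: by the time those files are read, the JVM size is already fixed.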
05-13-2015
06:05 PM
Have you included your HBase configuration on your classpath when you start the job? The HBase configuration is needed for the job to run. There are multiple ways to do this, and we have a Knowledge Base article available for it if you are a subscription customer. Otherwise check the standard documentation: the CDH docs and the HBase docs. In the HBase docs, check the comments in the examples; they all mention it. Wilfred
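One common way to do this (a sketch, assuming the `hbase` command is on your PATH; the jar and class names are illustrative):

```shell
# `hbase classpath` prints everything HBase needs, including its conf dir.
# Exporting it makes the HBase configuration visible to the submitted job.
export HADOOP_CLASSPATH="$(hbase classpath)"

# Alternatively, point at the conf directory directly:
# export HADOOP_CLASSPATH=/etc/hbase/conf:$HADOOP_CLASSPATH

hadoop jar my-hbase-job.jar com.example.MyJob
```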
05-13-2015
05:33 PM
Those symptoms sound like YARN-3351, but the stack trace is not the same, and that should have been fixed in the release you have as well. Can you give a bit more detail on where you saw the issue? Is this logged in the RM? Do you have HA for the RM, etc.? I cannot really tell from this short snippet what is happening, but this is not a known issue. Wilfred
02-19-2015
10:33 PM
1 Kudo
We have made a configuration change in Cloudera Manager 5.2.1 which solves this issue. After upgrading, the files will be moved to a different area that is not affected by the tmp cleaner. Wilfred
11-20-2014
05:09 PM
Miraculous, How have you set up your NMs: how much memory and how many cores are available on the nodes? This sounds like you do not have enough space to start the containers. Give the map and reduce containers 1 GB and 1 vcore to start with. Check the NM config values: the yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores settings. Retry it after that. Wilfred
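A minimal sketch of those two NM settings in yarn-site.xml (the values are illustrative; size them to the node's actual RAM and cores):

```xml
<!-- yarn-site.xml: resources the NodeManager can hand out to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>   <!-- e.g. 8 GB of this node's RAM for containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>      <!-- e.g. 8 virtual cores for containers -->
</property>
```

With 1 GB / 1 vcore containers, the example above would allow up to eight containers per node, assuming memory is the limiting resource.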
11-20-2014
05:03 PM
Praveen, This does not look like the NM recovery issue. For this case, can you tell me when this happens? It sounds and looks like the agent trying to change the permissions during the distribution. The two files have special settings, and as dlo said in his update, it is most likely a non-executable mount or a directory permission issue. Wilfred
11-20-2014
04:59 PM
Hi Harsha, This is a known issue with the NM and restart recovery turned on. We are not 100% sure how and why it happens yet and are looking for as much data as we can get. Before we fix this, please make a copy of the whole directory and zip it up: tar czf yarn-recovery.tgz /tmp/hadoop-yarn After you have done that, remove the directory and start it again. Can you also tell me how long the NM was up for, and whether you have a /tmp cleaner running on that host? Thank you, Wilfred
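The steps above as a shell sketch (the archive name and recovery path are the ones from the post; the restart command varies by installation and is illustrative):

```shell
# 1. Archive the NM recovery state so it can be analysed later
tar czf yarn-recovery.tgz /tmp/hadoop-yarn

# 2. Remove the recovery directory
rm -rf /tmp/hadoop-yarn

# 3. Restart the NodeManager (exact command depends on your install),
#    for example via the service wrapper:
# service hadoop-yarn-nodemanager restart
```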
06-21-2014
05:19 AM
1 Kudo
No, that is not possible. The date structure and the index number are a required part of the history path. It is hard coded in the job history server both to create them and to use them while looking for the files. Wilfred
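For illustration, a finished job's history file ends up under a path of roughly this shape (the root is configurable; the date directories and the serial index are the hard-coded parts; the job id and file name here are placeholders):

```
<done-dir-root>/done/2015/06/21/000000/job_<cluster-ts>_<seq>-<...>.jhist
```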
... View more