Member since: 10-03-2016
Posts: 9
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1894 | 02-15-2017 04:34 PM
 | 6265 | 11-02-2016 01:34 PM
02-16-2017
06:31 PM
It helped us troubleshoot the issue, but we could not figure out why it was happening. We went back to installing a new cluster on RHEL 7.2, and that is working fine. We still need to figure out why RHEL 7.3 causes this issue. Will post an update if we ever figure it out.
02-15-2017
04:34 PM
Thanks, that's helpful. We upgraded to RHEL 7.3; it looks like that is not yet supported by HDP 2.5.3.
02-14-2017
12:25 AM
Thanks. I am starting all the services by clicking Start All in Ambari, and I am facing the same issue with all of the services. Yes, the environment variable is using the default and is not overridden. For example, for the Timeline Server:

drwxr-xr-x 2 yarn hadoop 40 Feb 13 18:45 yarn
-rw-r--r-- 1 yarn hadoop 6 Feb 13 18:48 yarn--resourcemanager.pid
-rw-r--r-- 1 yarn hadoop 5 Feb 13 18:47 yarn--timelineserver.pid
$ ls yarn
$
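For context on why the double hyphen appears: the daemon scripts and Ambari's *-env templates build both the pid directory and the pid file name from $USER, so an empty $USER produces exactly the names shown above. A minimal sketch of that logic, assuming stock yarn-daemon.sh behavior and the usual Ambari yarn-env template (not copied verbatim from this cluster):

```bash
# Sketch only: how the pid path degrades when $USER is empty.
# Ambari's yarn-env.sh commonly builds the pid dir as <prefix>/$USER, and
# yarn-daemon.sh falls back to $USER for the identity string.
USER=""                                         # what appears to happen after the reboot
YARN_PID_DIR="/var/run/hadoop-yarn/$USER"       # -> /var/run/hadoop-yarn/
YARN_IDENT_STRING="${YARN_IDENT_STRING:-$USER}" # -> empty string
command="timelineserver"
pid="$YARN_PID_DIR/yarn-$YARN_IDENT_STRING-$command.pid"
echo "$pid"   # -> /var/run/hadoop-yarn//yarn--timelineserver.pid
```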
02-13-2017
06:06 PM
Hi, we are using HDP 2.5.0.0 on RHEL 7. The cluster had been up and running for about six months. We had to reboot the cluster a couple of days back and noticed that the service status is not correctly reflected in Ambari. If I run ps -ef on the box, I see that the process is running. Upon digging further, I find that the $USER variable in the scripts is not getting resolved. As a result, the pid files are created in the wrong directory with an incorrect name, e.g. /var/run/hadoop-mapreduce/mapred--historyserver.pid instead of /var/run/hadoop-mapreduce/mapred/mapred-mapred-historyserver.pid. Any pointers on troubleshooting this? TIA. The error reported:

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su mapred -l -s /bin/bash -c 'ls /var/run/hadoop-mapreduce/mapred/mapred-mapred-historyserver.pid && ps -p `cat /var/run/hadoop-mapreduce/mapred/mapred-mapred-historyserver.pid`'' returned 2.
ls: cannot access /var/run/hadoop-mapreduce/mapred/mapred-mapred-historyserver.pid: No such file or directory
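One way to narrow this down is to check what $USER resolves to in the exact login-shell environment the Ambari agent uses for the service account. A small diagnostic sketch, reusing the ambari-sudo.sh / su -l invocation from the error above (the paths and the mapred account are taken from this post; nothing else is assumed):

```bash
# See what $USER resolves to in a login shell for the mapred account,
# invoked the same way the Ambari agent does it.
ambari-sudo.sh su mapred -l -s /bin/bash -c 'echo "USER=[$USER]"; id -un; whoami'

# Compare against the pid files that were actually created; a double hyphen
# (mapred--historyserver.pid) is what you get when $USER expands to nothing.
ls -l /var/run/hadoop-mapreduce/ /var/run/hadoop-mapreduce/mapred/
```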
Labels: Apache Ambari
11-02-2016
01:34 PM
This issue was the result of a misconfiguration: one of the nodes had an IP address conflict with another server. Once the IP address was corrected, we were able to review the logs and address the classpath errors. Thanks, everyone, for the suggestions.
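For anyone hitting a similar symptom, duplicate address detection with arping is one quick way to confirm an IP conflict before digging into Hadoop logs. A rough sketch, assuming the iputils arping; the interface name and address below are placeholders:

```bash
# -D runs duplicate address detection: a reply from another MAC for this
# address means some other machine is already using it.
arping -D -I eth0 -c 3 192.0.2.10

# Cross-check the addresses this host believes it owns.
ip -4 addr show
```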
10-04-2016
04:09 PM
Thank you for the suggestion. This appears plausible. We will allocate additional memory and post an update.
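For anyone following along: the memory in question here is the MapReduce ApplicationMaster container. A sketch of one way to trial a larger AM allocation on a sample job (the property names are standard Hadoop ones; the values and the examples-jar path are illustrative, not a recommendation for this cluster):

```bash
# Run the bundled pi example with a larger ApplicationMaster container and heap,
# purely to see whether the AM failure disappears. Values are examples only.
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi \
  -D yarn.app.mapreduce.am.resource.mb=2048 \
  -D yarn.app.mapreduce.am.command-opts=-Xmx1638m \
  10 100
```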
10-04-2016
04:08 PM
Thank you for the suggestion. I have verified the hostname and port for mapreduce.jobhistory.address, and they are accurate. We are using a copy of the config from the cluster.
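A quick way to double-check both the configured value and the reachability of the JobHistory server from the client (the config path below is the usual HDP location; the hostname and port are placeholders to substitute):

```bash
# Show the configured JobHistory address in the client copy of mapred-site.xml.
grep -A1 'mapreduce.jobhistory.address' /etc/hadoop/conf/mapred-site.xml

# Verify the host/port is reachable from this node (10020 is the common default;
# use whatever value the grep above reports).
nc -vz historyserver.example.com 10020
```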
10-03-2016
07:37 PM
2 Kudos
We installed and configured HDP 2.5.0.0. When we run a MapReduce job, it fails with the diagnostic message "We crashed durring a commit", yet the status of the Map and Reduce tasks indicates that they were successful. Below is the error observed in the syslog output. How do we troubleshoot this error? Any tips will be appreciated.

2016-10-03 15:19:20,017 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.io.IOException: Was asked to shut down.
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1559)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1553)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1486)
2016-10-03 15:19:20,021 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1
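A note for anyone troubleshooting a similar failure: the FATAL line above comes from the MRAppMaster itself, so the aggregated ApplicationMaster container log usually carries more detail than the job's diagnostics string. A sketch of pulling it after the job has finished (the application ID is a placeholder):

```bash
# Find the failed job's application ID.
yarn application -list -appStates FAILED,KILLED,FINISHED

# Fetch the aggregated container logs, which include the AM attempt's syslog.
# Replace the placeholder ID with the real one from the listing above.
yarn logs -applicationId application_1475000000000_0001 | less
```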
Labels: Apache YARN