Created 01-03-2016 08:38 PM
Hi all!
I just downloaded the latest sandbox.hortonworks.com HDP 2.3.2 VM, but after starting it and letting it run for an hour I still have several alerts on the Ambari dashboard! Has anybody seen the same problem?
Here is what I can copy from the Ambari dashboard:
Connection failed on host sandbox.hortonworks.com:10000 (Execution of 'ambari-sudo.sh su ambari-qa -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbi
Metastore on sandbox.hortonworks.com failed (Execution of 'ambari-sudo.sh su ambari-qa -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbin:/usr/lib/a
Ambari
There are 21 stale alerts from 1 host(s): History Server RPC Latency, HiveServer2 Process, Ranger Usersync Process, WebHCat Server Status, Ambari Agent Disk Usage, History Server Web UI, App Timeline Web UI, Hive Metastore Process, Atlas Metadata Server Process, History Server CPU Utilization, History Server Process, ResourceManager Web UI, NodeManager Health Summary, ZooKeeper Server Process, ResourceManager CPU Utilization, NodeManager Health, Flume Agent Status, Ranger Admin Process, NodeManager Web UI, ResourceManager RPC Latency, Metadata Server Web UI CRIT
for about an hour
I don't understand why the VM has multiple alerts out of the box!
Is there a way to fix this, or is there a VM without any errors?
Created 01-21-2016 08:44 PM
Today I spoke with Robert Molina from Hortonworks, and we may have found what is creating all those alerts!
The sandbox is intended to run on a desktop with a NAT network interface. I set it up on a dedicated headless server with a bridged adapter. It looks like the sandbox has a problem with that setup, which causes some of the service configurations to not function properly. As a result, some services work but report network-connection alerts. After a few configuration changes, the related alerts were gone. So always run a VM the way it was intended to be used.
Thanks to the Hortonworks team, and to Robert, who wanted to get to the bottom of this.
Conclusion: if, like me, you want to test-drive Hortonworks on a headless server, start from scratch and build it yourself! That's what every sysadmin should do anyway... and that's what I'll do this weekend...
P
Created 01-20-2016 06:04 PM
I have found out how to resolve the "Connection failed on host sandbox.hortonworks.com" failures on my sandbox.
The fix is to add sandbox.hortonworks.com to your no_proxy variable in /etc/profile.
It turns out I get this error because I am running the sandbox behind a corporate proxy and have http_proxy and https_proxy set. Even though sandbox.hortonworks.com resolves to a local IP via the /etc/hosts file, that local IP is not always localhost; with NAT it can be an address in the 10.0.x.x range. The health-check request therefore hits the proxy first, which results in the failure.
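For anyone else behind a proxy, the change above can be sketched as the following lines appended to /etc/profile (a minimal sketch; the 10.0.2.15 address is the typical VirtualBox NAT guest IP and is an assumption, so substitute your VM's actual address):

```shell
# Exclude the sandbox hostname from proxying so Ambari health checks
# reach the VM directly instead of going through the corporate proxy.
# 10.0.2.15 is a guess at the NAT guest IP; adjust for your setup.
export no_proxy="localhost,127.0.0.1,sandbox.hortonworks.com,10.0.2.15"
export NO_PROXY="$no_proxy"   # some tools only read the uppercase variant
```

After editing, log out and back in (or `source /etc/profile`) so the variables take effect in new shells.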
Created 01-20-2016 06:10 PM
You can check the sandbox setup guide here: https://community.hortonworks.com/articles/10788/hortonworks-sandbox-setup.html
Created 01-20-2016 07:47 PM
@Peter Young After contacting support, someone posted the answer to the stale alerts here:
https://community.hortonworks.com/questions/9762/how-to-get-rid-of-stale-alerts-in-ambari.html