Member since: 06-24-2014
Posts: 45
Kudos Received: 9
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 965 | 06-27-2016 08:57 PM |
07-01-2016 10:01 AM
You use the phrase "Create a Local Ambari Repository". That is not what you are doing: you are merely pointing your machines at an external repository. It would be great if you did create a local Ambari repository, but that would require a lot more explanation. I had a quick look at Nexus for this, but found it was not trivial to set up for my purposes.
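For what it's worth, here is a minimal sketch of what a genuinely local repository might look like on a CentOS/RHEL box, assuming yum-utils (reposync), createrepo, and httpd are available; the repo ID, paths, and hostname are illustrative, not from the article:

```bash
# Mirror every package from the upstream Ambari repo into the local web root
# (the repo ID must match an entry already configured in /etc/yum.repos.d/)
reposync --repoid=ambari-2.x --download_path=/var/www/html/repos/

# Generate yum metadata so the mirror is a usable repository
createrepo /var/www/html/repos/ambari-2.x/

# Serve the mirror over HTTP
systemctl start httpd

# On each cluster node, edit ambari.repo so baseurl points at the mirror, e.g.
#   baseurl=http://repohost.mycluster/repos/ambari-2.x/
```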
06-28-2016 07:01 PM
To partly answer my own question: Hadoop doesn't need DNS if all the machines are already in the hosts file. So at some point in the above, the /etc/hosts file was populated with:

192.168.0.11 ambari1.mycluster ambari1
192.168.0.12 master1.mycluster master1
192.168.0.21 slave1.mycluster slave1
192.168.0.22 slave2.mycluster slave2
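In case anyone wants to script that step, here is a small sketch that pushes the same entries onto a node without duplicating lines already present (the entries are the ones from the post; everything else is generic shell):

```bash
# Append each cluster host entry to /etc/hosts unless it is already there
while read -r entry; do
  grep -qF "$entry" /etc/hosts || echo "$entry" | sudo tee -a /etc/hosts > /dev/null
done <<'EOF'
192.168.0.11 ambari1.mycluster ambari1
192.168.0.12 master1.mycluster master1
192.168.0.21 slave1.mycluster slave1
192.168.0.22 slave2.mycluster slave2
EOF
```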
06-28-2016 06:49 PM
This is great. It will save me lots of time. I am trying this on an Ubuntu host, not a Mac, and everything is fine until I get down to trying to access ambari1 through the web. I don't have ambari1 in DNS anywhere. I can "vagrant ssh ambari1" and find the IP address, but presumably that won't let me install without FQDNs. Any ideas? Thanks again.
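(A generic check, not from this thread: Ambari's FQDN requirement is really about name resolution on the nodes themselves, which you can verify from inside each VM.)

```bash
# Inside the VM ("vagrant ssh ambari1"), confirm the node knows its own FQDN
hostname -f    # expect something like ambari1.mycluster

# Confirm the peers resolve without DNS, i.e. via /etc/hosts
getent hosts master1.mycluster slave1.mycluster slave2.mycluster
```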
06-27-2016 08:57 PM
OK, sorry, I solved this. I discovered that the Sqoop client was on those two machines and had not been restarted. I may have forgotten it when doing the restarts manually.
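As a footnote for anyone else cleaning this up: client components can also be restarted through Ambari's REST API rather than the UI. A sketch, with the cluster name, hosts, and credentials as placeholders:

```bash
# Ask Ambari to restart the Sqoop client component on the two affected hosts
curl -u admin:PASSWORD -H 'X-Requested-By: ambari' -X POST \
  -d '{
        "RequestInfo": {"context": "Restart Sqoop client", "command": "RESTART"},
        "Requests/resource_filters": [{
          "service_name": "SQOOP",
          "component_name": "SQOOP",
          "hosts": "host1.example.com,host2.example.com"
        }]
      }' \
  http://ambari-host:8080/api/v1/clusters/CLUSTERNAME/requests
```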
06-27-2016 08:54 PM
[EDIT: Solved, but I am leaving the question in case anyone else hits this strange error message.]

I am upgrading an HDP cluster from 2.2 to 2.3 using a manual upgrade process. I have reached the end and performed the final step, but Ambari tells me that two of my machines are not upgraded. I can't see what the problem is, nor can I fix it. Any ideas?

[root@hdp01 hdp]# ambari-server set-current --cluster-name=owalhdp --version-display-name=HDP-2.3.0.0
Using python /usr/bin/python2.6
Setting current version...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
ERROR: Exiting with exit code 1.
REASON: Error during setting current version. Http status code - 500.
{
  "status" : 500,
  "message" : "org.apache.ambari.server.controller.spi.SystemException: Finalization failed. More details: \nSTDOUT: Begin finalizing the upgrade of cluster owalhdp to version 2.3.0.0-2557\nThe following 2 host(s) have not been upgraded to version 2.3.0.0-2557. Please install and upgrade the Stack Version on those hosts and try again.\nHosts: hdp01.FQDN, hdp04.FQDN\n\nSTDERR: The following 2 host(s) have not been upgraded to version 2.3.0.0-2557. Please install and upgrade the Stack Version on those hosts and try again.\nHosts: hdp01.FQDN, hdp04.FQDN\n"
}

Nothing in my manual process tells me how to install and upgrade the Stack Version on individual hosts, so I am at a loss for what to do.
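For anyone who hits this before finding their own fix, one generic way to inspect the flagged hosts is hdp-select, which ships with HDP 2.2+; the build number below is the one from the error message above:

```bash
# On each host that finalize complained about (hdp01, hdp04):

# List every stack build installed on this host
hdp-select versions

# Show which build each component's symlinks currently point at
hdp-select status

# If 2.3.0.0-2557 is installed but not selected, this points everything at it
# (a sketch only -- on a live cluster, follow the manual-upgrade doc for your release)
hdp-select set all 2.3.0.0-2557
```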
04-28-2016 02:26 PM
Thanks, people. That is very helpful. It sounds like I have some learning to do 🙂
04-28-2016 10:49 AM
I am thinking about setting up a Logstash infrastructure to monitor my system. (It happens to be a Hortonworks HDP Hadoop cluster, but assume it isn't.) So I have various things that generate logs, and I want to transfer these logs outside my system, securely, to a new system such as Elasticsearch fed by Logstash. I don't really want Flume for this, as there are better tools.

Now I might use Logstash forwarders, which most recently seem to have become a new system called "Beats", in particular Filebeat. However, I would prefer to use Apache NiFi because of its security reputation, and I would like to use HDF, as I am a Hortonworks Partner and we are already using HDP.

Can anyone say "Yes, this makes sense", "Yes, I have done it", or "You need to read URL blah blah blah"? Or have I got the wrong end of the stick?

PS: I know that Ambari Metrics moves operational logs from the Hadoop cluster into the HDFS system; this is separate from that.
Labels: Cloudera DataFlow (CDF)
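For concreteness on the Beats option mentioned above, here is a sketch of roughly what the Filebeat side would look like (1.x-era YAML; the hostnames, log paths, and CA path are all placeholders):

```bash
# Drop a minimal Filebeat config that tails logs and ships them to Logstash over TLS
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/hadoop/*/*.log
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-ca.crt"]
EOF

service filebeat restart
```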
01-07-2016 02:27 PM
Thanks! Sounds like my sandbox is out of date now 🙂
01-07-2016 11:34 AM
I've been trying to figure out the best/most appropriate Spark notebook (web-based IDE) for use on an HDP cluster. Is there one available through Hortonworks packages?

I have found a number of training lessons from Hortonworks that talk about installing IPython Notebook (which can work for Python+Spark as well as Scala+Spark). However, they seem to assume you are on a sandbox, compiling and installing the software they demand, not checking what is already there and not using packages.

I have also seen mentions of Zeppelin, an Apache incubating project. This is good too, but I have the same problem: I have to build it myself, and it doesn't build for me. There are also third-party notebooks, such as the cloud-based one from Databricks.

Any recommendations? What has worked for you?
Labels: Apache Spark
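One route that works without the sandbox recipes is to make the notebook itself the PySpark driver. A sketch for an HDP edge node, assuming pip and the Spark client are already installed (nothing below comes from a Hortonworks package):

```bash
# Install Jupyter (the successor to IPython Notebook) for the cluster's Python
pip install jupyter

# Tell pyspark to start the notebook server as its driver process
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook --no-browser --ip=0.0.0.0'

# Launch against YARN; "sc" is then available inside each notebook
pyspark --master yarn-client
```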