Member since: 06-10-2016
Posts: 8
Kudos Received: 0
Solutions: 0
07-01-2016
10:32 AM
@vpoornalingam One more question: where should I place the History Server, App Timeline Server, and ResourceManager?
06-30-2016
05:27 AM
@vpoornalingam Many thanks for the prompt answer!
06-30-2016
05:26 AM
@Scott Shaw Thanks for the prompt answer! Among the frameworks required for this cluster (Hadoop, Hive, Pig, Oozie, HBase, ZooKeeper, Spark, Storm, Sqoop, Kafka), is there any classification by how I/O-, compute-, or memory-intensive each one is? I might be wrong, but are there frameworks in the list that would be both I/O intensive and memory intensive? Regards, Rahul
06-29-2016
11:22 AM
Dear folks,

I am currently trying to set up HDP 2.4 on a small cluster for carrying out PoC activities, but I am unsure what heuristics to use for assigning masters, slaves, and clients after launching the install wizard. I started with the documentation provided here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/ch_Getting_Ready.html

Description of the cluster: a small cluster of 8 machines, each with 8 GB RAM, 6-8 cores, and 500 GB of disk. One machine is used for Ambari; the remaining 7 machines are for the NameNode, Secondary NameNode, and DataNodes. All nodes run CentOS 6. Availability and reliability are not a concern, as this is a PoC cluster where some algorithms will be tested for functionality.

Frameworks required on the cluster: Hadoop, Hive, Pig, Oozie, HBase, ZooKeeper, Spark, Storm, Sqoop, Kafka.

To get my feet wet, I chose Ambari and HDP 2.4.0, and the ease of deploying a cluster has been a positive experience so far, thanks to the good documentation and my decent knowledge of Linux. Going forward, I would like to hear from experts about the heuristics and logic they use for assigning masters and slaves. Most of the resources I have found in this community and elsewhere discuss heuristics based on system configuration (RAM, disk, and cores); they reach sound conclusions for a heterogeneous cluster, and the takeaways are important heuristics that can make clusters efficient. But given a homogeneous cluster, I am at a loss about how to proceed. Any concrete or abstract ideas are much appreciated.

Best Regards, Rahul
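For reference, the YARN/MapReduce sizing heuristic from the HDP manual-install documentation can be sketched roughly as below. This is a simplified, illustrative sketch: the reserve and minimum-container constants are rule-of-thumb assumptions for small nodes, and the disk-count term from the full formula is omitted for brevity, so treat the output as a starting point rather than a recommendation.

```python
# Simplified sketch of the HDP memory-sizing heuristic for one worker node.
# Constants below are assumptions typical for small (~8 GB) nodes.

def yarn_settings(total_gb, cores, hbase=True):
    # Memory reserved for the OS, plus the HBase RegionServer if co-located.
    reserved_gb = 2 + (1 if hbase else 0)
    available_gb = total_gb - reserved_gb
    min_container_gb = 0.5  # common minimum container size for nodes < 8 GB

    # Number of containers: bounded by cores and by available memory.
    containers = max(1, int(min(2 * cores, available_gb / min_container_gb)))
    ram_per_container_gb = max(min_container_gb, available_gb / containers)

    return {
        "yarn.nodemanager.resource.memory-mb": int(available_gb * 1024),
        "yarn.scheduler.minimum-allocation-mb": int(ram_per_container_gb * 1024),
        "mapreduce.map.memory.mb": int(ram_per_container_gb * 1024),
        "mapreduce.reduce.memory.mb": int(2 * ram_per_container_gb * 1024),
    }

# Example: one of the 8 GB / 6-core worker nodes described above, with HBase.
print(yarn_settings(total_gb=8, cores=6, hbase=True))
```

For the 8 GB nodes in this cluster, this yields 5 GB usable per NodeManager split into 512 MB minimum containers, which matches the small-node end of the HDP guidance.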
06-13-2016
05:03 AM
Many thanks! That works like a charm. Perhaps the documentation could be amended to mention that the proxy options need to be added to AMBARI_JVM_ARGS. BR, Rahul
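For anyone hitting the same issue, the fix referred to above can be sketched as follows. The proxy host and port are placeholders for your environment; the point is that the `-Dhttp.proxy*` options must be appended to the `AMBARI_JVM_ARGS` variable rather than written as a standalone line:

```shell
# Sketch of the edit to ambari-env.sh (commonly /var/lib/ambari-server/ambari-env.sh).
# myproxy.example.com and 8080 are placeholder values; substitute your own proxy.
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=myproxy.example.com -Dhttp.proxyPort=8080"
```

After saving the file, apply the change with `ambari-server restart`.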
06-10-2016
12:38 PM
Ideally I would expect them to be part of some environment variable, but which one?
06-10-2016
12:34 PM
I am uploading the content as an image with username and password blurred.
06-10-2016
12:15 PM
Dear folks,

I am currently trying to set up HDP 2.4, but I am having issues after launching the install wizard. I started with the documentation provided here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/ch_Getting_Ready.html

I had not faced any issues until now. I am currently in section 3, which covers installing, configuring, and deploying an HDP cluster. I opened a browser on my desktop, navigated to the Ambari host machine, and logged into Ambari. Then I launched the install wizard and set my cluster name. Next, I selected the HDP 2.4 stack and enabled RHEL6 in the repository options for my CentOS 6.5 machines.

I observe that when I wget the URLs mentioned for HDP 2.4 and HDP-UTILS they work fine, but the wizard reports an error, so I am not able to proceed. I am working behind a proxy and thought that could be the issue, so I stumbled upon this link: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_ambari_reference_guide/content/_how_to_set_up_an_internet_proxy_server_for_ambari.html which describes how to provide proxy information in the ambari-env.sh file. I did that, but during ambari-server restart it throws the error that -Dhttp.proxyHost=<myproxyaddress> is a command not found.

Any help in this regard is appreciated.

Regards, Rahul
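Since the browser check succeeded but the wizard's repo validation failed, it can help to verify the Base URLs from the shell of the Ambari server host itself, both with and without an explicit proxy, to see what the server-side process actually experiences. A sketch (the repo URL and proxy below are placeholders; use the exact Base URL shown in the wizard and your own proxy address):

```shell
# Placeholder repo URL: copy the exact Base URL from the Ambari wizard instead.
REPO_URL="http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.4.0.0/hdp.repo"

# Direct check, using whatever proxy variables the current shell already has:
wget --spider "$REPO_URL"

# Explicit proxy check, mirroring what a server-side process behind the proxy must do:
http_proxy="http://myproxy.example.com:8080" wget --spider "$REPO_URL"
```

If the first succeeds and the second fails (or vice versa), that points to the proxy configuration rather than the repository itself.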