Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3171 | 12-25-2018 10:42 PM |
| | 14192 | 10-09-2018 03:52 AM |
| | 4763 | 02-23-2018 11:46 PM |
| | 2481 | 09-02-2017 01:49 AM |
| | 2910 | 06-21-2017 12:06 AM |
04-13-2016
01:20 AM
Hi @Ram Veer, great news! Please consider accepting/upvoting my answer above. Thanks!
04-12-2016
10:24 PM
3 Kudos
@jbarnett, (1) Yes, putting Flume on dedicated nodes is definitely the way to go. Both your Flume apps and your DataNodes will benefit from it, and you can scale Flume independently of the rest of the cluster. (2) Again, yes, there is a downside regarding HDFS locality, but it's a small one compared to the gains from (1), and it only concerns HDFS sinks. Once you start using, for example, Kafka, you will have Kafka sinks and no concerns of that kind.
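For illustration, a minimal sketch of a dedicated-node Flume agent using a Kafka sink instead of an HDFS sink (the agent name, spool directory, topic, and broker list are all hypothetical; property names follow the Flume 1.7+ Kafka sink, while older releases used brokerList/topic instead):

```
# Hypothetical Flume agent "a1" on a dedicated node: spool directory -> Kafka
a1.sources  = src1
a1.channels = ch1
a1.sinks    = k1

# Source: watch a local spool directory (path is an assumption)
a1.sources.src1.type     = spooldir
a1.sources.src1.spoolDir = /var/flume/incoming
a1.sources.src1.channels = ch1

# Channel: in-memory buffer between source and sink
a1.channels.ch1.type     = memory
a1.channels.ch1.capacity = 10000

# Sink: Kafka instead of HDFS, so DataNode locality is no longer a concern
a1.sinks.k1.type                    = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic             = events
a1.sinks.k1.kafka.bootstrap.servers = kafka1:9092,kafka2:9092
a1.sinks.k1.channel                 = ch1
```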
04-12-2016
02:11 PM
Hi @Alex Raj, were the answers helpful? If so, please consider accepting one and/or upvoting them. Thanks!
04-12-2016
01:48 PM
1 Kudo
Are all of your DataNodes healthy, with enough available disk space? For some reason, writing a block to one of them fails, and because your replication factor is 2 and replace-datanode-on-failure.policy=DEFAULT, the NN will not try another DN and the write fails. So, first make sure your DNs are all right. If they look good, then try to set dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS and dfs.client.block.write.replace-datanode-on-failure.best-effort=true. The second one works only in newer versions of Hadoop (HDP-2.2.6 or later). See this and this for details.
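For reference, a minimal sketch of how those two properties might look in hdfs-site.xml (they are client-side settings; in an Ambari-managed cluster you would add them under the HDFS custom configs and restart the affected services):

```
<!-- hdfs-site.xml: write-pipeline recovery settings (sketch) -->
<property>
  <!-- Always try to replace a failed DataNode in the write pipeline -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>ALWAYS</value>
</property>
<property>
  <!-- If no replacement DataNode is available, continue the write with the
       remaining DataNodes instead of failing (HDP-2.2.6 or later) -->
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```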
04-12-2016
11:16 AM
1 Kudo
Hi @nyakkanti, see this for Ambari REST API calls to automate ldap-sync and run it using cron.
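For illustration, a sketch of the kind of call involved, assuming Ambari runs on ambari-host:8080 with admin/admin credentials (host, credentials, and sync scope are placeholders to adjust):

```
# Trigger an LDAP sync of existing users and groups via the Ambari REST API
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d '[{"Event": {"specs": [
        {"principal_type": "users",  "sync_type": "existing"},
        {"principal_type": "groups", "sync_type": "existing"}]}}]' \
  http://ambari-host:8080/api/v1/ldap_sync_events

# Then schedule it, e.g. nightly at 2:00 via crontab -e, assuming the curl
# command above is saved as /usr/local/bin/ambari-ldap-sync.sh:
# 0 2 * * * /usr/local/bin/ambari-ldap-sync.sh >> /var/log/ambari-ldap-sync.log 2>&1
```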
04-12-2016
10:08 AM
1 Kudo
Hi @Divya Gehlot, go to HBase -> Quick Links -> HBase Master UI, then select Table Details at the top, and locate and click on your table. It will show you the table's regions, their server layout, and the number of requests per region. You can then consider splitting overly busy regions and moving some regions to other nodes for better load balancing. Refer to this for split/move, and to this for a good backgrounder. Since you have only 3 nodes, the results might be limited. Regarding other properties, if you can afford it, be sure to have enough RAM for the RegionServers, no less than 16 GB.
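The split and move themselves can be issued from the HBase shell; a minimal sketch, where the table name, split key, and region/server identifiers are placeholders you would read off the Master UI:

```
# hbase shell
# Split a hot region of table 'mytable' at a chosen row key
split 'mytable', 'rowkey-to-split-at'

# Move a region (by its encoded name) to a specific RegionServer
# (server name format: host,port,startcode)
move 'ENCODED_REGION_NAME', 'rs-host.example.com,16020,1460000000000'
```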
04-12-2016
06:10 AM
Okay, great! Though I'm surprised by your choice of best answer... Anyway, enjoy Zeppelin.
04-12-2016
05:52 AM
1 Kudo
Okay, on a machine with an Internet connection and git installed, do this:

mkdir ZEPPELIN
git clone https://github.com/hortonworks-gallery/ambari-zeppelin-service.git ZEPPELIN

Then upload the ZEPPELIN folder (it's about 14 MB; you can transfer it zipped) under /var/lib/ambari-server/resources/stacks/HDP/2.4/services on your Ambari server node, make sure the owner of all files in ZEPPELIN is root:root, and restart ambari-server. A sketch of the transfer and restart follows below.
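A minimal sketch of that transfer and restart, assuming SSH access as root and ambari.example.com as your Ambari server host:

```
# On the machine with Internet access: pack and ship the service definition
zip -r ZEPPELIN.zip ZEPPELIN
scp ZEPPELIN.zip root@ambari.example.com:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/

# On the Ambari server node: unpack, fix ownership, restart Ambari
cd /var/lib/ambari-server/resources/stacks/HDP/2.4/services
unzip ZEPPELIN.zip && rm ZEPPELIN.zip
chown -R root:root ZEPPELIN
ambari-server restart
```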
04-12-2016
02:38 AM
1 Kudo
Hi @Anandha L Ranganathan, the Zeppelin service is not shipped with Ambari. You can add it by following the steps here. Note that it's an experimental feature, not recommended for production environments. Edit: Check also this.
04-12-2016
01:45 AM
2 Kudos
Have you followed the Ambari Upgrade guide, and could you complete all steps without errors? Ambari upgrade is easy, provided that you started with a healthy cluster. That's why the very first step of every Ambari upgrade is to inspect the cluster, make sure that all components are running, and clear all alerts and warnings. After that, back up all cluster-supporting databases, and follow these steps:

1. Stop the Ambari server and the Ambari agents on all hosts.
2. Prepare and distribute a new version of ambari.repo to all cluster nodes.
3. Upgrade the Ambari server (on CentOS/RHEL run "yum upgrade ambari-server") and confirm it was successful; see Step 6 in the above link.
4. Upgrade the Ambari agents on all nodes ("yum upgrade ambari-agent") and confirm it was successful; see Step 8 in the above link.
5. Upgrade the Ambari database: run "ambari-server upgrade".
6. Run "ambari-server start", and "ambari-agent start" on all nodes in the cluster.

All of the above steps should have completed without errors. Was that so? What you can do now:

- Repeat Steps 6 and 8 from the guide to make sure you have the correct versions of the Ambari server and agents.
- Make sure the Ambari server is running: "ambari-server status".
- Make sure all Ambari agents are running: run "ambari-agent status" on all nodes.
- Open the Ambari web UI; if you still have heartbeat-lost question marks, then something is wrong with the Ambari agents and/or server. Inspect the logs for errors: /var/log/ambari-server and /var/log/ambari-agent.
- Once Ambari is running you can begin starting services: start them one by one, ZooKeeper first, then HDFS, then all the others. Inspect the Ambari tasks, and drill down for errors if they fail (red markings).

A consolidated sketch of these commands follows below. Hope this helps.
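For convenience, a consolidated sketch of the commands above for CentOS/RHEL, run as root (the agent commands go on every node):

```
# On the Ambari server node
ambari-server stop
yum upgrade ambari-server     # upgrade the server package (Step 6)
ambari-server upgrade         # upgrade the Ambari database schema
ambari-server start
ambari-server status          # verify the server is running

# On every cluster node
ambari-agent stop
yum upgrade ambari-agent      # upgrade the agent package (Step 8)
ambari-agent start
ambari-agent status           # verify the agent is running
```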