Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2795 | 10-18-2017 10:19 PM |
| | 3152 | 10-18-2017 09:51 PM |
| | 12348 | 09-21-2017 01:35 PM |
| | 1046 | 08-04-2017 02:00 PM |
| | 1346 | 07-31-2017 03:02 PM |
11-25-2016
06:30 PM
Awesome. Thanks for sharing.
11-21-2016
06:22 AM
If you are able to ssh without a password, your /etc/hosts file shouldn't matter. I am assuming name resolution is working because, as you said, you are able to ssh without a password.
11-21-2016
06:16 AM
@Sridhar M This is the last thing I can suggest, and I don't think it should make a difference if your ssh works from a PuTTY/ssh terminal. Change the permissions on "id_rsa" to 400; it is currently set to 600, which should be okay. The other thing is that the host names you provide in Ambari should be the same as the ones in your known_hosts file. Otherwise, delete the known_hosts file on the Ambari node and connect to all the other nodes using the exact same names you are using in Ambari. ssh will ask you to confirm that you trust each host and will offer to add it to the known_hosts file. Say yes, and then try again (see the sketch below). If it still doesn't work, please share the Ambari screen where you specify the host names, ssh key, and user name (root).
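A minimal sketch of those steps, run on the Ambari node as root, assuming the default key location; node1/node2 are hypothetical host names, so substitute the exact names you entered in Ambari:

```bash
# Tighten the private key permissions (600 is usually fine; 400 rules out writes entirely)
chmod 400 /root/.ssh/id_rsa

# Start over with a clean known_hosts file on the Ambari node
rm -f /root/.ssh/known_hosts

# Reconnect with the exact host names given to Ambari so each host is
# re-added to known_hosts; answer "yes" at the trust prompt
ssh root@node1.example.com 'hostname'
ssh root@node2.example.com 'hostname'
```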
11-21-2016
05:21 AM
Here is the thing: you are able to log in without a password from outside using the "root" user, right? Then it should not fail with this error. Did you download the id_rsa file from your machine and import it into Ambari, or did you copy and paste the content? If you copied and pasted, make sure you include everything in the file, including the "-----BEGIN RSA PRIVATE KEY-----" and "-----END RSA PRIVATE KEY-----" lines.
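A quick way to check that you are pasting a complete private key, assuming the default key path (traditional OpenSSH RSA keys of that era use this PEM header and footer):

```bash
# First and last lines of the private key file; paste everything between
# (and including) these two lines into Ambari
head -1 /root/.ssh/id_rsa   # should print: -----BEGIN RSA PRIVATE KEY-----
tail -1 /root/.ssh/id_rsa   # should print: -----END RSA PRIVATE KEY-----
```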
11-21-2016
05:12 AM
@Sridhar M Are you saying you are providing /root/.ssh/id_rsa.pub to Ambari? If so, that's your problem. You need to provide your private key, not your public key: give Ambari /root/.ssh/id_rsa. Notice that it asks you for the private key, not the public key.
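To see which file is which, a sketch assuming the default OpenSSH layout:

```bash
ls -l /root/.ssh/
# id_rsa      <- private key: this is the file Ambari asks for
# id_rsa.pub  <- public key: this belongs in authorized_keys on the target hosts
```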
11-21-2016
05:05 AM
@Sridhar M Are you sure ssh works on all nodes without a password? That is, are you able to ssh as root from the node where Ambari is running to all the other nodes without a password? Can you please confirm? Finally, when you provide the ssh key for the root user in Ambari, you need to provide your private key, not your public key. Is that how you are doing it?
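One way to confirm passwordless ssh from the Ambari node to every host; the host list is hypothetical, so adjust the names. BatchMode makes ssh fail immediately instead of prompting for a password:

```bash
# Each line should print the remote hostname with no password prompt
for host in node1.example.com node2.example.com node3.example.com; do
  ssh -o BatchMode=yes root@"$host" hostname || echo "FAILED: $host"
done
```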
11-19-2016
06:33 AM
1 Kudo
@Jan K A couple of things here (make sure you read my last paragraph).

1. The purpose of the secondary namenode is to keep the edit log from growing too big by merging it with the fsimage periodically. It merges the fsimage with the edit log and keeps a checkpoint of the last merge (every hour or every 1 million transactions by default). This also makes namenode restarts faster, because the namenode starts from the fsimage.
2. It can reasonably be assumed that you are not in a situation where the namenode has been up for the last 10 months while the secondary namenode was dead, which would mean the fsimage and edit log are far apart.
3. When you start your secondary namenode, it will merge the edit log into the fsimage, and after that everything should be normal.

Now, what I don't understand is why you need a secondary namenode at all. The secondary namenode is from the days when Hadoop didn't have a standby namenode for failover (the namenode used to be a single point of failure). What you should do is run a standby namenode along with at least three journal nodes to sync data between the namenodes. That should be it; you don't need a secondary namenode. A secondary namenode means you still have a single point of failure, while a standby namenode means no single point of failure. It also means you are highly unlikely to lose metadata in the event of a disk failure, because the journal nodes are spread across three machines. Maybe you already have a standby namenode and journal nodes syncing the active and standby namenodes, and that's probably why nobody cared about the secondary namenode. You can verify the checkpoint settings and the HA setup as shown below.
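A sketch for checking the effective configuration with `hdfs getconf -confKey`, which prints the value the cluster is actually using:

```bash
# Checkpoint interval: 3600 seconds (1 hour) by default
hdfs getconf -confKey dfs.namenode.checkpoint.period

# Checkpoint transaction threshold: 1000000 by default
hdfs getconf -confKey dfs.namenode.checkpoint.txns

# Prints the nameservice ID when HA (standby namenode) is configured,
# and errors out if the key is unset; HA makes the secondary namenode unnecessary
hdfs getconf -confKey dfs.nameservices
```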
11-06-2016
02:03 PM
@Anindya Chattopadhyay It could be a connectivity issue from your VM. Did you go over the following? http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/ To make your tutorial work and go to the next step, just copy and paste the JSON using the "choose a JSON here" button instead.
11-05-2016
02:27 AM
1 Kudo
@Simran Kaur Please try hadoop dfsadmin -refreshNodes (usage sketch below). But I believe I have seen another question from you asking whether you can add a node in a different data center. If this is the same cluster, that is not supported, and I highly recommend that you don't spend too much time trying to make it work.
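Usage sketch, run as the HDFS superuser; `hadoop dfsadmin` is the older spelling, deprecated in favor of `hdfs dfsadmin`:

```bash
# Re-read the dfs.hosts / dfs.hosts.exclude include and exclude files so the
# namenode picks up newly added or decommissioned datanodes
hdfs dfsadmin -refreshNodes
```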
11-05-2016
02:10 AM
You cannot do this; it is not supported, as @Artem Ervits has already stated. Imagine what would happen to writes when a cluster spans multiple data centers. Remember, networks are assumed to be unreliable and insecure. Now, I hate to say this, and please don't do it since it is unsupported, but Amazon offers VPC, which makes AWS an extension of your network over a VPN.