Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2567 | 11-01-2016 05:43 PM |
| | 8498 | 11-01-2016 05:36 PM |
| | 4859 | 07-01-2016 03:20 PM |
| | 8179 | 05-25-2016 11:36 AM |
| | 4334 | 05-24-2016 05:27 PM |
11-06-2015 11:33 AM
@jeff@hortonworks.com @mahadev@hortonworks.com @bganesan@hortonworks.com @bdurai@hortonworks.com @sneethiraj@hortonworks.com Please see this thread.
11-05-2015 11:58 PM
2 Kudos
@Scott Shaw This will make life easier. Gist link:

```
yum install expect*
```

```
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
```
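To run it, a minimal sketch (the script filename is hypothetical, not from the original post):

```
# Save the expect script above as sync-ldap.exp (hypothetical name), then:
chmod +x sync-ldap.exp
./sync-ldap.exp
```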
11-05-2015 08:41 PM
@hkropp Can we test the following answer?
11-05-2015 06:53 PM
Could you share the delete statements for Oracle? @Scott Shaw
11-05-2015 06:43 PM
@Paul Codding This is helpful. Thanks! I believe running a delete statement is not a good idea. Comments?
11-05-2015 06:21 PM
1 Kudo
@Scott Shaw I believe you need to clean it up from the Ambari database. For example:

```
[root@nsfed01 ~]# psql dbname username
Password for user ambari:
psql (8.4.20)
Type "help" for help.

ambari2112=> \dt
ambari2112=> select * from users where ldap_user=1;
 user_id | principal_id | ldap_user | user_name | create_time | user_password | active | active_widget_layouts
---------+--------------+-----------+-----------+-------------+---------------+--------+-----------------------
(0 rows)

ambari2112=> delete from users where ldap_user=1;
ambari2112=> select * from users;
```
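The same check and cleanup can also be run non-interactively; a sketch assuming the database and user are both named ambari (adjust to your environment):

```
# Database and user names here are assumptions; review the matching rows before deleting
psql -U ambari -d ambari -c "select user_name from users where ldap_user=1;"
psql -U ambari -d ambari -c "delete from users where ldap_user=1;"
```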
11-05-2015 05:22 PM
1 Kudo
@Steve Shilling You may want to look into this. It's not an easy task to estimate; I would say 1 month (just a random number I picked).
11-05-2015 03:36 PM
1 Kudo
@rmaruthiyodan@hortonworks.com Please review the following and check that your nodes are healthy and that connectivity between them is stable. "It is desirable for correctness of the system that only one NameNode be in the Active state at any given time. Importantly, when using the Quorum Journal Manager, only one NameNode will ever be allowed to write to the JournalNodes, so there is no potential for corrupting the file system metadata from a split-brain scenario. However, when a failover occurs, it is still possible that the previous Active NameNode could serve read requests to clients, which may be out of date until that NameNode shuts down when trying to write to the JournalNodes. For this reason, it is still desirable to configure some fencing methods even when using the Quorum Journal Manager."
Further reading can be found here: http://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Automatic_Failover
If the NameNode remains unresponsive for long enough, ZooKeeper notices and gives control to the HA backup NameNode. The backup NameNode increments the epoch count in the JournalNodes (as it should) and takes over control of HDFS. Eventually, the AD call returns, and the former NameNode wakes up, notices that the epoch count in the JournalNodes has inexplicably increased by one, and shuts itself down, as it should do in response to this condition. (It is designed to do this to avoid two NameNodes in a split-brain situation.)
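A quick way to check the health points above from the command line; a sketch assuming the NameNode service IDs are nn1 and nn2 (take the real IDs from dfs.ha.namenodes in hdfs-site.xml):

```
# nn1/nn2 are assumed service IDs; one should report "active", the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# Cluster-wide health report, including dead or unreachable DataNodes
hdfs dfsadmin -report
```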