Created 09-08-2017 09:32 AM
I need help with HBase high availability on Apache Hadoop. I have set up HA successfully for HDFS, but I am unable to do the same for HBase.
I also need help with hbase-site.xml.
Kindly provide me the steps to set it up.
My current configuration files are below.
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ha-cluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/HA/data/jn</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop/HA/data/namenode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>ha-cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ha-cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ha-cluster.nn1</name>
    <value>nn1.cluster.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ha-cluster.nn2</name>
    <value>nn2.cluster.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ha-cluster.nn1</name>
    <value>nn1.cluster.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ha-cluster.nn2</name>
    <value>nn2.cluster.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://nn1.cluster.com:8485;nn2.cluster.com:8485;dn1.cluster.com:8485/ha-cluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ha-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>nn1.cluster.com:2181,nn2.cluster.com:2181,dn1.cluster.com:2181</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
</configuration>
hbase-site.xml (not sure if it is correct; I am unable to reach the HBase UI at the moment)
<configuration>
  <property>
    <name>hbase.master</name>
    <value>test-hmaster-1-aws.icare.com:60000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://ha-cluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>test-hmaster-1-aws.icare.com,test-hslave-1-aws.icare.com,test-kerberos-aws.icare.com</value>
  </property>
</configuration>
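One thing I am unsure about: since hbase.rootdir points at the logical nameservice (hdfs://ha-cluster/hbase), I believe HBase also needs the Hadoop client configs on its classpath so it can resolve ha-cluster. This is the usual approach as far as I know; the HADOOP_HOME and HBASE_HOME paths below are only assumed install locations, adjust as needed:

# Make the HA nameservice visible to HBase by linking the Hadoop
# client configs into HBase's conf directory
ln -s $HADOOP_HOME/etc/hadoop/core-site.xml $HBASE_HOME/conf/core-site.xml
ln -s $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml

# Alternative: point HBase at the Hadoop conf dir in hbase-env.sh
# export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop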
Created 09-08-2017 10:42 PM
You can refer to the following guide to set up HA using Ambari.
Created 09-11-2017 04:57 AM
I have to set up HA for HBase on plain Apache Hadoop, not using HDP and not using Ambari.
Created 09-11-2017 06:13 AM
Unfortunately, you have gone down the wrong road. Troubleshooting such an unorthodox implementation becomes a nightmare; it is advisable to use a standard management tool like Ambari when implementing HDP clusters.
That way, folks here could easily guide you, locate the config files, and help out.
Created 09-11-2017 10:53 AM
I understand @Geoffrey Shelton Okot, but it is the client's requirement and I have to work accordingly. I have already proposed Hortonworks, but implementing it will take time; for now I have to enable HA for HBase in the current Apache Hadoop environment.
Created 11-01-2017 12:30 PM
The issue has been resolved: we had to enable a backup HMaster for HBase.
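For anyone who hits this later: on plain Apache HBase (no Ambari), a backup HMaster is typically enabled with a conf/backup-masters file. A minimal sketch, assuming test-hslave-1-aws.icare.com is the host chosen to run the standby master:

# $HBASE_HOME/conf/backup-masters -- one backup master hostname per line
echo "test-hslave-1-aws.icare.com" > $HBASE_HOME/conf/backup-masters

# Restart so start-hbase.sh also launches the backup master
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh

# Or start one by hand on that host
$HBASE_HOME/bin/hbase-daemon.sh start master --backup

If the active master goes down, the backup wins the ZooKeeper election and takes over, and the HBase master UI becomes available on the new active master.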