Member since 06-14-2016 · 38 Posts · 4 Kudos Received · 0 Solutions
12-19-2016 09:20 AM
I can't see that property explicitly defined as "False" in hbase-site.xml, so I suppose it is True. We are using HDP 2.3.2.0-2950, which includes HBase 1.1.2, so I think it is defined as True by default. Is it?
12-16-2016 11:01 AM
Hi, I would like to know whether these commands disable the table before activating/deactivating replication on its column families, so that services using the table would get stuck. I know that to activate replication manually I have to first disable the table, set the column family's REPLICATION_SCOPE to "1" and then enable it again, so I would like to know if these commands do that automatically... I suppose so, but I want to be sure so I can calculate the impact on services using the table. Regards, Silvio
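For reference, this is the manual sequence I mean, as I would run it from the HBase shell (the table and column family names are just placeholders for my case):

# stop serving the table while the schema change is made
disable 'my_table'
# mark the column family for replication
alter 'my_table', {NAME => 'cf1', REPLICATION_SCOPE => '1'}
# bring the table back online
enable 'my_table'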
Labels: Apache HBase
06-21-2016 07:51 AM
Ok, the issue is resolved. I had to explicitly add the custom property "hbase.replication=false" to hbase-site.xml (although we have no replication at all and no peers configured) and restart the HBase masters. After this, about 50 TB of data in the oldWALs folder were deleted automatically in about 10 minutes 🙂 Thank you very much to all of you, you helped me a lot 🙂
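In case it helps anyone else, the custom property added through Ambari ends up in hbase-site.xml roughly like this (a sketch of our change, and only safe because we have no peers configured):

<property>
  <name>hbase.replication</name>
  <value>false</value>
</property>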
06-20-2016 04:31 PM
Yes, I know. I was thinking about "config groups" in Ambari. Using those, maybe I could use an independent HDFS filesystem for the backup and the same ZooKeepers for replication... maybe too complex... Yes, maybe a whole new cluster would be the best solution... I think I'll do that. Thank you very much for your support
06-20-2016 03:23 PM
Ok, thank you very much for your answers. Looking at http://blog.cloudera.com/blog/2013/11/approaches-to-backup-and-disaster-recovery-in-hbase/, I think HBase replication is my solution: almost no impact, incremental backups... On the other hand, we are currently creating snapshots of the tables daily. I am creating a new cluster for this; I was thinking about a 2-node cluster, one Master node with all master roles (HBase Master, ZooKeeper...) and one DataNode with enough storage for the backup data. My question is: should it be totally independent, with all roles installed, or can I connect it to my main cluster under the Ambari umbrella? I need it only for backup; I'm not going to use it for production if something happens to my main production cluster. Regards, Silvio
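If I go the replication route, I understand the backup cluster would be registered as a peer from the production cluster's HBase shell, something like this (the ZooKeeper hosts and znode below are made-up placeholders for my setup):

# register the backup cluster's ZooKeeper quorum as replication peer '1'
add_peer '1', "backup-zk1,backup-zk2,backup-zk3:2181:/hbase-unsecure"
# confirm the peer was created
list_peers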
06-20-2016 02:23 PM
Ok, so if we only need to back up HBase tables, we could use HBase replication and it would not be necessary to use distcp2, right? I suppose HBase replication copies the underlying HDFS data too
06-20-2016 02:16 PM
Well, "archive" folder under /apps/hbase/data" remains "under control" and doesn't grow. My problem is "oldWALs" under same path. I don't have any kind of replication
06-20-2016 01:56 PM
Hi, this is the output you requested:
[zk: localhost:2181(CONNECTED) 4] ls /hbase/replication
[peers, rs]
[zk: localhost:2181(CONNECTED) 5] ls /hbase/replication/peers
[]
No replication to other peers
06-20-2016 12:59 PM
Hi Vijaya, thanks for your answer. But if HBase data is in HDFS (I can see the HBase tables as folders in the HDFS structure), and I replicate HDFS, am I not replicating the HBase data too? I think I still have much to read and learn... On the other hand, does distcp2 imply a performance impact? Regards, Silvio
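Just to make sure we are talking about the same thing, with distcp2 I was picturing an HDFS-level copy of the HBase directories, roughly like this (the NameNode addresses are placeholders):

# copy the HBase root directory to the backup cluster at the HDFS level
hadoop distcp -update hdfs://prod-nn:8020/apps/hbase/data hdfs://backup-nn:8020/apps/hbase/data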