Member since: 02-03-2016
Posts: 20
Kudos Received: 18
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2673 | 05-10-2016 10:46 AM |
03-20-2016 10:30 PM
Yep, that could work. Putting it in HBase would also let you keep multiple versions of the record. Good luck, and feel free to share more.
03-09-2016 09:29 PM
3 Kudos
In HBase, you do not have to pre-declare the set of columns as you would in an RDBMS. Each row can have a different set of columns, which is one of HBase's most powerful features. Phoenix exposes this through a feature called "dynamic columns": you declare a fixed set of columns in the Phoenix table schema, and at query or upsert time you can reference additional columns on the fly. Check out https://phoenix.apache.org/dynamic_columns.html for the syntax.
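If it helps, here is a minimal sketch of what that looks like through the Phoenix JDBC driver. The EVENT_LOG table, its columns, and the ZooKeeper quorum are placeholders I made up for illustration; PAYLOAD_SIZE exists only as a dynamic column declared on the fly:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DynamicColumnsSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder quorum; point this at your cluster's ZooKeeper ensemble.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {

            // Upsert a row, declaring PAYLOAD_SIZE (and its type) on the fly.
            try (PreparedStatement upsert = conn.prepareStatement(
                    "UPSERT INTO EVENT_LOG (EVENT_ID, EVENT_TYPE, PAYLOAD_SIZE BIGINT) "
                  + "VALUES (?, ?, ?)")) {
                upsert.setString(1, "e1");
                upsert.setString(2, "click");
                upsert.setLong(3, 42L);
                upsert.executeUpdate();
            }
            conn.commit(); // Phoenix connections are not auto-commit by default

            // Read it back: dynamic columns are declared after the table name.
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT EVENT_ID, PAYLOAD_SIZE "
                  + "FROM EVENT_LOG (PAYLOAD_SIZE BIGINT) WHERE EVENT_TYPE = ?")) {
                select.setString(1, "click");
                try (ResultSet rs = select.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
                    }
                }
            }
        }
    }
}
```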
02-05-2016 11:59 PM
1 Kudo
@S Roy Move those kinds of records into a separate column family with its own TTL, or set the TTL explicitly per cell.
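For reference, here is a rough sketch of both options with the HBase 1.x Java client; the table name, family names, and TTL values are made-up placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TtlSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            TableName table = TableName.valueOf("events"); // placeholder table

            // Option 1: a separate column family whose cells expire after 7 days.
            HColumnDescriptor ephemeral = new HColumnDescriptor("ephemeral");
            ephemeral.setTimeToLive(7 * 24 * 3600);        // family TTL is in seconds
            try (Admin admin = conn.getAdmin()) {
                admin.addColumn(table, ephemeral);
            }

            // Option 2: a TTL on an individual cell, set at write time.
            try (Table t = conn.getTable(table)) {
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("q"), Bytes.toBytes("v"));
                put.setTTL(24L * 3600 * 1000);             // cell TTL is in milliseconds
                t.put(put);
            }
        }
    }
}
```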
02-03-2016 11:15 PM
2 Kudos
Hi @S Roy, using HDFS mounted via NFS would be a bad idea: an HDFS service writing its own logs to HDFS could deadlock on itself. As @Neeraj Sabharwal suggested, a local disk is best so that the logging store does not become a performance bottleneck. You can change the log4j settings to limit the size and number of the log files, thus capping the total space logs consume. You can also write a separate daemon to periodically copy log files to HDFS for long-term archival.
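For example, the stock HBase/Hadoop log4j.properties already ships a RollingFileAppender (usually named RFA); something along these lines caps a daemon at roughly ten 256 MB files. Treat the appender name and values as a sketch to adapt, since they vary by distribution and version:

```properties
# Roll the daemon log at 256 MB and keep at most 10 backups (~2.5 GB per daemon)
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```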
02-04-2016 12:09 AM
@S Roy I do have a deployment where we set up HBase DR using Kafka, as suggested above. I was under the impression that you were more focused on cluster HA rather than HBase alone. Apache Falcon is one of my favorites, but it is more active-passive.
02-04-2016 08:02 PM
You should aim for regions of at most 10-20 GB; anything larger causes compactions to increase write amplification. Also keep it to fewer than 1,000 regions per server. I suggest looking at your data model and making the adjustments needed to ensure you are not falling into the incremental-writes anti-pattern. See https://hbase.apache.org/book.html#table_schema_rules_of_thumb.
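Concretely, the split threshold that controls how large a region can grow is hbase.hregion.max.filesize; here is a sketch of the hbase-site.xml entry for roughly 10 GB regions (tune the value to your own workload):

```xml
<!-- hbase-site.xml: let a region grow to ~10 GB before it splits -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value> <!-- 10 GB, in bytes -->
</property>
```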
02-03-2016 08:16 PM
Otherwise, see this post: http://hortonworks.com/blog/apache-hbase-high-availability-next-level/