Member since: 06-09-2016
Posts: 185
Kudos Received: 22
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2164 | 04-21-2017 07:57 AM |
| | 1349 | 04-18-2017 07:07 AM |
| | 3194 | 02-27-2017 05:41 AM |
| | 891 | 12-09-2016 11:05 AM |
| | 1257 | 11-24-2016 11:20 AM |
12-12-2016
06:23 PM
@Avijeet Dash - Terry made all good points. Note that using SolrCloud does not require HDFS: SolrCloud can also use local storage, and doing so is not uncommon; people sometimes assume otherwise when this isn't pointed out. The optimal choice between HDFS and local storage depends on the use case, but local storage is usually preferred if your index takes a high rate of updates/adds. Either way, SolrCloud automatically replicates your data and is fault tolerant, and it keeps the advantages Terry mentioned.
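To make the distinction concrete, here is a minimal sketch (the ZooKeeper host, namenode address, and HDFS path are hypothetical): started normally, SolrCloud keeps its indexes on local disk; storing them on HDFS has to be opted into explicitly via the HdfsDirectoryFactory system properties.

# SolrCloud on local storage (the default); ZooKeeper host is hypothetical
bin/solr start -cloud -z zk1:2181

# SolrCloud on HDFS must be enabled explicitly; namenode address and path are hypothetical
bin/solr start -cloud -z zk1:2181 \
  -Dsolr.directoryFactory=HdfsDirectoryFactory \
  -Dsolr.lock.type=hdfs \
  -Dsolr.hdfs.home=hdfs://namenode:8020/solr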
12-13-2016
05:43 PM
2 Kudos
HBase column families have a time-to-live (TTL) property which, by default, is set to FOREVER. If you wanted to delete the HBase cell values a week after being inserted, you could set the TTL to 604800 (which is the number of seconds in a week: 60 * 60 * 24 * 7).
Here's an example:
Create a table where the column family has a TTL of 10 seconds:

hbase(main):001:0> create 'test', {'NAME' => 'cf1', 'TTL' => 10}
0 row(s) in 2.5940 seconds

Put a record into that table:

hbase(main):002:0> put 'test', 'my-row-key', 'cf1:my-col', 'my-value'
0 row(s) in 0.1420 seconds

If we scan the table right away, we can see the record:

hbase(main):003:0> scan 'test'
ROW           COLUMN+CELL
 my-row-key   column=cf1:my-col, timestamp=1481650256841, value=my-value
1 row(s) in 0.0260 seconds

10 seconds later, the record has disappeared:

hbase(main):004:0> scan 'test'
ROW           COLUMN+CELL
0 row(s) in 0.0130 seconds

So, perhaps you could use TTL to manage your data retention.
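For a table that already exists, the TTL can be changed with alter; this sketch reuses the table and column family from the example above and the week-long TTL of 604800 seconds mentioned earlier:

hbase(main):005:0> alter 'test', {'NAME' => 'cf1', 'TTL' => 604800}

Keep in mind that TTL expiry is based on each cell's timestamp, so a cell is removed 604800 seconds after the timestamp it was written with.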
12-12-2016
04:46 PM
You can use the HBase Lily Indexer, which will automatically index data from HBase into Solr. You can also use Apache NiFi: ingest once and fork the data to Solr, HBase, HDFS, and Hive. It's a highly flexible implementation model.
12-12-2016
01:13 PM
Yes. (They are installed under the covers when installing the Atlas service.)
12-12-2016
04:55 AM
Hi @Artem Ervits, do you know what the backend for Falcon is? There was talk of having a common store for lineage information; currently Atlas uses HBase. Thanks.
12-09-2016
10:02 AM
1 Kudo
Atlas captures Hive and Falcon metadata once Hive and Falcon are configured with the Atlas hook. For Hive, all entity operations are captured by Atlas (create, drop, alter, etc.). For Falcon, Atlas currently captures only the submission of Falcon entities; when you delete a Falcon entity, the Falcon hook doesn't send any notification to Atlas. For more technical details, please refer to http://atlas.incubator.apache.org/Bridge-Falcon.html
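As a minimal sketch of the Hive side (assuming the Atlas Hive hook, org.apache.atlas.hive.hook.HiveHook, is registered in hive.exec.post.hooks, and using a hypothetical JDBC URL and table name), each of these DDL statements would be reported to Atlas:

# Hypothetical connection string; the hook fires after each DDL operation
beeline -u jdbc:hive2://localhost:10000 -e "CREATE TABLE atlas_demo (id INT)"
beeline -u jdbc:hive2://localhost:10000 -e "ALTER TABLE atlas_demo ADD COLUMNS (name STRING)"
beeline -u jdbc:hive2://localhost:10000 -e "DROP TABLE atlas_demo"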
12-09-2016
01:42 PM
The latest tutorial is here:
http://hortonworks.com/apache/falcon/#tutorials
It does not yet reflect the new capabilities of Falcon 0.10 in HDP 2.5 (that tutorial update is being worked on). Falcon uses Hive/HDFS as its backend store; for the configuration details, see:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-movement-and-integration/content/ch_hdp_data_mgmt_falcon_overview.html
http://falcon.apache.org/FalconDocumentation.html#
12-09-2016
11:05 AM
I restarted the Falcon server, corrected the configuration, and made the falcon user the owner of the /working and /staging directories; after that it worked fine.
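For anyone hitting the same issue, a sketch of the ownership fix (the /apps/falcon/* paths and the hadoop group are hypothetical; use the staging and working locations defined in your Falcon cluster entity):

# Run as a user with HDFS superuser rights; paths and group are hypothetical
su - hdfs -c "hdfs dfs -chown -R falcon:hadoop /apps/falcon/staging /apps/falcon/working"
# Falcon's docs call for a wide-open staging directory and a more restrictive working directory
su - hdfs -c "hdfs dfs -chmod 777 /apps/falcon/staging"
su - hdfs -c "hdfs dfs -chmod 755 /apps/falcon/working"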