- Member since: 09-23-2015
- Posts: 800
- Kudos Received: 898
- Solutions: 185
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5395 | 08-12-2016 01:02 PM |
| | 2200 | 08-08-2016 10:00 AM |
| | 2602 | 08-03-2016 04:44 PM |
| | 5495 | 08-03-2016 02:53 PM |
| | 1418 | 08-01-2016 02:38 PM |
02-04-2016
12:09 AM
@S Roy I do have a deployment where we set up HBase DR using Kafka as suggested above. I was under the impression that you were more focused on cluster HA rather than HBase only. Apache Falcon is one of my favorites, but it's more Active-Passive.
03-22-2017
10:05 AM
Happy to recommend Attunity Replicate for DB2. You need to deploy Attunity AIS onto the source server as well when dealing with mainframe systems, but the footprint was minimal (after the full load has happened, Replicate just reads the DB logs from that point on). I have used it with SQL Server as well (a piece of cake once we met the prerequisites on the source DB) and IMS (a lot more work due to the inherent complexities of a hierarchical DB, e.g. logical pairs and variants, but we got it all working once we'd uncovered all the design 'features' inherent to the IMS DBs we connected to). It can write to HDFS or connect to Kafka, but I never got a chance to try them (we just wrote CSV files to an edge node) due to project constraints, alas.
02-05-2016
12:15 AM
@Bidyut B unless you use Hue, there is no other UI to execute Oozie workflows. You need to use the shell to submit them.
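For reference, a minimal sketch of submitting a workflow from the shell with the Oozie CLI. The server URL and the `job.properties` file name are placeholders, not from the original post; adjust them for your cluster. The command is built as a string and echoed as a dry run:

```shell
# Hypothetical Oozie server URL; replace with your own (assumption).
OOZIE_URL="http://oozieserver:11000/oozie"

# Submit and start the workflow described by job.properties.
# Echoed here as a dry run; drop the echo/quotes to actually execute it.
CMD="oozie job -oozie $OOZIE_URL -config job.properties -run"
echo "$CMD"
```

On success the real command prints a job ID that you can then track with `oozie job -info <id>`.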
02-02-2016
02:35 PM
Most likely it was this. Sorry for not accepting for so long, but when I changed the <falconfolder>/staging/falcon folder manually it all worked and I forgot about it. Thanks a lot. https://issues.apache.org/jira/browse/FALCON-1647
01-28-2016
05:06 PM
And a Falcon restart fixed that. The retention cleanup now works. Thanks a lot.
07-18-2018
07:40 AM
Synchronize the Tez configurations on all nodes and restart HiveServer2; it should work fine.
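A minimal sketch of the sync step, assuming a `tez-site.xml` under `/etc/tez/conf` and passwordless SSH to each node. The node names and path are placeholders, not from the original post, and the copy is shown as a dry run:

```shell
# Hypothetical node list and config path (assumptions; adjust for your cluster).
NODES="node1 node2 node3"
TEZ_CONF="/etc/tez/conf/tez-site.xml"

# Push the same tez-site.xml to every node so the configs stay identical.
# echo makes this a dry run; remove it to actually copy.
for n in $NODES; do
  echo scp "$TEZ_CONF" "$n:$TEZ_CONF"
done
```

After the copy, restart HiveServer2 (for example through Ambari, if you manage the cluster with it) so it picks up the updated Tez settings.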
07-11-2016
09:50 AM
@Sunile Manjee yeah, it works; Alex tested it as well. Falcon does not point to one RM. It goes to yarn-site.xml and finds the RM pair that contains the one you specified, then tries both.
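For context, the ResourceManager HA entries Falcon reads from yarn-site.xml look roughly like this (the hostnames are placeholders, not from the original post):

```xml
<!-- Enable RM HA and declare the pair of ResourceManagers -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm-host-1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm-host-2.example.com</value>
</property>
```

Clients (Falcon included) can fail over between rm1 and rm2, which matches the "tries both" behavior described above.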
01-04-2016
11:40 AM
2 Kudos
If you have 4 nodes it will not be able to replicate 8 copies. It looks like some tmp files from Accumulo. While I do not know Accumulo too well, some small files like JARs normally have a high replication level so they are locally available on most nodes. You can check the filesystem with: `hadoop fsck / -files -blocks -locations`. Normally programs honor a parameter called max replication, which in your case should be 4, but it seems like Accumulo doesn't always do that. https://issues.apache.org/jira/browse/ACCUMULO-683 Is this causing any problems, or are you just worried about the errors in the logs?
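If you want to cap requested replication cluster-wide, HDFS exposes a hard upper bound; a sketch for hdfs-site.xml, with the value 4 chosen to match the 4-node cluster here (note that per ACCUMULO-683 a client may still request more; the NameNode then rejects or clamps the request depending on version):

```xml
<!-- Hard upper bound on the replication factor HDFS will accept -->
<property>
  <name>dfs.replication.max</name>
  <value>4</value>
</property>
```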
02-02-2016
03:02 PM
@Benjamin Leonhardi I'm trolling you