Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4062 | 08-20-2018 08:26 PM |
| | 1957 | 08-15-2018 01:59 PM |
| | 2382 | 08-13-2018 02:20 PM |
| | 4130 | 07-23-2018 04:37 PM |
| | 5041 | 07-19-2018 12:52 PM |
10-05-2016
07:55 PM
1 Kudo
@vshukla Has this been backported into 0.6.0, which is the version of Zeppelin in HDP 2.5?
10-05-2016
04:51 AM
I have not tried this yet, but just a suggestion: @Oliver Meyn, have you tried using the PutHDFS processor? Pull the 'core-site.xml' and 'hdfs-site.xml' from the target cluster, store them in a location on your NiFi cluster, and reference them in the processor. Verify that DNS resolves; if DNS cannot be resolved, use IPs in the site XML files. A rough sketch of the properties is below.
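For example, a minimal sketch of the relevant PutHDFS properties, assuming the site files were copied under /etc/nifi/conf and a hypothetical target directory /data/landing:

```
# PutHDFS processor properties (set in the NiFi UI)
Hadoop Configuration Resources : /etc/nifi/conf/core-site.xml,/etc/nifi/conf/hdfs-site.xml
Directory                      : /data/landing
Conflict Resolution Strategy   : replace
```

If DNS is the problem, fs.defaultFS in the copied core-site.xml can point at the NameNode's IP instead of its hostname.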
10-05-2016
04:18 AM
1 Kudo
@chandramouli muthukumaran Good question. Let me add Apache NiFi as one of your options, as it is the "easiest" to implement: you orchestrate the entire data flow with a neat UI. NiFi was designed for real-time, simple event stream processing, so if your use case is within this realm, NiFi is the way to go. Moreover, with MiNiFi you can have the process running on a small device (footprint ~40 MB) that pushes data to the data center. If you require complex event processing, HDF now comes with Storm. So with NiFi you get ease of operations and development, data lineage, message delivery guarantees, and a highly resilient solution, not to mention back pressure when the target repository is down.

Spark - if you require only complex event processing and can handle micro-batching (intervals as low as 0.5 seconds, as sketched below), then Spark may be a good fit. Spark Streaming is, in my opinion, easier to develop in than Storm. No UI.

Storm - a VERY powerful complex stream processing engine with virtually zero latency. Storm now comes with the capability to do rolling and tumbling windows, and the latest release has back pressure. No UI.

Hope that was helpful to start with.
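To illustrate the micro-batching point, a minimal Spark Streaming sketch with a 0.5-second batch interval (the socket source, host, and port are hypothetical):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="MicroBatchSketch")
# Each batch covers 0.5 seconds of incoming data
ssc = StreamingContext(sc, 0.5)

# Hypothetical source: lines of text arriving on a local socket
lines = ssc.socketTextStream("localhost", 9999)

# Classic word count, computed over each half-second batch
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```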
10-04-2016
02:31 PM
2 Kudos
@Mahesh Mallikarjunappa When you use Pig to load into HBase, use org.apache.pig.backend.hadoop.hbase.HBaseStorage; when you use Hive to load into HBase, use org.apache.hadoop.hive.hbase.HBaseStorageHandler. Each is specific to its own technology. Sketches of both follow.
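For example, hedged sketches of both (the table, file, and column names are hypothetical; in Pig's HBaseStorage the first field of the relation becomes the row key):

```
-- Pig: store a relation into the HBase table 'my_table', mapping 'name' to cf:name
raw = LOAD '/data/people.tsv' USING PigStorage('\t') AS (id:chararray, name:chararray);
STORE raw INTO 'hbase://my_table'
  USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name');
```

```sql
-- Hive: a table backed by the same HBase table via the storage handler
CREATE TABLE hbase_people (key string, name string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:name')
TBLPROPERTIES ('hbase.table.name' = 'my_table');
```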
10-04-2016
04:18 AM
That article is not available any longer; not sure why. Which GC options were invalid or conflicting?
09-29-2016
09:54 PM
@Josh Elser Extremely helpful article. Nice work.
09-29-2016
04:35 AM
I found the issue. This is a bug; engineering is working on it.
09-29-2016
03:09 AM
@Vasilis Vagias Hive external tables should show up in Atlas; this functionality worked recently, so I'm not sure what happened. @Anderw Ahn, please confirm my understanding.
09-28-2016
06:30 PM
1 Kudo
HDP 2.5. I have run import-hive.sh. Then I made some changes: I added a new table and created a view on that table. The new table and view don't come up when searching under the DSL type hive_table. The new table shows up in general search but is missing its schema. I re-ran import-hive.sh and the columns still do not show up.
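For reference, a sketch of the kind of change that doesn't show up (the table and view names are hypothetical):

```sql
-- Created in Hive after the initial import-hive.sh run
CREATE TABLE events (id INT, payload STRING);
CREATE VIEW recent_events AS SELECT id, payload FROM events;
```

The Atlas DSL search that comes up empty is of the form `hive_table where name = 'events'`.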
Labels:
- Apache Atlas
09-28-2016
03:09 AM
@Ali Bajwa Tons of great stuff in this article.