Member since: 07-28-2016
Posts: 37
Kudos Received: 9
Solutions: 0
01-06-2020
10:11 AM
What version of NiFi are you running? Our source Oracle systems will be upgrading from 12c to 19c and I want to make sure our flows will still work. I'm on NiFi 1.8.0.
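For reference, these are the DBCPConnectionPool controller service settings I expect to have to revisit for the new driver; the URL, service name, and jar path below are placeholders, not our real connection details:

```
Database Connection URL     : jdbc:oracle:thin:@//db-host:1521/MYSERVICE
Database Driver Class Name  : oracle.jdbc.driver.OracleDriver
Database Driver Location(s) : /opt/nifi/drivers/ojdbc8.jar
```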
03-14-2018
01:07 PM
Got it, thanks!
03-08-2018
08:25 AM
1 Kudo
That's completely valid. HTTP has the advantage of not requiring any new configuration when installing NiFi; in some environments, opening an additional port can create extra work with network/security teams. Another advantage of HTTP is that you can go through a proxy if one is required to allow communication between sites. If you expect high load over S2S and can manage the extra configuration, RAW is certainly the better option.
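For reference, these are the relevant entries in nifi.properties; the host name and port below are just example values:

```
# RAW site-to-site: requires a dedicated socket port to be opened
nifi.remote.input.host=nifi-node-1.example.com
nifi.remote.input.secure=false
nifi.remote.input.socket.port=10443

# HTTP site-to-site: uses the existing NiFi web port, no extra port needed
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
```

Hope this helps.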
08-03-2018
09:44 PM
@Benjamin Hopp @Chad Woodhead I had the exact same issue and tried bringing down both of my NiFi nodes, waiting a few minutes, and bringing them back online. Then I turned the PutHDFS processor back on and it worked properly. Has anyone figured out why this fixes the issue, or what causes it in the first place?
03-14-2018
06:52 PM
Thank you @Chad Woodhead
01-18-2017
02:49 PM
2 Kudos
Hi @Chad Woodhead, this is expected. Hive rules in Ranger apply only when access goes through HiveServer2 (JDBC). When you access Hive with the Hive CLI, only the HDFS rules in Ranger apply (so no masking).
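A quick way to see the difference, assuming a masking policy exists on the column (the table and column names below are only illustrative):

```sql
-- Via HiveServer2 (e.g. beeline -u "jdbc:hive2://hs2-host:10000/default"):
-- Ranger Hive policies apply, so the masking policy on ssn takes effect
SELECT ssn FROM customers;

-- Via the Hive CLI (hive -e "..."): Hive reads HDFS directly,
-- so only Ranger HDFS policies apply and no masking happens
SELECT ssn FROM customers;
```

Same query, different enforcement, depending entirely on the access path.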
12-23-2016
10:43 AM
2 Kudos
I have installed HDP 2.5 with Atlas 0.7.0.2.5, and I don't need HBase to use Atlas. I'm currently using Solr (Ambari Infra) as the index engine and BerkeleyJE as the storage engine. The real dependency between Atlas and HBase is the entity audit repository, which uses HBase by default. It is not easy to change, but while investigating the source code I found a special property, atlas.EntityAuditRepository.impl, that you have to set to org.apache.atlas.repository.audit.InMemoryEntityAuditRepository (it is case sensitive, so copy and paste the property name and value exactly). @Chad Woodhead, add the above property as shown in the screenshot, restart the Atlas services, and you will have Atlas without HBase 🙂

And now some details about how I found that property: this link from GitHub shows why Atlas needs HBase by default, and this link from GitHub lists the values available for configuring the audit repository. Don't worry about the rest of the HBase-related properties; Atlas will use the value of atlas.graph.storage.hbase.table to create the table in whichever storage backend you choose (BerkeleyJE or Cassandra). With these properties my Atlas services work very well. I hope this information helps you avoid installing HBase in your clusters.
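To sum up, the relevant entries in atlas-application.properties look roughly like this (the storage directory is a placeholder; adjust it for your environment):

```properties
# Storage backend: BerkeleyJE instead of HBase
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=/var/lib/atlas/data/berkeley

# Index backend: Solr (Ambari Infra)
atlas.graph.index.search.backend=solr5

# Audit repository: keep entity audits in memory instead of HBase
atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.InMemoryEntityAuditRepository
```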
12-15-2016
01:24 PM
Thanks. I just did my own testing to see whether "for columns" would also update the TABLE_PARAMS table, and I found that it does not. For instance, when I run "analyze table svcrpt.predictive_customers compute statistics;", the transient_lastDdlTime entry in TABLE_PARAMS gets updated, but when I run "analyze table svcrpt.predictive_customers compute statistics for columns;", transient_lastDdlTime is not updated. So does this mean "for columns" does not update the basic stats?
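For anyone following along, these are the statements I compared, plus a way to inspect the recorded stats; the note on where column stats land is my understanding of the standard metastore schema:

```sql
-- Table-level (basic) stats: updates TABLE_PARAMS entries such as
-- numRows, totalSize, and transient_lastDdlTime
ANALYZE TABLE svcrpt.predictive_customers COMPUTE STATISTICS;

-- Column-level stats: stored in the metastore's TAB_COL_STATS /
-- PART_COL_STATS tables, which would explain TABLE_PARAMS staying untouched
ANALYZE TABLE svcrpt.predictive_customers COMPUTE STATISTICS FOR COLUMNS;

-- Inspect what Hive recorded for the table
DESCRIBE FORMATTED svcrpt.predictive_customers;
```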