Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255
My Accepted Solutions
| Views | Posted |
|---|---|
| 4154 | 12-03-2018 02:26 PM |
| 3109 | 10-16-2018 01:37 PM |
| 4237 | 10-03-2018 06:34 PM |
| 3067 | 09-05-2018 07:44 PM |
| 2331 | 09-05-2018 07:31 PM |
12-15-2016
06:16 PM
2 Kudos
Thanks @Karthik Narayanan. I was able to resolve the issue. Before diving into the solutions, one point worth stating up front: with NiFi 1.0 and 1.1,
LZO compression cannot be achieved using the PutHDFS processor. The only supported compressions are the ones listed in the Compression codec drop-down. With the LZO-related classes present in core-site.xml, the NiFi processor fails to run. The suggestion from the previous HCC post was to remove those classes, but I needed to retain them so that NiFi's copy and HDP's copy of core-site.xml always stay in sync.
NiFi 1.0
I built the hadoop-lzo jar from source, added it to the NiFi lib directory, and restarted NiFi. This resolved the issue, and I am able to use PutHDFS without it erroring out.
NiFi 1.1
Configure the processor's additional classpath to point to the jar file. No restart required.
Note: this does not provide LZO compression; it just lets the processor run without ERRORs even when the LZO classes are present in core-site.xml.
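To make the NiFi 1.1 step concrete, here is a minimal sketch, assuming the property in question is the processor's Additional Classpath Resources and that the jar sits at a hypothetical path:
Additional Classpath Resources = /opt/nifi/lib-ext/hadoop-lzo.jar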
UNSATISFIED LINK ERROR WITH SNAPPY
I also had an issue with the Snappy compression codec in NiFi and was able to resolve it by setting the path to the .so file. This did not work on the ambari-vagrant boxes, but I was able to get it working on an OpenStack cloud instance; the issue on the VirtualBox VM could be systemic.
To resolve the link error, I copied the .so files from the HDP cluster and recreated the links. Then, as @Karthik Narayanan suggested, I added the java library path pointing to the directory containing the .so files. Below is the list of .so files and links.
And below is the bootstrap configuration change.
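As a rough sketch of the kind of bootstrap.conf change described above (the directory is a hypothetical path, and java.arg.15 simply needs to be an index not already used in the file):
java.arg.15=-Djava.library.path=/opt/nifi/native
A NiFi restart is required for bootstrap.conf changes to take effect.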
12-09-2016
12:17 PM
1 Kudo
Unfortunately, not every property currently supports the variable registry. You can tell from a processor's documentation by looking at the property description: if it says "Supports Expression Language: true", then you can reference variables in that property. For example, with PutHDFS it looks like only the Directory property currently supports it: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/index.html
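For illustration only (the variable name and path are hypothetical): if a custom properties file registered via nifi.variable.registry.properties in nifi.properties contains
input.dir=/data/incoming
then the PutHDFS Directory property can be set to ${input.dir}. A property whose documentation says "Supports Expression Language: false" would not resolve such a reference.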
12-06-2016
12:38 AM
@Bryan Bende @brosander Thanks a lot.
11-17-2016
05:51 PM
1 Kudo
This is a known problem when Phoenix is enabled; see similar posts here: https://community.hortonworks.com/questions/57874/error-unable-to-find-orgapachehadoophbaseipccontro.html That class actually comes from Phoenix: https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/controller/ServerRpcControllerFactory.java It will be fixed in Apache NiFi 1.1 by allowing users to specify the path to the Phoenix client JAR. For now you can copy phoenix-client.jar to nifi_home/work/nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.1.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/, adjusting the directories for your version.
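As an example (the source path is illustrative and the NAR directory name varies by version), the workaround amounts to something like:
cp /usr/hdp/current/phoenix-client/phoenix-client.jar nifi_home/work/nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.1.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/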
11-23-2016
04:52 PM
@Andy LoPresto I have generated all the .pem files as you suggested and tried to test from the openssl command line. It looks like it is able to do the handshake, but shows an alert/warning towards the end. I am attaching the log from openssl.
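For reference, the kind of openssl command-line test being described would look something like this, with the host, port, and file names as placeholders:
openssl s_client -connect nifi-host:9443 -cert client.pem -key client-key.pem -CAfile nifi-cert.pem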
11-09-2016
06:26 AM
@Artem Ervits it's dummy data.
04-05-2019
05:12 AM
Hello, Can I also connect a local NiFi (not on HDF) to a remote HBase that is on HDP?
1. If yes, is there no need to change hbase.regionserver.info.bindAddress from 0.0.0.0 to the IP of my HDP node in hbase-site.xml and core-site.xml?
2. Should I give both core-site.xml and hbase-site.xml to the controller service?
3. I used only hbase-site.xml and, as I said, changed the IP from 0.0.0.0 in this file to my HDP IP, and I get an error for my ZooKeeper as below. Should I set that also?
Thanks, Best regards, Shohreh
10-25-2016
06:15 PM
3 Kudos
There is no hard-coded limit, but there is definitely a practical limit in terms of performance. FlowFile attributes are held in memory (in addition to being persisted to disk), so more attributes mean more Java objects on the heap, which means more garbage-collection pressure. I don't think there is any way to say exactly what the limit is, because it depends on how much data is in the attributes, how much memory you have, how many flow files there are, etc.
10-25-2016
02:15 PM
@Juthika Shenoy This error indicates an authorization issue, which is separate from authentication. I would start by looking at your nifi-user.log to see which DN is successfully authenticating but being denied authorization. Then verify that that DN is included, along with your node identity(s), in the users.xml file. If it is not, that is your problem.
I noticed from your post above that you never provided an "Initial Admin Identity" in your authorizers.xml file. This is a must in order to get an initial admin added to the system, so that the initial admin can then add additional users via the UI. You can take your user DN from the nifi-user.log and add it to your authorizers.xml file:
<property name="Initial Admin Identity">Add user DN Here</property>
Also make sure you still have your Node Identity(s) set in the authorizers.xml file as well:
<property name="Node Identity 1">DN From Node 1 Cert Here</property>
<property name="Node Identity 2">DN From Node 2 Cert Here</property>
etc. If every node is using the same cert, that cert must have a Subject Alternative Name (SAN) entry for each node's FQDN. From a security standpoint, using one cert for multiple servers is not recommended. Finally, you will need to stop your NiFi nodes, delete the existing users.xml and authorizations.xml files from each of them, and then restart. NiFi only creates those two files once; after they have been created, changes to the authorizers.xml file will not trigger updates to them. Thanks, Matt
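For context, a sketch of what the complete file-based authorizer entry in authorizers.xml might look like once the identities are filled in (all DNs below are placeholders):
<authorizer>
    <identifier>file-provider</identifier>
    <class>org.apache.nifi.authorization.FileAuthorizer</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
    <property name="Legacy Authorized Users File"></property>
    <property name="Node Identity 1">CN=node1.example.com, OU=NIFI</property>
    <property name="Node Identity 2">CN=node2.example.com, OU=NIFI</property>
</authorizer>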