Member since: 07-19-2018 | Posts: 613 | Kudos Received: 101 | Solutions: 117

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5093 | 01-11-2021 05:54 AM |
| | 3421 | 01-11-2021 05:52 AM |
| | 8789 | 01-08-2021 05:23 AM |
| | 8383 | 01-04-2021 04:08 AM |
| | 36681 | 12-18-2020 05:42 AM |
01-09-2020
11:07 AM
1 Kudo
Based on your content, the values are accessed as follows: $.HR.finance.name and $.HR.finance.age
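For reference, those two JsonPath expressions just walk the nesting of the document. Here is a minimal sketch of the equivalent access using plain Python dicts; the JSON body below (and the "Alice"/30 values) is reconstructed from the two paths, not taken from the original question:

```python
import json

# Hypothetical document shaped to match $.HR.finance.name / $.HR.finance.age
doc = json.loads('{"HR": {"finance": {"name": "Alice", "age": 30}}}')

# $.HR.finance.name and $.HR.finance.age correspond to:
name = doc["HR"]["finance"]["name"]
age = doc["HR"]["finance"]["age"]
```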
01-08-2020
11:22 AM
3 Kudos
Is the port open? If something is blocking the port (firewall, iptables, etc.), that will cause an issue. Also remove "localhost" from Hostname. Out of the box this should work on all NiFi nodes with the Hostname box empty (No value set).
01-08-2020
08:41 AM
Anyone wishing to work with these Parquet Readers in a previous version of NiFi should take a look at my post here: https://community.cloudera.com/t5/Support-Questions/Can-I-put-the-NiFi-1-10-Parquet-Record-Reader-in-NiFi-1-9/td-p/286465
01-08-2020
07:36 AM
Additional working notes on this task:

To get the nar files I needed, I downloaded NiFi 1.10 and copied the files from /root/nifi-1.10.0/lib to a custom folder on all my NiFi nodes (/app/nifi/custom-libs/). I hit the following error and needed one more nar file:

While loading 'org.apache.nifi:nifi-parquet-nar:1.10.0' unable to locate exact NAR dependency 'org.apache.nifi:nifi-hadoop-libraries-nar:1.10.0'

Once I had both nars I was able to restart and see the following in my nifi log (set to DEBUG):

org.apache.nifi.processors.parquet.FetchParquet
org.apache.nifi:nifi-parquet-nar:1.10.0 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.10.0.nar-unpacked
org.apache.nifi:nifi-parquet-nar:1.9.0.3.4.1.1-4 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.9.0.3.4.1.1-4.nar-unpacked
org.apache.nifi.processors.parquet.PutParquet
org.apache.nifi:nifi-parquet-nar:1.10.0 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.10.0.nar-unpacked
org.apache.nifi:nifi-parquet-nar:1.9.0.3.4.1.1-4 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.9.0.3.4.1.1-4.nar-unpacked
org.apache.nifi.processors.parquet.ConvertAvroToParquet
org.apache.nifi:nifi-parquet-nar:1.10.0 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.10.0.nar-unpacked
org.apache.nifi:nifi-parquet-nar:1.9.0.3.4.1.1-4 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.9.0.3.4.1.1-4.nar-unpacked
org.apache.nifi.parquet.ParquetRecordSetWriter
org.apache.nifi:nifi-parquet-nar:1.10.0 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.10.0.nar-unpacked
org.apache.nifi.parquet.ParquetReader
org.apache.nifi:nifi-parquet-nar:1.10.0 || /var/lib/nifi/work/nar/extensions/nifi-parquet-nar-1.10.0.nar-unpacked

Anyone working with NiFi and custom nars, or nars from another version, also needs to remove the NiFi "work" directory after all changes. In my HDF cluster that was rm -rf /var/lib/nifi/work on all 4 NiFi nodes. If this folder is not deleted, it will not be recreated with the new nar files, and the restart can break startup completely.

Now that NiFi was starting, I could see 1.10 in /var/lib/nifi/work/extensions, but I was not seeing any of the 1.10 processors or controller services. I did another work-directory removal, restarted all NiFi again, and was then able to see all the 1.10 Parquet processors and the Parquet Reader & Writer controller services (what I needed).

Once I was working with the Reader/Writer in a NiFi ConvertRecord processor, I got an error stating that the 1.10 RecordReader was not compatible with the 1.9 RecordReaderFactory. This required more nar files: nifi-standard-services-api-nar-1.10.0.nar, nifi-standard-nar-1.10.0.nar, nifi-record-serialization-services-nar-1.10.0.nar. Copied to all nodes, deleted the work dir, restarted NiFi from Ambari. Looks like I am going to have to use 1.10 versions of everything related to the flow (ConvertRecord & CSVWriter). Thanks @MattWho, you are the boss!
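The staging steps above (copy the needed nars to a custom lib folder on each node, then delete the work directory so NiFi rebuilds it on restart) can be sketched roughly as follows. The function name, glob patterns, and path layout are illustrative assumptions based on the notes above, not part of any NiFi API:

```python
import shutil
from pathlib import Path

# Hypothetical helper mirroring the manual steps from the notes above.
def stage_nars(src_lib, custom_lib, work_dir,
               patterns=("nifi-parquet-nar-*.nar",
                         "nifi-hadoop-libraries-nar-*.nar")):
    """Copy matching NAR files into the custom lib folder, then remove
    NiFi's work directory so it is rebuilt with the new NARs on restart."""
    src_lib, custom_lib, work_dir = Path(src_lib), Path(custom_lib), Path(work_dir)
    custom_lib.mkdir(parents=True, exist_ok=True)
    copied = []
    for pattern in patterns:
        for nar in sorted(src_lib.glob(pattern)):
            copied.append(Path(shutil.copy2(nar, custom_lib / nar.name)))
    if work_dir.exists():
        shutil.rmtree(work_dir)  # NiFi recreates this directory at startup
    return copied
```

This would still need to be run on every node (or pushed out via your configuration tooling), and NiFi restarted afterwards, as described above.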
01-08-2020
05:36 AM
It means that there are known issues, and it's not recommended to use it without realizing you may encounter inconsistencies. I read up on this and it appears to only be an issue with some processors. I would expect a 1.10.1 very soon to address the major bugs.
01-08-2020
04:38 AM
This is a recent change. Based on the amount of feedback it has created just within the Community here, we can expect more changes to the open-source model for access to evaluation versions. However, since the merger, there are many big changes afoot, including a move away from the mostly open-source Hortonworks approach toward Cloudera's new business models.
01-08-2020
04:01 AM
I believe this is a bug in 1.10, which has an active Jira to be fixed in the next version. 1.10 is not ready for "production"... If this reply resolved your issue, please mark it as an accepted solution.
01-06-2020
04:40 PM
@J1TEN Please open a new case/question rather than responding to an old topic. Also, take a look at the Articles section; I just posted how to use the Schema Registry API, and another example of how to do it in NiFi.
01-06-2020
05:57 AM
Congrats, that is a substantial NiFi cluster. Before other collaborators get in to comment, we still want some more details. @MattWho can probably help better than I can, so let's make sure he has more than enough context. A few more questions:

- Can you describe your disk arrangement?
- Can you share your Thread Count settings (reference link in my previous reply)?
- What are the NiFi min/max memory settings?
- Can you comment on any additional steps you have taken from known NiFi performance tuning? https://community.cloudera.com/t5/Community-Articles/HDF-NIFI-Best-practices-for-setting-up-a-high-performance/ta-p/244999
01-06-2020
05:41 AM
The newest CDP and HDP releases require a customer username and password. Anything under the archive.cloudera.com path requires this level of authorization. Here is some specific reference on how to obtain access (for HDP): Starting with the HDP 3.1.5 release, access to HDP repositories requires authentication. To access the binaries, you must first have the required authentication credentials (username and password).
Authentication credentials for new customers and partners are provided in an email sent from Cloudera to registered support contacts. Existing users can file a non-technical case within the support portal (https://my.cloudera.com) to obtain credentials.
Previously, HDP repositories were located on AWS S3. As of HDP 3.1.5 / Ambari 2.7.5, repositories have been moved to https://archive.cloudera.com
When you obtain your authentication credentials, use them to form the URL where you can access the HDP repository in the HDP archive, as shown below. Insert your username and password at the front of the URL.
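Forming the URL can be sketched like this. Note that any special characters in the username or password must be percent-encoded or the URL will be malformed; the repo path in the example is a placeholder, not a real HDP repository path:

```python
from urllib.parse import quote

def authenticated_repo_url(username: str, password: str, repo_path: str) -> str:
    """Embed credentials at the front of an archive.cloudera.com URL.
    Special characters in the credentials are percent-encoded."""
    return (
        "https://"
        f"{quote(username, safe='')}:{quote(password, safe='')}"
        f"@archive.cloudera.com/{repo_path.lstrip('/')}"
    )

# Placeholder path for illustration only:
url = authenticated_repo_url("user", "p@ss", "some/repo/path")
```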