Member since: 07-30-2019
Posts: 3426
Kudos Received: 1631
Solutions: 1010

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 387 | 01-13-2026 11:14 AM |
|  | 741 | 01-09-2026 06:58 AM |
|  | 776 | 12-17-2025 05:55 AM |
|  | 837 | 12-15-2025 01:29 PM |
|  | 700 | 12-15-2025 06:50 AM |
04-27-2017
02:06 PM
2 Kudos
@Anishkumar Valsalam In a NiFi cluster, you need to make sure you have uploaded your new custom component nars to every node in the cluster. I do not recommend adding your custom nars directly to the existing NiFi lib directory. While this works, it can become annoying to manage when you upgrade NiFi versions. NiFi allows you to specify additional lib directories where you can place your custom nars; then when you upgrade, the new version can simply be pointed at these existing additional lib dirs. Adding additional lib directories to your NiFi is as simple as adding a property per directory to the nifi.properties file, for example:
nifi.nar.library.directory.lib1=/nars-custom/lib1
nifi.nar.library.directory.lib2=/nars-custom/lib2
Note: Each property prefix must be unique (i.e. lib1 and lib2 in the above examples). These lib directories must be accessible by the user running your NiFi instance.
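As a reference sketch, setup on each node might look like the following (the /nars-custom paths come from the example above; the "nifi" service user and the nar filename are assumptions):

```
# Sketch: stage custom nars in additional lib directories on every node.
# The nar filename and the "nifi" service user are placeholders.
mkdir -p /nars-custom/lib1 /nars-custom/lib2
cp my-custom-processors.nar /nars-custom/lib1/
chown -R nifi:nifi /nars-custom

# Then add to conf/nifi.properties (each property prefix must be unique):
#   nifi.nar.library.directory.lib1=/nars-custom/lib1
#   nifi.nar.library.directory.lib2=/nars-custom/lib2
# Restart NiFi on each node for the new nars to be loaded.
```

Thanks, Matt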
04-27-2017
02:06 PM
1 Kudo
@spdvnz There currently is no HDF release that is based off Apache NiFi 1.1.2. The most current HDF release as of this response is HDF 2.1.2. The HDF 2.1.x versions are all based off Apache NiFi 1.1.0 plus additional Apache bug fixes. The bug fixes included in each release can be found in its release notes: HDF 2.1.0 release notes, HDF 2.1.1 release notes,
HDF 2.1.2 release notes. The documentation for doing an Ambari-based upgrade to HDF 2.1.2 can be found here: HDF 2.1.2 Ambari based upgrade guide. Thank you, Matt
04-26-2017
02:32 PM
@Jatin Kheradiya In addition, ZooKeeper, which is used for cluster elections, will not work well using localhost, since quorum cannot be properly established between the nodes. Even assuming you fix ZooKeeper to use valid public IP addresses or publicly resolvable hostnames, you still need to make sure each node is configured to use a publicly resolvable hostname or IP as well. When a node starts, it communicates with ZK to see if a cluster coordinator has already been elected, or it throws its hat in the ring to become the coordinator itself. Suppose "localhost" is elected as the coordinator: all other nodes will be informed of this via ZK and will try to send heartbeats directly to "localhost", which will of course fail. Dave is correct that you must avoid using localhost anywhere when installing a cluster; see the sketch below.
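As a sketch, the relevant entries in conf/nifi.properties on each node might look like the following (the node1/node2/node3.example.com hostnames and ports are placeholders):

```
# Sketch: per-node cluster settings; hostnames are placeholders.
# Never use localhost here.
nifi.cluster.is.node=true
nifi.cluster.node.address=node1.example.com
nifi.cluster.node.protocol.port=11443

# The ZooKeeper connect string must also use resolvable hostnames:
nifi.zookeeper.connect.string=node1.example.com:2181,node2.example.com:2181,node3.example.com:2181
```

Thanks, Matt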
04-26-2017
02:20 PM
2 Kudos
@Bhagyashree Chandrashekhar NiFi at its core is not tied to any specific data format. NiFi can ingest data of any format (it is just treated as binary). Data is tracked in NiFi via a NiFi FlowFile. A FlowFile consists of metadata about the actual content plus the content itself, and it is this FlowFile metadata that is the unit of transfer between processor components in a NiFi dataflow. When it comes to the various available processor components, they may care about the content format if their job is to manipulate that content in any way. I am not familiar with EBCDIC data, but from what I have read of the EBCDIC legacy file format, it can be either EBCDIC-binary or EBCDIC-ASCII. If the format is ASCII, you may be able to use processors like ReplaceText to manipulate the actual content. If it is binary, you may need to write your own custom NiFi processor component that can process that data type, or you might be able to use one of NiFi's scripting processors to run an external script that converts these files to ASCII, after which you could modify them with existing processors; see the sketch below. There are no EBCDIC-specific processors in NiFi as of now.
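For example, if the files use a single-byte EBCDIC code page, something as simple as iconv could do the conversion when invoked from an ExecuteStreamCommand or ExecuteProcess processor (a sketch; the EBCDIC-US code page and the filenames are assumptions about your data):

```
# Sketch: convert an EBCDIC (EBCDIC-US / code page 037) file to ASCII.
# The code page and filenames are assumptions; verify against your source system.
iconv -f EBCDIC-US -t ASCII input.ebcdic > output.ascii
```

Thank you, Matt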
04-25-2017
09:22 PM
@Simon Jespersen Try using -vvv on your sftp command outside of NiFi to get more detail on why it is not working:
sftp -vvv -i /etc/nifi-resources/keys/<private_key.pem> -oPort=2222 wftpb086@147.29.151.71 Matt
04-25-2017
09:22 PM
2 Kudos
@Simon Jespersen
If you cannot get this to work outside of NiFi, it is not going to work inside of NiFi either. Looking over your statement above, I see a couple of things...
1. You are trying to use a "ppk" file. This is a PuTTY private key, which is not going to be supported by SFTP. You should be using a private key in PEM format.
2. SSH is very particular about the permissions set on private keys. SSH will reject the key if the permissions are too open. Once you have your PEM key, make a copy of it for your NiFi application and make sure that copy is owned by the user running NiFi. The permissions on the private key must also be 600:
nifi.root 770 (-rwxrwx---) will not be accepted by SSH
nifi.root 600 (-rw-------) will be accepted.
You cannot grant groups access to your private key; see the sketch below.
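As a sketch, the conversion and permission steps might look like this (requires puttygen from the putty-tools package; the key filenames and the "nifi" user are placeholders, and the key directory follows the path from your sftp command):

```
# Sketch: convert a PuTTY .ppk key to OpenSSH/PEM and lock down permissions.
# Filenames and the "nifi" user are placeholders.
puttygen sftp_key.ppk -O private-openssh -o sftp_key.pem
cp sftp_key.pem /etc/nifi-resources/keys/sftp_key.pem
chown nifi:nifi /etc/nifi-resources/keys/sftp_key.pem
chmod 600 /etc/nifi-resources/keys/sftp_key.pem   # SSH rejects group/other access
```

Thanks, Matt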
04-25-2017
06:36 PM
@Avijeet Dash
Once a file is ingested into NiFi and becomes a FlowFile, its content will remain in NiFi's content repository until all active FlowFiles in your dataflow(s) that point at that content claim have been satisfied. By satisfied, I mean they have reached a point in your dataflow(s) where those FlowFiles have been auto-terminated. If FlowFile archiving is enabled in your NiFi, the FlowFile content will be moved to an archive directory once no active FlowFiles point at it any longer. The length of time it is retained in the archive directory is determined by the archive configuration properties in the nifi.properties file. The defaults are archiving enabled, with retention set to 12 hours or 50% disk utilization.
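Those defaults correspond to the following entries in nifi.properties (shown here as a reference; the values are the defaults):

```
# Content repository archive settings (defaults shown)
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
```

Thanks, Matt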
04-25-2017
06:24 PM
@Bala Vignesh N V The NiFi admin guide covers installing NiFi on Windows. It is done via the command line using the NiFi tar.gz file: http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.0/bk_dataflow-administration/content/how-to-install-and-start-nifi.html
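As a rough sketch, the steps from a Windows command prompt look like this (assuming a tar utility is available on the system, and using a placeholder archive name/version):

```
rem Sketch: extract the NiFi binary archive and start it on Windows.
rem The archive name/version is a placeholder.
tar -xzf nifi-1.1.2-bin.tar.gz
cd nifi-1.1.2\bin
run-nifi.bat
```

Thanks, Matt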
04-25-2017
06:17 PM
@Dmitro Vasilenko The ConsumeKafka processor will only accept dynamic properties that are valid Kafka consumer configuration properties. max.message.bytes is a server (broker) configuration property. I believe what you are really looking for on the consumer side is max.partition.fetch.bytes. This property will be accepted by the ConsumeKafka processor, and you will not see the "Must be a known configuration parameter for this Kafka client" invalid tooltip notification. Just as an FYI, I don't get pinged about any new answers/comments you make without the @<username> notation.
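In other words, add it as a dynamic property on the ConsumeKafka processor (the + button in the Properties tab); the value below is just an illustration:

```
# Sketch: dynamic property on ConsumeKafka; the value is a placeholder.
max.partition.fetch.bytes = 10485760
```

Thanks, Matt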
04-25-2017
05:25 PM
@Simon Jespersen Posted an answer to the above question here: https://community.hortonworks.com/questions/98384/listsftp-failed-to-obtain-connection-to-remote-hos.html