Member since: 07-30-2019
Posts: 3390
Kudos Received: 1617
Solutions: 999
04-28-2017
06:02 PM
1 Kudo
@Gu Gur You can right-click on any NiFi processor and select "Usage" to get the complete documentation on that processor. Any processor whose documentation states that it supports "Dynamic Properties" will allow you to add additional properties in the documented format. To add an additional property, simply click the plus (+) icon in the upper-right corner of the processor's Properties tab. An illustrative example follows below. Thanks, Matt
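As a purely illustrative example (the property name and value below are mine, not from any processor's documentation), a dynamic property added to an UpdateAttribute processor might look like:
Property name: filename
Property value: ${filename}-${now():format('yyyyMMdd')}
UpdateAttribute treats each dynamic property name as the attribute to set and evaluates the value with the NiFi Expression Language.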
04-28-2017
05:57 PM
1 Kudo
@Raphaël MARY A secured NiFi does not force its processor components to be secured. A secured NiFi can communicate with non-secure endpoints. Security at the processor component level is handled via configuration on the processor itself, never at the NiFi core level. That being said, I am really not sure what your Twitter connection issue is. If the GetTwitter processor did support TLS, I would expect it to have a property for pointing at an "SSL Context Service" controller service, which it does not. It looks like authorization is handled purely via the Consumer Key/Secret and Access Token/Secret properties configured on the processor. You could try setting the GetTwitter processor's log level to DEBUG (a sketch is below) and see if it produces any additional output in the nifi-app.log that may lead to a cause. Thanks, Matt
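A minimal sketch of raising that log level, assuming a standard conf/logback.xml (the logger name below matches the Apache NiFi GetTwitter class, but verify it against your build; NiFi's default logback.xml rescans periodically, so no restart should be needed):
<logger name="org.apache.nifi.processors.twitter.GetTwitter" level="DEBUG"/>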
04-28-2017
11:45 AM
@Sanaz Janbakhsh Once you have successfully validated your HDF upgrade, you can get rid of your backup. The backup of the Ambari resources was not always part of the procedure. It was added mainly because we found users were running an HDF upgrade on their HDP Ambari installs, and by doing so they ended up breaking their HDP installs. HDF and HDP are two different product lines, and the HDF mpack does not add to the existing services; it makes available only those services in the HDF stack. So if you install the HDF mpack on an HDP Ambari, you can really mess up your HDP cluster. That is why we added the steps to make a backup first. After upgrading and verifying that all services are accessible and running without error, there is probably no reason to keep the backup much longer than that. A rough sketch of the backup step is below. Thanks, Matt
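For reference, a minimal sketch of what that backup step might look like on the Ambari server host (the paths assume a default Ambari install, and the mpack filename is a placeholder; follow the HDF upgrade guide for the exact procedure for your version):
ambari-server stop
cp -r /var/lib/ambari-server/resources /var/lib/ambari-server/resources.backup
ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-<version>.tar.gz --verbose
ambari-server start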
04-27-2017
02:06 PM
1 Kudo
@Sebastian Carroll The intent of the automated SSL setup is to help users who do not have an external CA quickly build and deploy (in the case of Ambari) a valid keystore and truststore to their NiFi instance(s). Ambari simply uses the tls-toolkit as well, but with some pre-defined parameters, to automate the creation of the keystores and truststore for your Ambari-deployed NiFi cluster. It is really not recommended to use the NiFi CA in Ambari in a production environment. Users are encouraged to use a legitimate CA to sign certificates in production. The reason for this is that there is no inherent trust of any certs signed by the NiFi CA, and every install of an HDF NiFi cluster will have its own CA. So using things like NiFi's S2S protocol between systems deployed using different NiFi CAs adds a lot of additional work/overhead, since you must constantly update the CAs in every system's truststore. If I am understanding you correctly, you are asking for a way to tell Ambari to generate certificates and pass them to an external company CA to get them signed? Since Ambari has no control over an external CA, and the credentials needed to sign certificate requests should be highly protected, I don't see a way to securely automate this entire process. The best that could be done is to automate the creation of the self-signed certificate and certificate signing request. The user would still need to send that request to their company CA to be signed and then import the signed response, once received, back into their keystore. Users would also still need to manually obtain the public key for their company CA in order to create a truststore or add it to one. The problem with having Ambari auto-generate a certificate is that many companies have specific requirements for what must be defined in a server's certificate, and having Ambari provide all possible options sounds like overkill. I don't see why you could not use the NiFi tls-toolkit to generate a certificate that you could then get signed by your own CA. Again, I don't really see how NiFi could automate anything beyond creating the cert and the signing request; a rough sketch of the manual steps is below. If I am missing something here, please let me know. In Ambari-based installs you do not need to use the NiFi CA to create certificates. Simply make sure the NiFi Certificate Authority is not installed. Then, in the NiFi configs within Ambari, configure the SSL settings to point at the PKCS12 or JKS keystore and truststore you manually obtained via your company. The configs by default in Ambari expect that every node uses a keystore with the same name (the content of the keystore should be unique on each node) and that the keystores all use the same password. Thank you, Matt
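A minimal sketch of that manual flow with keytool (all filenames, aliases, and the hostname below are illustrative placeholders, not anything Ambari or the tls-toolkit produces for you):
# 1. Generate a keypair in a new keystore (one per node).
keytool -genkeypair -alias nifi-key -keyalg RSA -keysize 2048 -dname "CN=nifi01.example.com, OU=NIFI" -keystore keystore.jks
# 2. Create a certificate signing request to send to your company CA.
keytool -certreq -alias nifi-key -keystore keystore.jks -file nifi01.csr
# 3. When the signed cert comes back, import the CA's public cert first, then the signed cert.
keytool -importcert -trustcacerts -alias company-ca -keystore keystore.jks -file company-ca.pem
keytool -importcert -alias nifi-key -keystore keystore.jks -file nifi01-signed.pem
# 4. Build the truststore every node will use, containing the CA public cert.
keytool -importcert -trustcacerts -alias company-ca -keystore truststore.jks -file company-ca.pem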
04-27-2017
02:06 PM
2 Kudos
@Anishkumar Valsalam In a NiFi cluster you need to make sure you have uploaded your new custom component NARs to every node in the cluster. I do not recommend adding your custom NARs directly to the existing NiFi lib directory. While this works, it can become annoying to manage when you upgrade NiFi versions. NiFi allows you to specify an additional lib directory where you can place your custom NARs. Then, if you upgrade, the new version can just be pointed at this existing additional lib dir. Adding additional lib directories to your NiFi is as simple as adding additional properties to the nifi.properties file. For example:
nifi.nar.library.directory.lib1=/nars-custom/lib1
nifi.nar.library.directory.lib2=/nars-custom/lib2
Note: Each suffix must be unique (i.e. lib1 and lib2 in the above examples). These lib directories must be accessible by the user running your NiFi instance. Thanks, Matt
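As a rough sketch (the directory names come from the example above; the NAR filename and the nifi user/group are placeholders), deploying a custom NAR on each node would look something like:
mkdir -p /nars-custom/lib1
cp my-custom-processors.nar /nars-custom/lib1/
chown -R nifi:nifi /nars-custom
# Repeat on every node in the cluster, then restart NiFi so the new NARs are loaded.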
04-27-2017
02:06 PM
1 Kudo
@spdvnz There is currently no HDF release that is based off Apache NiFi 1.1.2. The most current HDF release as of this response is HDF 2.1.2. The HDF 2.1.x versions are all based off Apache NiFi 1.1.0 plus additional Apache bug fixes. The bug fixes included in each release can be found in the release notes for each HDF release: HDF 2.1.0 release notes, HDF 2.1.1 release notes, HDF 2.1.2 release notes. The documentation for doing an Ambari-based upgrade to HDF 2.1.2 can be found here: HDF 2.1.2 Ambari based upgrade guide. Thank you, Matt
04-26-2017
02:32 PM
@Jatin Kheradiya In addition, ZooKeeper, which is used for cluster elections, will not work very well using localhost, since quorum will not form properly between the nodes. Assuming you fix ZooKeeper to use valid public IP addresses or publicly resolvable hostnames, you still need to make sure each node is configured to use a publicly resolvable hostname or IP as well. When a node starts, it communicates with ZK to see if a cluster coordinator has already been elected, or it throws its hat in the ring to become the coordinator itself. Assume "localhost" is elected as the coordinator: all other nodes will be informed of this via ZK and try to send heartbeats directly to "localhost", which will of course fail. Dave is correct that you must avoid using localhost anywhere when installing a cluster. A sketch of the relevant properties is below. Thanks, Matt
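For illustration, the relevant nifi.properties entries might look like this (hostnames are placeholders; on each node, set nifi.cluster.node.address and nifi.web.http.host to that node's own resolvable name):
nifi.cluster.node.address=nifi01.example.com
nifi.web.http.host=nifi01.example.com
nifi.zookeeper.connect.string=zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181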
04-26-2017
02:20 PM
2 Kudos
@Bhagyashree Chandrashekhar NiFi at its core is not tied to any specific data format. NiFi can ingest data of any format (it is just treated as binary). Data is tracked in NiFi via a NiFi FlowFile. A FlowFile consists of metadata about the actual content plus the content itself, and it is this FlowFile that is the unit of transfer between processor components in a NiFi dataflow. When it comes to the different available processor components, they may care about the content format if their job is to manipulate that content in any way. I am not familiar with EBCDIC data, but from what I have read, the EBCDIC legacy file format can be either EBCDIC-binary or EBCDIC-ASCII. If the format is ASCII, you may be able to use processors like ReplaceText to manipulate the actual content. If it is binary, you may need to write your own custom NiFi processor component that can process that data type, or you might be able to use one of NiFi's scripting processors to run an external script that converts these files to ASCII, after which you could modify them with existing processors (a rough sketch is below). There are no EBCDIC-specific processors in NiFi as of now. Thank you, Matt
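As a rough sketch of the scripting route (this assumes the input is plain EBCDIC text with no packed-decimal fields, which dd cannot handle), an ExecuteStreamCommand processor could shell out to dd to convert the content:
# Hypothetical ExecuteStreamCommand configuration:
#   Command Path     : /bin/dd
#   Command Arguments: conv=ascii
# dd reads the FlowFile content from stdin and writes the
# EBCDIC-to-ASCII converted bytes to stdout, which becomes
# the new FlowFile content.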
04-25-2017
09:22 PM
@Simon Jespersen Try using -vvv on your sftp command outside of NiFi to get more detail on why it is not working:
sftp -vvv -oIdentityFile=/etc/nifi-resources/keys/<private_key.pem> -oPort=2222 wftpb086@147.29.151.71
Matt
04-25-2017
09:22 PM
2 Kudos
@Simon Jespersen
If you cannot get this to work outside of NiFi, it is not going to work inside of NiFi either. But looking over your statement above, I see a couple of things...
1. You are trying to use a "ppk" file. This is a PuTTY private key, which is not going to be supported by SFTP. You should be using a private key in PEM format.
2. SSH is very particular about the permissions set on private keys. SSH will reject the key if the permissions are too open. Once you have your PEM key, make a copy of it for your NiFi application and make sure that copy is owned by the user running NiFi. The permissions on the private key must also be 600:
nifi.root 770 (-rwxrwx---) will not be accepted by SSH
nifi.root 600 (-rw-------) will be accepted
You cannot grant groups access to your private key. A sketch of the conversion and permission steps is below. Thanks, Matt
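As a minimal sketch (the key filenames and the nifi user/group are placeholders), converting the PuTTY key and locking down the copy might look like:
# Convert the PuTTY .ppk key to an OpenSSH-format PEM key.
puttygen id_rsa.ppk -O private-openssh -o id_rsa.pem
# Give the copy to the user running NiFi and restrict its permissions.
chown nifi:nifi id_rsa.pem
chmod 600 id_rsa.pem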