Member since: 07-30-2019
Posts: 3397
Kudos Received: 1619
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 428 | 11-05-2025 11:01 AM |
| | 333 | 11-05-2025 08:01 AM |
| | 468 | 11-04-2025 10:16 AM |
| | 686 | 10-20-2025 06:29 AM |
| | 826 | 10-10-2025 08:03 AM |
12-09-2016
02:10 AM
@pholien feng Before a user can access the UI, that user must have the "view the interface" policy granted to them. This policy is added through the global policies UI found under the hamburger menu in the upper-right corner. I see that step is missing in the above answer. Sorry about that. Matt
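For reference, when that policy is granted NiFi persists it to authorizations.xml. A minimal sketch of what the resulting entry looks like is below; the identifiers and the user reference are placeholders, since NiFi generates and manages the real values itself, so there is no need to hand-edit this file:

```xml
<!-- Sketch of the authorizations.xml entry that backs the "view the interface"
     global policy. NiFi writes this file when the policy is granted in the UI;
     the identifiers and user entry below are placeholders, not real values. -->
<authorizations>
    <policies>
        <!-- resource="/flow" with action="R" is the read policy that grants
             access to view the NiFi UI -->
        <policy identifier="example-policy-id" resource="/flow" action="R">
            <user identifier="example-user-id"/>
        </policy>
    </policies>
</authorizations>
```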
12-08-2016
07:05 PM
1 Kudo
@Michael Young HDF NiFi at its core is designed to be very lightweight; however, how powerful a host/node HDF NiFi needs to be deployed on really depends on the complexity of the implemented dataflow and the throughput and data volumes that dataflow will be handling. HDF NiFi may be deployed at the edge, but those edge deployments usually come alongside a centralized cluster deployment that runs a much more complex dataflow handling data coming from the edge NiFis as well as from many other application sources. Thanks, Matt
12-08-2016
01:26 PM
1 Kudo
@Avijeet Dash Every node in a NiFi cluster runs with its own repositories and its own flow.xml.gz, and works on its own set of data. Nodes in a cluster are unaware of what data other nodes in the cluster are working on. Once a cluster coordinator is elected, all nodes send heartbeats to that node. Nodes cannot share repositories.
When you access the UI via any node in the cluster, the UI shows the cumulative stats of the entire cluster. This is where the centralized management aspect comes into play: any change you make within NiFi (no matter which node's UI you are logged into) will be replicated to all nodes in the cluster. Thanks, Matt
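As a rough illustration of how little the nodes actually share: aside from the replicated flow itself, the only common store is ZooKeeper, which each node is pointed at through the cluster-provider in state-management.xml. A minimal sketch, assuming a three-node external ZooKeeper ensemble (the hostnames are placeholders):

```xml
<!-- Sketch of the cluster-provider section of state-management.xml (NiFi 1.x).
     Each node keeps its own repositories and flow.xml.gz locally; cluster-wide
     component state is kept in ZooKeeper, which the cluster also relies on for
     coordinator election. ZooKeeper hostnames below are placeholders. -->
<stateManagement>
    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String">zk1:2181,zk2:2181,zk3:2181</property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">10 seconds</property>
        <property name="Access Control">Open</property>
    </cluster-provider>
</stateManagement>
```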
12-08-2016
01:16 PM
@pholien feng I need more detail on what you are seeing. There are two parts to accessing a secured NiFi installation: authentication and authorization. Authentication by default expects users to authenticate using SSL; a user would need to present a valid certificate via their browser to NiFi for authentication. NiFi can also be configured via the login-identity-providers.xml file to support either LDAP or Kerberos for user authentication. After a user successfully authenticates, the authorization piece occurs. The above answer deals with the authorization piece only.

Check your nifi-user.log to see if authentication is successful, and make sure the DN shown in the nifi-user.log matches exactly (watch for case sensitivity and whitespace issues) what is configured in the "Initial Admin Identity" property in your authorizers.xml file.

When NiFi is started for the first time after enabling HTTPS, the users.xml and authorizations.xml files are generated based on the user-supplied configurations in the authorizers.xml file. Should the configurations in authorizers.xml get edited at a later time, those changes will not be made to the existing users.xml or authorizations.xml files; they are only ever created once, and subsequent edits to these files are expected to be done via the NiFi application. If you made a mistake in these files when setting up HTTPS access for the first time, you can remove these two files and they will be re-created the next time you start NiFi. Thanks, Matt
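For anyone following along, a minimal sketch of the file-based authorizer section of authorizers.xml in HDF 2.x / NiFi 1.x is shown below; the DN values are only placeholders and must be replaced with the exact identities from your environment:

```xml
<!-- Sketch of the file-based authorizer in authorizers.xml (NiFi 1.x / HDF 2.x).
     The DNs below are placeholders; the Initial Admin Identity must match the
     DN printed in nifi-user.log exactly, including case and whitespace. -->
<authorizers>
    <authorizer>
        <identifier>file-provider</identifier>
        <class>org.apache.nifi.authorization.FileAuthorizer</class>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
        <property name="Legacy Authorized Users File"></property>
        <!-- In a cluster, each node's certificate DN is listed as well -->
        <property name="Node Identity 1">CN=node1.example.com, OU=NIFI</property>
    </authorizer>
</authorizers>
```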
12-06-2016
02:55 PM
6 Kudos
@kumar The default FlowFile attributes include:
entryDate
lineageStartDate
fileSize
filename
path
uuid
The above FlowFile attribute key names are case sensitive. Thanks, Matt
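As an aside, these attributes can be referenced from any processor property that supports the NiFi Expression Language, for example ${filename} or ${path}/${filename}; because the keys are case sensitive, something like ${fileName} will not resolve.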
11-30-2016
06:02 PM
@Simon Engelbert You don't need ListFile to use the FetchFile processor; however, FetchFile still needs an incoming FlowFile to trigger it.
11-30-2016
03:34 PM
The RPG (Remote Process Group) can be used to redistribute the data ingested on a single node using the primary node strategy mentioned here across every node in your NiFi cluster. This is a great way to distribute the workload while ensuring each node is working on a unique set of FlowFiles.
11-30-2016
03:26 PM
2 Kudos
@Sean Murphy Each node in a NiFi cluster runs its own threads within its own processors, working on its own set of FlowFiles. Nodes in a NiFi cluster have no knowledge of what FlowFiles are being worked on by other nodes. If you are seeing multiple copies of the same output, that suggests each node in your cluster is processing the same files. I am not sure how your dataflow is designed to ingest the data it works on, but ideally you want to design it in such a way as to prevent each node from ingesting the same data/files. Thanks, Matt
11-28-2016
11:40 PM
1 Kudo
@Mothilal marimuthu The documentation for installing the latest HDF release can be found here: http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.1/index.html Thanks, Matt
11-28-2016
07:41 PM
1 Kudo
@Mothilal marimuthu Those processors were not introduced until Apache NiFi 1.0 / HDF 2.0. Your screenshot shows you are running NiFi 0.3 / HDF 1.1.