02-02-2017
01:01 PM
1 Kudo
@Oliver Fletcher Authentication and authorization are two separate processes within NiFi. There is currently no way for NiFi to pull LDAP groups into its authorizer. While NiFi's file-based local authorizer does support groups, those groups are not mapped to any LDAP groups. With NiFi's latest release, authentication via LDAP supports only two "Identity Strategies":

> Identity Strategy: Strategy to identify users. Possible values are USE_DN and USE_USERNAME. The default functionality if this property is missing is USE_DN in order to retain backward compatibility. USE_DN will use the full DN of the user entry if possible. USE_USERNAME will use the username the user logged in with.

So either the DN returned by LDAP (USE_DN) or the username entered on the login screen (USE_USERNAME) is passed to the authorizer after any configured pattern mapping is applied. There is currently no strategy for passing the user's LDAP groups to the authorizer. NiFi has no support for Ranger groups, as you are already aware. However, you could create a set of groups in NiFi's local file-based authorizer that each provide a distinct set of access policies. You could then use your script idea to run LDAP searches and map user DNs or usernames to those specific NiFi groups. Your scripts could make calls to the nifi-api to automate adding those users to those groups. Thanks, Matt
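If it helps, here is a rough sketch of what such a script could look like against the nifi-api tenants endpoints. This is a minimal sketch only, assuming an unsecured NiFi at http://localhost:8080; the identities and group name are hypothetical examples.

```python
# Minimal sketch: add an LDAP-discovered user to an existing NiFi group
# through the nifi-api tenants endpoints. Assumes an unsecured NiFi on
# localhost:8080; all identities below are hypothetical examples.
import requests

NIFI = "http://localhost:8080/nifi-api"

def create_user(identity):
    """Create a user in NiFi's file-based authorizer and return its entity."""
    body = {"revision": {"version": 0}, "component": {"identity": identity}}
    resp = requests.post(f"{NIFI}/tenants/users", json=body)
    resp.raise_for_status()
    return resp.json()

def add_user_to_group(user_entity, group_name):
    """Look up a group by name and PUT it back with the user appended."""
    groups = requests.get(f"{NIFI}/tenants/user-groups").json()["userGroups"]
    group = next(g for g in groups if g["component"]["identity"] == group_name)
    group["component"]["users"].append({"id": user_entity["id"]})
    resp = requests.put(
        f"{NIFI}/tenants/user-groups/{group['id']}",
        json={"revision": group["revision"], "component": group["component"]},
    )
    resp.raise_for_status()

# Map a DN returned by your LDAP search to a NiFi group:
user = create_user("cn=jdoe,ou=people,dc=example,dc=com")
add_user_to_group(user, "dataflow-managers")
```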
02-01-2017
10:30 PM
@Arun A K Is there a full stack trace that goes along with that error in the nifi-app.log?
02-01-2017
07:32 PM
1 Kudo
@Narasimma varman Make sure the user the NiFi process is running as on your server has the necessary permissions to access that directory path and remove files from it.
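If it helps, a quick way to sanity-check this when run as the same user as the NiFi process (a minimal sketch; the directory path is a hypothetical example):

```python
# Run this as the same user the NiFi process runs as; the path below is
# a hypothetical example. Removing files requires write and execute
# permission on the directory itself.
import os

path = "/data/incoming"  # replace with the directory your processor uses

print("readable:   ", os.access(path, os.R_OK))
print("writable:   ", os.access(path, os.W_OK))
print("traversable:", os.access(path, os.X_OK))
```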
Matt
01-31-2017
07:44 PM
1 Kudo
@bhumi limbu NiFi FlowFile attributes/metadata live in heap. The list-based processors return a complete listing from the target and then create a FlowFile for each file in that returned listing. None of those FlowFiles are committed to the list processor's success relationship until all of them have been created, so with a listing the size of yours the NiFi JVM runs out of heap memory before that can happen. As NiFi stands now, the only option is to use multiple list processors, each producing a listing of a subset of the total files on your source system. You could use the "Remote Path", "Path Filter Regex" and/or "File Filter Regex" properties in ListSFTP to filter what data is selected and help reduce the heap usage.

You could also increase the heap available to your NiFi's JVM in the bootstrap.conf file; however, considering the number of FlowFiles you are listing, I find it likely you would still run out of heap memory. I logged a Jira in Apache NiFi with a suggested change to how these types of processors produce FlowFiles from the returned listing: https://issues.apache.org/jira/browse/NIFI-3423 Thanks, Matt
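For reference, the heap is controlled by the java.arg lines in conf/bootstrap.conf; the sizes below are example values only, not a recommendation:

```
# conf/bootstrap.conf -- JVM memory settings (example sizes)
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```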
01-31-2017
02:33 PM
1 Kudo
@Raj B Not all NiFi processors write attributes to FlowFiles about failures or errors. The documentation for each processor should list which attributes that processor writes and what information those attributes contain. There is no global enforcement by the NiFi controller of which attributes a processor must create; this is completely in the control of the developer who wrote each processor. That being said, it is good practice that any processor with a "failure" relationship output an ERROR-level log message describing the nature of the failure. This error log message would identify the specific processor that produced the ERROR as well as information on the specific FlowFile that was routed to failure and the nature of the failure. It is possible to build a dataflow that monitors NiFi's nifi-app.log (TailFile processor) for ERROR log messages, parses out the relevant information, and passes that along to some monitoring system. Thanks, Matt
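As a rough sketch of the parsing step, here is one way the tailed text could be handled, assuming the default logback pattern (date, level, [thread], logger, message) in conf/logback.xml; the sample line below is made up:

```python
# Parse ERROR lines tailed from nifi-app.log. Assumes the default
# logback layout of "date level [thread] logger message"; adjust the
# regex if your conf/logback.xml uses a different pattern.
import re

LINE = re.compile(
    r"^(?P<ts>\S+ \S+) ERROR \[(?P<thread>[^\]]+)\] (?P<logger>\S+) (?P<msg>.*)$"
)

def parse_error(line):
    m = LINE.match(line)
    return m.groupdict() if m else None

# Hypothetical sample line for illustration:
sample = ("2017-01-31 13:05:12,345 ERROR [Timer-Driven Process Thread-2] "
          "o.a.nifi.processors.standard.PutFile PutFile[id=abcd-1234] "
          "Penalizing FlowFile and routing to failure: disk full")
print(parse_error(sample))
```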
01-31-2017
01:47 PM
@Joshua Adeleke There is obviously something else going on within your system that is affecting leader election. When you start your NiFi, do you see a leader election/cluster coordinator countdown timer running? It looks like your NiFi is having timeout issues talking to your ZooKeeper. I still don't understand why you are running your NiFi as a one-node cluster if all you want is a single standalone instance of NiFi. A NiFi configured as a standalone instance does not need ZooKeeper and does not perform election of a cluster coordinator or primary node.

Setting the following property in your nifi.properties and restarting will make your NiFi a truly standalone instance: nifi.cluster.is.node=false
Matt
01-31-2017
01:26 PM
@Anishkumar Valsalam The intent of an "Admin" account in NiFi is to set up users who can do the following:
- Access the UI
- Set up NiFi controller-level Controller Services and Reporting Tasks
- Add new users and groups
- Set access policies for those users

When it comes to building dataflows on the canvas, that is more of a dataflow manager's role. The "Initial Admin Identity" by default does not even get that role's capabilities/accesses, but the policies it was granted allow it to grant itself or other users the access needed to build dataflows.

In order to enable the dataflow-building icons along the top of the UI, those users will need to be granted the "view the component" and "modify the component" access policies on the specific process group in which they want to build their dataflows. For more information on the various access policies and what capabilities they provide to the assigned users, see the NiFi Admin Guide, available under Help within your installed NiFi's UI (most accurate for whichever version you have installed) or at the following link:
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#multi-tenant-authorization Thanks, Matt
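If you would rather script those grants, the same two policies can be created through the nifi-api. A minimal sketch, assuming an unsecured NiFi on localhost:8080; the process group and user IDs are hypothetical placeholders:

```python
# Grant "view the component" (read) and "modify the component" (write)
# on one process group to one user via the nifi-api policies endpoint.
# The URL and IDs below are hypothetical placeholders.
import requests

NIFI = "http://localhost:8080/nifi-api"
PG_ID = "0158ab12-0163-1000-aaaa-bbbbccccdddd"    # target process group
USER_ID = "11112222-3333-4444-5555-666677778888"  # user being granted access

for action in ("read", "write"):
    body = {
        "revision": {"version": 0},
        "component": {
            "action": action,
            "resource": f"/process-groups/{PG_ID}",
            "users": [{"id": USER_ID}],
        },
    }
    requests.post(f"{NIFI}/policies", json=body).raise_for_status()
```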
01-30-2017
06:06 PM
@Anishkumar Valsalam
Glad to hear you got it set up. The "Access all Policies" access policy will not work if you have not also granted the users the "access users/user groups" access policy; they need to be able to view users in order to grant them access policies. If this answer was helpful in solving your issue, would you please accept it. Thank you, Matt
01-30-2017
05:00 PM
1 Kudo
@Anishkumar Valsalam Hello, During initial setup of a secured NiFi installation, NiFi allows you to specify a single "Initial Admin Identity". Upon first startup, NiFi will use that "Initial Admin Identity" to set up that user and grant them the "Access Policies" needed to administer that NiFi instance/cluster. That identity will then be able to log in, add new users, and grant "Access Policies" to those users. The default "Access Policies" given to the "Initial Admin Identity" include:

Capability | NiFi File Based Policy | Ranger Based Policy |
---|---|---|
view the UI | view the user interface | /flow |
view the controller | access the controller (view) | /controller (read) |
modify the controller | access the controller (modify) | /controller (write) |
view the users/groups | access users/user groups (view) | /tenants (read) |
modify the users/groups | access users/user groups (modify) | /tenants (write) |
view policies | access all policies (view) | /policies (read) |
modify policies | access all policies (modify) | /policies (write) |

Granting these same "Access Policies" to other users you have added will effectively make them an Admin as well.
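For reference, the "Initial Admin Identity" is set on the file-based authorizer in conf/authorizers.xml; a sketch, using a hypothetical DN:

```xml
<!-- conf/authorizers.xml: file-based authorizer (hypothetical DN) -->
<authorizer>
    <identifier>file-provider</identifier>
    <class>org.apache.nifi.authorization.FileAuthorizer</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial Admin Identity">cn=admin,ou=people,dc=example,dc=com</property>
    <property name="Legacy Authorized Users File"></property>
</authorizer>
```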
Thanks, Matt
01-30-2017
01:21 PM
4 Kudos
@Saminathan A The PutSQL processor expects each FlowFile to contain a single SQL statement; it does not support multiple insert statements in one FlowFile as you have tried above. You can route the GetFile processor's success relationship twice, with each connection going to its own ReplaceText processor. One ReplaceText processor is configured to create the "table_a" insert statement and the other the "table_b" insert statement. The success from both ReplaceText processors can then be routed to the same PutSQL processor. Thanks, Matt
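To illustrate, after the split each FlowFile carries exactly one statement; the column names here are hypothetical:

```sql
-- Content of the FlowFile from the first ReplaceText:
INSERT INTO table_a (id, name) VALUES (1, 'foo');
-- Content of the FlowFile from the second ReplaceText:
INSERT INTO table_b (id, name) VALUES (1, 'foo');
```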