Member since: 07-30-2019
Posts: 3131
Kudos Received: 1564
Solutions: 909
03-09-2017
02:36 AM
3 Kudos
@Saikrishna Tarapareddy Using the ExecuteScript processor here should work for you. The "Command" property should only contain "kinit". The "Command Arguments" property is where you would add "-k -t /etc/security/keytabs/nifi.keytab nifi/695660.x.com@X.X.COM". Two things to keep in mind:
1. Make sure the user that runs/owns your NiFi process can also resolve and execute the kinit command.
2. Make sure the user that runs/owns your NiFi process has the necessary permissions to navigate down the path to your nifi.keytab and read that file. (The error seems to indicate that your NiFi user can get down that path.)
Thanks, Matt
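If the kinit keeps failing from the processor, it can help to first confirm that the exact same command succeeds when run manually as the account that owns the NiFi process (the account name below is a placeholder; the keytab path and principal are the ones from this thread):

    # become the account that runs NiFi (account name is an assumption)
    sudo su - nifi
    # confirm kinit is on that user's PATH
    which kinit
    # confirm the keytab is reachable and readable by that user
    ls -l /etc/security/keytabs/nifi.keytab
    # run the same command the processor will run
    kinit -k -t /etc/security/keytabs/nifi.keytab nifi/695660.x.com@X.X.COM
    # list the resulting ticket
    klist

If this works interactively but not from NiFi, the difference is usually the user or environment the processor is running under.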
03-08-2017
03:52 PM
@Harshith Venkatesh When performing Site-to-Site (S2S) between two secured NiFi installs, both server authentication and authorization need to succeed. In your case it sounds like authentication was likely successful (you can confirm this by looking in the nifi-user.log of the target NiFi). What appears to be missing is authorization of the source server(s). To resolve the "forbidden" you are seeing on your RPG, go to the target NiFi and add a new user for the source NiFi server(s) running the RPG (click on "Users" to add a new user). The user you are adding needs to be the full DN from the source NiFi's server certificate (case sensitive, and white spaces count as valid characters). You can pull the DN out of the nifi-user.log or by doing a verbose listing of the source NiFi's keystore. After you have added the server as a user, authorize it by clicking on "Policies" and granting the server the "retrieve site-to-site details" access policy.
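For the verbose keystore listing, something along these lines will print the certificate details (keystore path and password are placeholders):

    keytool -list -v -keystore /path/to/keystore.jks -storepass <password>

The "Owner" field of the server certificate entry is the DN to copy, character for character, into the new user on the target NiFi.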
After doing the above, the "forbidden" response on the RPG should go away on the next sync. What you still will not see is a list of available input and output ports on the target NiFi to which your source NiFi can connect over S2S. Remote input and output ports can only be added at the root canvas level. After they have been added, you will need to allow your source NiFi server user to access them as well before they will show up in the RPG. This is done via the "Operate" panel: selecting an input or output port on the canvas shows that component as the selected component in the Operate panel. Select the key icon and grant your source NiFi server the following policy:
For input ports --> the "receive data via site-to-site" access policy
For output ports --> the "send data via site-to-site" access policy
On the next sync, the RPG should show these ports as available to your source NiFi for connecting over S2S. Thanks, Matt
03-08-2017
02:43 PM
@mel mendoza There appears to be some issue with your FTP server. The SYST command is a standard command that is sent when establishing a connection to an FTP server. The expected response is the OS information of the target FTP server. https://en.wikipedia.org/wiki/List_of_FTP_commands The FTP server running on your SunOS 4.1 host is responding that it does not understand that command. Thanks, Matt
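You can reproduce this outside of NiFi with a plain command-line FTP client; its "quote" command sends a raw FTP command to the server (host name below is a placeholder):

    ftp your.ftp.host
    ftp> quote SYST

A server that supports SYST replies with a 215 line describing its operating system; a "command not understood" style reply confirms the problem is on the server side rather than in NiFi.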
03-07-2017
08:00 PM
You need to keep the NiFi copy of the data even after writing a copy of it out via the PutSFTP? If you need to retain a local copy of the data, route the success relationship twice from your PutSFTP processor. So you should be able to do this as follows: the UpdateAttribute processor can be used to update the filename by adding a dynamic property named "filename" with the new filename as its value. The local copy of your files remains unchanged down the success relationship to the left. The copy sent down the path to the right will have its content cleared, its filename changed, and will then be sent via another PutSFTP. Thanks, Matt
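Since the original screenshot is not included, the exact value is an assumption, but the UpdateAttribute property could look like this:

    filename = ${filename}.empty

${filename} is the FlowFile's current filename attribute, so this simply appends a suffix to it; any NiFi Expression Language value can be used instead.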
03-07-2017
05:47 PM
@Anishkumar Valsalam No harm at all having one node service both roles. It is very common to see that. Matt
03-07-2017
05:45 PM
@vikash kumar Is your current flow as follows? GetSFTP ---->(success)----> PutSFTP ---->(success)----> ???
After you have put copies of your FlowFiles' content to your target SFTP server using PutSFTP, do you have any need for the content any longer? If not, you could simply use the ReplaceText processor configured as follows: the "Always Replace" strategy will replace the entire content with the configured "Replacement Value". If that value is blank, you will end up with a 0 byte file for every FlowFile that was successfully written to your SFTP server. Is this what you are looking for? Thanks,
Matt
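As a sketch of that configuration (property names as they appear on the standard ReplaceText processor; the blank value is the point of the exercise):

    Replacement Strategy: Always Replace
    Replacement Value: (empty string)
    Evaluation Mode: Entire text

With that in place, every FlowFile that passes through keeps its attributes (filename, etc.) but its content is reduced to 0 bytes.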
03-07-2017
05:38 PM
1 Kudo
@Sunile Manjee UUIDs are created when a template is added to the canvas. This allows users to instantiate the same template multiple times within a single NiFi. It also ensures no conflict of UUIDs with other existing components already instantiated. There is no way for users to set the UUIDs of any component manually. The only way to maintain UUIDs between NiFi instances is to move the entire flow.xml.gz file from one NiFi to the next rather than using templates. Thanks, Matt
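If you do go the route of moving the whole flow, it is just a matter of copying that file between the two installs' conf directories (paths below are examples only; adjust to your install, and copy while the target NiFi is stopped):

    scp /opt/nifi/conf/flow.xml.gz user@other-nifi-host:/opt/nifi/conf/flow.xml.gz

The target NiFi will load that flow, with the original component UUIDs, on its next start.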
03-07-2017
05:34 PM
1 Kudo
@Sunile Manjee Even if you were to use NiFi's file-based authorizer instead of Ranger, the same limitation exists with maintaining authorizations when moving templates from one NiFi environment to another. Templates were never intended or designed to be the answer to the SDLC, although they represent the closest thing to it for now. Templates are nothing more than a snippet of NiFi components that can be reused within the same NiFi or downloaded and shared with users of other NiFi instances. They cannot be hardcoded to use specific component UUIDs, nor would we want that, because it would hinder their reusability within the same NiFi instance. We also can't include any authorizations with a template, since there is no way of knowing that other NiFi instances into which the template is loaded will contain the same set of users. Nor can we set authorizations based on PG names. What if another PG is created with that same name in another process group? What if a user happens to use a PG name that has policies associated with it? The results could present a security issue. There is ongoing work towards a better SDLC model with NiFi.
That being said, the default behavior when adding a template to the canvas is that all components inherit the policies from the parent process group. So if at the root level you create several process groups, each with a specific set of authorizations, instantiating your templates in a given process group will establish a controlled set of authorizations. Not the ideal solution, but it helps some until future work is done to make the SDLC better. Thanks, Matt
03-07-2017
04:35 PM
Role change does not cause data loss. Every node in a cluster runs the same dataflow and works on its own set of FlowFiles. Processor components added to the canvas and configured to run on the primary node only will run on the currently elected primary node. So when the primary node assignment changes, the "primary node only" processors are stopped on the old primary node and started on the new one. So be mindful of which processors are set to run on the primary node only. While this will not result in data loss, it could result in data being stalled in a dataflow.
03-07-2017
04:31 PM
2 Kudos
@Anishkumar Valsalam All nodes will register with ZK to become the cluster coordinator when the NiFi cluster is first started. Once all nodes have checked in to ZK, or 5 minutes has passed, a random node from those that connected will be picked as the cluster coordinator. ZK will also register one node as your primary node. Once a cluster coordinator has been elected, all nodes will start sending heartbeats directly to that node. The cluster coordinator assumes the role of disconnecting nodes from the cluster that do not send heartbeats and reconnecting nodes that heartbeat later after previously being disconnected. Nodes in a cluster also heartbeat with ZK. If either the primary node or the cluster coordinator fails to heartbeat, another connected node is elected at random to assume that role. There is no ability for users to manually assign either of these roles to a specific node in a cluster. Thanks, Matt
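The initial wait described above maps to a couple of entries in nifi.properties (values shown are the NiFi 1.x defaults; check your own file rather than taking these as given):

    nifi.cluster.flow.election.max.wait.time=5 mins
    nifi.cluster.flow.election.max.candidates=

Once either the wait time expires or the configured number of candidate nodes have checked in, the election completes and the coordinator and primary node roles are assigned.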