Member since: 07-30-2019
Posts: 920
Kudos Received: 195
Solutions: 91
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1035 | 10-05-2021 01:53 PM |
| | 14445 | 09-23-2019 06:03 AM |
| | 5595 | 05-04-2019 08:42 PM |
| | 1139 | 06-11-2018 12:45 PM |
| | 10421 | 06-04-2018 01:11 PM |
05-03-2017
12:42 PM
1 Kudo
@Ravi Teja The flowfile isn't going out the FAILED connection because the session is rolled back to the incoming queue; I have highlighted the part of the log with the rollback information. This means the flowfile is penalized and then put back on the incoming queue for the PutSQL processor. If you want to write out the file that is causing the rollback with a PutFile processor, there are ways to pull that flowfile out of the flow. You can use a RouteOnAttribute processor with the uuid of the flowfile as a property value to match the flowfile, then route it out to the PutFile processor (see the sketch below). For provenance events, there is a slight delay in the writing of events based on the configuration of NiFi. The property that controls how quickly provenance events are available in the UI is nifi.provenance.repository.rollover.time, the amount of time to wait before rolling over the latest data provenance information so that it is available in the User Interface; the default value is 30 secs. To see events for a particular processor, right-click on the processor, select Data provenance from the menu that pops up, and another window will open displaying the provenance events for just that processor.
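For illustration, a minimal sketch of that RouteOnAttribute configuration; the uuid value below is a placeholder for the uuid of the flowfile you want to capture:

Routing Strategy : Route to Property name
matched (dynamic property) : ${uuid:equals('11111111-2222-3333-4444-555555555555')}

The "matched" relationship can then be connected to the PutFile processor, while "unmatched" continues along the normal path.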
05-30-2018
08:06 AM
Hi @Sherif Eldeeb, I got it by following the example at https://community.hortonworks.com/questions/118237/how-to-use-apache-nifi-evaluatejsonpath-for-json-t.html. Thanks @Matt Burgess
03-08-2017
02:16 PM
Good to hear.
05-05-2017
10:00 PM
@Pradhuman Gupta You cannot set up logging for a specific processor instance, but you can set up a new logger for a specific processor class. First you would create a new appender in the nifi logback.xml file: <appender name="PROCESSOR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-processor.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-processor_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{120} %msg%n</pattern>
<immediateFlush>true</immediateFlush>
</encoder>
</appender>
Then you create a new logger that will write to that appender log file: <logger name="org.apache.nifi.processors.attributes.UpdateAttribute" level="WARN" additivity="false">
<appender-ref ref="PROCESSOR_FILE"/>
</logger> In the above example I am creating a logger for the UpdateAttribute processor. Now any WARN or ERROR log messages produced by this specific processor class will be written to the new log. You can expand upon this by configuring loggers for each processor class you want to monitor and sending them to the same appender (see the sketch below). Then use a SplitText processor to split the content of the FlowFile produced by the TailFile, and a RouteOnContent processor to route the specific log lines produced by each processor class to a different PutEmail processor, or simply create a different message body attribute for each. Thanks, Matt
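For instance, a sketch of a second logger that points another processor class at the same appender; PutSQL is just an illustrative choice of class here:

<logger name="org.apache.nifi.processors.standard.PutSQL" level="WARN" additivity="false">
    <appender-ref ref="PROCESSOR_FILE"/>
</logger>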
02-16-2017
05:11 PM
1 Kudo
@Anshuman Ghosh The Search Value regex above has 4 capture groups, one for each octet of a valid IP address. Each capture group can then be referenced in the Replacement Value as $1, $2, $3, and/or $4. In the example above, the replacement for each found valid IP is still the first two numbers followed by ".x.x" (see the sketch below). You can of course change the replacement value to whatever meets your specific needs. Thanks, Matt
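The original screenshot of the ReplaceText configuration is gone, so as an illustration, a configuration along the lines described; the exact regex is an assumption:

Replacement Strategy : Regex Replace
Search Value         : (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})
Replacement Value    : $1.$2.x.x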
02-02-2017
04:34 PM
10 Kudos
Assumptions

The following assumptions are made: the openldap clients and server are already installed; the basic setup of the ldap server has been completed; and the users "nifi admin", "nifi user1", and "nifi user2" are in the ldap database.

LDAPS System Configuration

The example below is being configured on the system nifi-sme-20. The CA certificate being used, aka the truststore, is called all-trusted.jks, and the server certificate, aka the keystore, is called nifi-sme-20.cert.pfx. There is also the encrypted private key for the server, nifi-sme-20.key.enc.pem, which is needed for the configuration of the ldaps service.

List the current certificates in the database (the default location is /etc/openldap/certs) using the following command:

certutil -d /etc/openldap/certs/ -L

If your CA is in pem format, it can be imported into the NSS database directly. If you have a CA that is in jks format, it first must be converted before it can be imported. Converting it can be done in two steps:

keytool -importkeystore -srckeystore all-trusted.jks -destkeystore all-trusted.p12 -deststoretype PKCS12
openssl pkcs12 -in all-trusted.p12 -out all-trusted.pem

Now the truststore can be imported into the database:

certutil -d /etc/openldap/certs/ -A -n "CAcert" -t CT,, -a -i /opt/configuration-resources/certs/all-trusted.pem

This command adds the CA certificate stored in the PEM (ASCII) formatted file named /opt/configuration-resources/certs/all-trusted.pem; the -t CT,, means that the certificate is trusted to be a CA issuing certs for use in TLS clients and servers. To verify the CA has been imported, list the database again with the certutil command from above.

Now import the server certificate into the database:

certutil -d /etc/openldap/certs/ -A -n "nifi-sme-20" -t u,u,u -a -i /opt/configuration-resources/certs/nifi-sme-20-cert.cer

This command adds the server certificate; the -t u,u,u means the certificate can be used for authentication or signing. Listing the contents of the database again should now show both certificates.

Next, update the slapd service to use the CA and server certificates. This is done by updating the /etc/openldap/slapd.d/cn=config.ldif file. The file cannot be edited manually; you have to update it using the ldapmodify command. One way to do this is to create your own file.ldif with the needed updates and then pass that file as a parameter to the ldapmodify command. For this article, I created a file called tls-enable.ldif, which sets three directives:

TLSCertificateFile - specifies the file that contains the slapd server certificate
TLSCACertificateFile - specifies the PEM-format file containing certificates for the CA's that slapd will trust
TLSCertificateKeyFile - specifies the file that contains the private key that matches the certificate stored in the TLSCertificateFile file

Note: To use the private key, we need to decrypt it; this can be done with the following command:

openssl rsa -in nifi-sme-20.key.enc.pem -out nifi-sme-20.key.pem
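The original copy of tls-enable.ldif was attached as an image that is no longer available, so as a reconstruction, here is a minimal sketch of what such a file might look like for the cn=config entry. Treat the attribute values as assumptions; in particular, whether TLSCertificateFile takes a file path or an NSS certificate nickname depends on how slapd was built (OpenSSL versus Mozilla NSS):

dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /opt/configuration-resources/certs/all-trusted.pem
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /opt/configuration-resources/certs/nifi-sme-20-cert.cer
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/nifi-sme-20.key.pem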
The command used to update cn=config.ldif is:

ldapmodify -Y EXTERNAL -H ldapi:/// -f tls-enable.ldif

Now restart the slapd service:

systemctl restart slapd

Now verify that you are able to connect to the slapd service by running the following command:

openssl s_client -connect nifi-sme-20:636 -debug -state -CAfile /opt/configuration-resources/certs/all-trusted.pem

If the command works, TLS handshake output is displayed and you are left in an interactive session, which you can exit with control-c. In addition, if you check the status of the slapd service (systemctl status slapd), you will also see the connection made by the above command.

LDAPS NiFi Configuration

Now that you have successfully configured the slapd service, there are a few steps to set up NiFi to use LDAPS. First, configure NiFi to perform user authentication over HTTPS; the web and security sections of the nifi.properties file need to be completed. Again, for this example the configuration is being done on the system nifi-sme-20. Make sure to set the web section to use the https host and port, and fill in the security section with the keystore and truststore. In this example I use the same CA certificate in nifi.properties as for the ldaps service, but that isn't a requirement for it to work with NiFi; however, the CA used in the configuration of login-identity-providers.xml does have to be the same as the one used in the configuration of the ldaps service. Notice also that nifi.security.user.login.identity.provider is set to ldap-provider.
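The original screenshots of nifi.properties are not preserved; as a sketch, the relevant entries look something like the following, where the port, paths, and passwords are placeholder assumptions:

# web properties
nifi.web.https.host=nifi-sme-20
nifi.web.https.port=9091

# security properties
nifi.security.keystore=/opt/configuration-resources/certs/nifi-sme-20.cert.pfx
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=changeit
nifi.security.keyPasswd=changeit
nifi.security.truststore=/opt/configuration-resources/certs/all-trusted.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit
nifi.security.user.login.identity.provider=ldap-provider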
Now edit the login-identity-providers.xml file and add the keystore, truststore, and all of the other TLS properties. Once you set the authentication strategy to LDAPS, all of the other properties are required to have some value; inside the file is a short explanation of each property and its possible values (see the sketch below). If this is the first time securing the NiFi instance, the last step is to set the initial admin identity in the authorizers.xml file. Now restart/start NiFi; when you go to the NiFi UI in the browser you will be presented with a login screen. And there you go, you have successfully configured NiFi to use LDAPS.
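As with the other screenshots, the original file content is gone; a minimal sketch of the ldap-provider entry follows, where the Manager DN, search base, paths, and passwords are placeholder assumptions:

<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">LDAPS</property>
    <property name="Manager DN">cn=Manager,dc=example,dc=com</property>
    <property name="Manager Password">changeit</property>
    <property name="TLS - Keystore">/opt/configuration-resources/certs/nifi-sme-20.cert.pfx</property>
    <property name="TLS - Keystore Password">changeit</property>
    <property name="TLS - Keystore Type">PKCS12</property>
    <property name="TLS - Truststore">/opt/configuration-resources/certs/all-trusted.jks</property>
    <property name="TLS - Truststore Password">changeit</property>
    <property name="TLS - Truststore Type">JKS</property>
    <property name="TLS - Client Auth">NONE</property>
    <property name="TLS - Protocol">TLS</property>
    <property name="TLS - Shutdown Gracefully">true</property>
    <property name="Referral Strategy">FOLLOW</property>
    <property name="Connect Timeout">10 secs</property>
    <property name="Read Timeout">10 secs</property>
    <property name="Url">ldaps://nifi-sme-20:636</property>
    <property name="User Search Base">ou=people,dc=example,dc=com</property>
    <property name="User Search Filter">uid={0}</property>
    <property name="Identity Strategy">USE_DN</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>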
07-19-2017
07:54 PM
Thanks @Wynner!
04-18-2017
03:49 PM
1 Kudo
@Ed Prout One way around the issue is to monitor the success relationship out of the GetSplunk processor using the MonitorActivity processor. If data does not pass through in a set time period, the MonitorActivity processor generates an "inactive" flow file, and this can be used as a trigger for an ExecuteScript processor which runs a curl script to restart the processor. Not an elegant solution, but it should work.
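For example, a sketch of the curl calls such a script might make, assuming an unsecured NiFi 1.x instance on localhost:8080; the processor id and revision version are placeholders that must be looked up first:

# Fetch the processor entity to get its current revision version
curl -s http://localhost:8080/nifi-api/processors/<processor-id>

# Stop the processor; the revision version must match the value returned above
# (repeat with "RUNNING" to start it again)
curl -X PUT -H 'Content-Type: application/json' \
  -d '{"revision":{"version":1},"component":{"id":"<processor-id>","state":"STOPPED"}}' \
  http://localhost:8080/nifi-api/processors/<processor-id>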
07-19-2016
06:13 PM
2 Kudos
@Brad Surdick GetFile does not stream the file as it is being written, so if you do not configure the GetFile processor correctly, it will pull the incomplete file multiple times. To prevent this from happening, configure the GetFile property Minimum File Age to a value, say 30 seconds. This is the minimum age a file must be in order to be pulled; any file younger than this amount of time (according to its last modification date) will be ignored.
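As an illustration, the relevant GetFile property setting; 30 sec is just an example value:

Minimum File Age : 30 sec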