Member since: 11-01-2016
Posts: 10
Kudos Received: 3
Solutions: 0
03-14-2018
06:52 PM
Thank you @Chad Woodhead
12-25-2017
04:14 AM
Objective: Connect a source NiFi and a destination NiFi via site-to-site over SSL (https, port 9091).
Issue: NiFi site-to-site SSLHandshakeException, "PKIX path building failed" - missing truststore certs.
Root cause: The NiFi truststores are missing the certificates of the other NiFi.
Steps taken:
1. Run the following command to list the certificates in the current NiFi truststore:
keytool -v -list -keystore <trustStoreLocation>
2. If no certs belonging to the target NiFi are found, proceed with installing the target NiFi cert.
3. Run the following command to fetch the target NiFi public certificate:
echo -n | openssl s_client -connect <targetNiFiHostName>:9091 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/examplecert.crt
4. Run the following command to import the target NiFi certificate into the truststore:
keytool -import -file /tmp/examplecert.crt -alias <targetNiFiCertificate> -keystore <trustStoreLocation> -storepass xxxx -noprompt
5. Verify using openssl or SSLPoke:
java -Djavax.net.debug=ssl -Djavax.net.ssl.trustStore=<trustStoreLocation> -Djavax.net.ssl.trustStorePassword=xxx SSLPoke <targetNiFiHostName> 9091
openssl s_client -connect <targetNiFiHostName>:9091
6. Verify site-to-site connectivity by re-creating the Remote Process Group.
Notes:
1. The NiFi truststore location is generally different from the JDK's default truststore location. Please update the certs in the appropriate location so NiFi can pick them up.
2. The target NiFi needs to grant appropriate permissions to the source NiFi user (the DN is based on the SSL cert).
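As a quick follow-up to step 4, one way to confirm the import took before restarting anything (a minimal sketch; the alias and truststore path are the same placeholders used above):

keytool -list -keystore <trustStoreLocation> -storepass xxxx -alias <targetNiFiCertificate>
# Prints the certificate entry if the alias is present; exits non-zero if it is missing.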
10-03-2017
04:17 PM
@Mohamed Hossam Unfortunately, I don't see any errors/issues in this log file; it just shows the NiFi startup process. If it goes down again, feel free to tail the logs to look for an OOM (out of memory, caused by your custom processors) or AWS being temporarily unavailable.
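For example, something along these lines (a sketch; the log path is a placeholder for wherever your NiFi logs directory lives):

tail -f /path/to/nifi/logs/nifi-app.log | grep -iE 'outofmemory|error'
# Follows the app log and surfaces OOM and error lines as they appear.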
09-28-2017
07:52 PM
@Mohamed Hossam Hmm, could we see what's happening in the logs folder (especially nifi-app.log)?
09-28-2017
12:29 PM
1 Kudo
Background: When installing HDF only, there used to be a default Grafana dashboard for NiFi with the general NiFi metrics available. When installing HDF on top of an existing HDP cluster (with the latest versions of the stacks), there is no default dashboard for NiFi even though there are dashboards for Storm and Kafka. This should be fixed in the next version.
Problem summary: The NiFi dashboard is not visible in Grafana after installing HDF over HDP.
Root cause: The dashboard JSON file is missing from the Grafana dashboards directory.
Resolution: 2 options:
1. Import the JSON files using the Grafana dashboard UI (HDP), or
2. Copy the JSON file as below:
cp /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDF/grafana-nifi-ho* /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/
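To sanity-check option 2, you can confirm the NiFi dashboard JSON actually landed in the HDP directory (same path as in the cp command above):

ls /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/ | grep nifi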
09-28-2017
11:27 AM
@Mohamed Hossam Assumptions: macOS or Linux.
1. It can be as simple as being in the wrong working directory. Run pwd and cd to the appropriate folder (the one that contains bin/nifi.sh).
2. Add the NiFi bin folder to your PATH environment variable.
3. Run NiFi in the background using nohup or &.
See the sketch below. Feel free to let us know in case of any mistakes in my understanding, and close this ticket if there are no concerns.
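Putting those three steps together, a minimal sketch (the install path /opt/nifi is an assumption; substitute your own):

cd /opt/nifi                        # 1. work from the folder that contains bin/nifi.sh
export PATH="$PATH:/opt/nifi/bin"   # 2. make nifi.sh callable from anywhere
nohup ./bin/nifi.sh start &         # 3. start NiFi in the background so it survives logout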
09-28-2017
11:11 AM
Background: When installing HDF only, there used to be a default Grafana dashboard for NiFi with the general NiFi metrics available. When installing HDF on top of an existing HDP cluster (with the latest versions of the stacks), there is no default dashboard for NiFi even though there are dashboards for Storm and Kafka. This should be fixed in the next version.
Problem summary: The NiFi dashboard is not visible in Grafana after installing HDF over HDP.
Root cause: The dashboard JSON file is missing from the Grafana dashboards directory.
Resolution: Copy the JSON file as below:
cp /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDF/grafana-nifi-ho* /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/
06-24-2017
12:45 AM
1 Kudo
Objective: Access the REST API of a Kerberized NiFi cluster using a Bearer token.
Prerequisites:
1. curl or Postman installed on your laptop.
2. kinit successful for the sales1 user.
3. Firefox browser with the proper config (network.negotiate). In Firefox, open a new tab, type about:config, and filter by "network":
network.negotiate-auth.trusted-uris -> .us-west-2.compute.internal
network.negotiate-auth.delegation-uris -> .us-west-2.compute.internal
4. SPNEGO to NiFi succeeds when you hit the NiFi home page URL (as sales1).
Next steps/plan: On the NiFi home page, enable Developer Tools and monitor the network logs to get the Bearer token (look at the current-user request, under the Authorization part of the Request Headers). Let's use the Bearer token we got to populate the commands below.
curl option:
curl 'https://nifihost:9091/nifi-api/flow/status' -H 'Authorization: Bearer <Token>'
For example:
curl 'https://ip-172-30-0-72.us-west-2.compute.internal:9091/nifi-api/flow/status' -H 'Authorization: Bearer eyJhb...' --compressed --insecure
Postman option:
GET https://ip-172-30-0-72.us-west-2.compute.internal:9091/nifi-api/flow/status
Authorization: No Auth
Headers: Key = Authorization, Value = Bearer eyJhb...
Notes:
1. Another way to get the token is:
curl 'https://nifi-host:port/nifi-api/access/token' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' --data 'username=ldap-username&password=ldap-password' --compressed --insecure
2. The Bearer token presented in the REST API call will be checked against the access policies assigned to that user. Just remember that everything you do via NiFi's UI is nothing more than calls to nifi-api.
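If you want to script this end to end, the token call from note 1 and the status call can be chained; a minimal sketch assuming an LDAP-backed login provider, with placeholder host and credentials:

TOKEN=$(curl -s --insecure 'https://nifi-host:9091/nifi-api/access/token' \
  -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
  --data 'username=ldap-username&password=ldap-password')
# The token endpoint returns the raw JWT as the response body.
curl -s --insecure 'https://nifi-host:9091/nifi-api/flow/status' \
  -H "Authorization: Bearer $TOKEN"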
04-02-2017
05:33 PM
Related article - check Joe Witt's comments:
https://community.hortonworks.com/questions/77336/nifi-best-practices-for-error-handling.html
03-29-2017
07:12 PM
2 Kudos
Background: In NiFi, the PutSQL processor is unable to perform batch inserts even though a batch size has been configured on the processor. There is a commit for each insert because the content of each FlowFile is a unique SQL statement.
Let's take an example workflow: a CSV file with 50 lines (name, jobtitle) becomes 50 flow files. For each line, the PutSQL processor (batch size 10) is invoked. Because each flow file is unique in its content, 50 commits are performed.
Line 1 -> FlowFile #1: Insert into Employee (name, job title) VALUES ('Bryan B','Director')
Line 2 -> FlowFile #2: Insert into Employee (name, job title) VALUES ('Joe W','CTO')
The PutSQL processor batches FlowFiles by matching SQL statements. To match SQL statements with unique inserts, you must configure the dataflow so that the SQL statements use "?" to escape parameters. In this case, the parameters must exist as FlowFile attributes with the naming convention sql.args.N.type and sql.args.N.value, where N is a positive integer. The sql.args.N.type is expected to be a number indicating the JDBC type.
Objective: If we invoke the PutSQL processor with the same parameterized insert statement for every FlowFile, it will apply the batch size parameter correctly, finally resulting in 5 commits (instead of 50).
3 steps (see the configuration sketch below):
1. Use the ExtractText processor to create the sql.args.N.value FlowFile attributes:
sql.args.1.value = Bryan B
sql.args.2.value = Director
2. Use the UpdateAttribute processor to set the sql.args.N.type FlowFile attributes:
sql.args.1.type = 12 (VARCHAR)
sql.args.2.type = 12
3. Finally, use the ReplaceText processor to build identical insert statements:
Insert into Employee ("name", "job title") VALUES (?,?)
Final outcome: By following the above, every single FlowFile will have the exact same insert statement and will be properly batched based on the batch size property of the processor; all FlowFiles in a batch are inserted and followed by a single commit.
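What the three processors might look like, property by property; a minimal configuration sketch assuming each FlowFile holds one CSV line of the form name,jobtitle (the regexes and batch size are illustrative, not from the original post; in ExtractText, the first capture group of each dynamic property's regex becomes the attribute value):

ExtractText (dynamic properties)
  sql.args.1.value  ->  ^([^,]+),.*$
  sql.args.2.value  ->  ^[^,]+,(.+)$
UpdateAttribute
  sql.args.1.type   ->  12    (java.sql.Types.VARCHAR)
  sql.args.2.type   ->  12
ReplaceText (Replacement Strategy: Always Replace)
  Replacement Value ->  Insert into Employee ("name", "job title") VALUES (?,?)
PutSQL
  Batch Size        ->  10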