Member since
05-01-2017
17
Posts
3
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 582 | 09-25-2017 01:50 PM |
02-06-2020
06:57 PM
Hi @justin_brock, were you able to fix the certificate issue? I have enabled SSL for NiFi in CDF but am facing "ERR_BAD_SSL_CLIENT_AUTH_CERT". Here is a link to my question on the community: https://community.cloudera.com/t5/Support-Questions/Unable-to-open-NIFI-web-UI-after-TLS/m-p/289190#M214098 Could you please share the steps you followed to resolve the issue?
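For anyone else hitting the same browser error: ERR_BAD_SSL_CLIENT_AUTH_CERT usually means the browser is not presenting a client certificate that NiFi trusts. A minimal sketch, assuming your CA has already issued a client key and certificate (the file names below are hypothetical), of bundling them into a PKCS12 file the browser can import:

```shell
# Hypothetical file names -- substitute the key/cert your CA actually issued.
# Bundle the client key and certificate into a PKCS12 file that a browser's
# certificate manager can import for mutual TLS against the NiFi UI.
openssl pkcs12 -export \
  -in admin-cert.pem \
  -inkey admin-key.pem \
  -name "nifi-admin" \
  -out admin.p12 \
  -passout pass:changeit
```

After importing admin.p12 into the browser, restart the browser and reload the NiFi UI so it prompts for the client certificate.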
07-02-2018
05:17 AM
Good afternoon, knowledgeable Hortonworks community.

Apache Solr 6.4 saw the release of the unified highlighter, which according to the Solr documentation is the "most flexible and performant of the options. We recommend that you try this highlighter even though it isn't the default (yet)". (ref) With this in mind, I've attempted to follow the recommendation and use it in a project I'm designing.

The functionality works as expected, but I am unable to find an hl.requireFieldMatch equivalent for this highlighter. As a result, every field in the hl.fl list that has no highlight hit is returned as an empty array (alongside the fields that do highlight). A client can ignore these fields, but ideally they would not be sent at all, since the list can be quite long, especially if a wildcard (*) is used in the hl.fl parameter.

With this in mind, would I be better off continuing with the unified highlighter and ignoring the extra non-highlighted fields, or falling back to the fastVector highlighter? How significant a performance improvement does the unified highlighter offer, and am I better off wearing the additional network overhead to gain it? For reference, the index is a large one (400,000,000+ documents on Solr 7.3.1), so my initial instinct is to keep using the unified highlighter and take as much stress off my Solr cluster as possible.

Any advice or recommendations would be greatly appreciated.

Thanks, DH
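To make the setup concrete, here is a sketch of the kind of request I'm describing, selecting the unified highlighter per request via hl.method (the host, collection, and field names are placeholders, not my actual schema):

```shell
# Placeholder host/collection/fields -- adjust to your own cluster and schema.
# hl.method=unified selects the unified highlighter for this request.
curl "http://localhost:8983/solr/mycollection/select" \
  --data-urlencode "q=body:example" \
  --data-urlencode "hl=true" \
  --data-urlencode "hl.method=unified" \
  --data-urlencode "hl.fl=title,body" \
  --data-urlencode "hl.snippets=2"
```

With this request, fields listed in hl.fl that contain no match still come back as empty arrays in the highlighting section, which is the behavior I'd like to suppress.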
11-07-2017
07:25 PM
1 Kudo
Try stopping NiFi, purging everything within your provenance repository, then starting NiFi. Check the nifi-app.log file for any provenance-related events, and check that the user running the NiFi process has read/write access to the configured directory. I had a similar issue today, except my provenance implementation was set to Volatile, which I changed to WriteAhead. Also note that the default implementation is PersistentProvenanceRepository, and if you have been switching implementations back and forth you will need to delete the provenance data (WriteAhead can read PersistentProvenanceRepository data, but not the other way around).
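To make those steps concrete, here is a sketch of switching to the WriteAhead implementation (the install path /opt/nifi is an assumption; adjust it, and the repository directory, to match your nifi.properties):

```shell
# Assumed install dir /opt/nifi -- adjust to your environment.
/opt/nifi/bin/nifi.sh stop

# Purge old provenance data (required when switching implementations):
rm -rf /opt/nifi/provenance_repository/*

# In conf/nifi.properties, select the write-ahead implementation:
# nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

/opt/nifi/bin/nifi.sh start
```

Then tail logs/nifi-app.log while NiFi starts to confirm the provenance repository initializes cleanly.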
08-23-2017
07:51 PM
D H, You bring up a good point that the EKU restriction should probably be made more evident in the docs. It makes sense -- the certificates are used in both server and client identification roles, so the roles must both be present if either is populated, but many people are unfamiliar with the EKU behavior (or unaware that the certificates serve dual purposes). I hope you are able to resolve your issue quickly.
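For readers who want to verify the EKU on their own certificates, a quick check (the file name below is hypothetical):

```shell
# Print the Extended Key Usage extension of a certificate (path is hypothetical).
# A NiFi node certificate should list both roles, e.g.:
#   TLS Web Server Authentication, TLS Web Client Authentication
openssl x509 -in nifi-node1-cert.pem -noout -text | grep -A1 "Extended Key Usage"
```

If only one of the two roles appears, the certificate will work in one direction (server or client) but fail the other, which is exactly the dual-role problem described above.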
11-16-2018
01:06 PM
Article content updated to reflect new provenance implementation recommendation and change in JVM Garbage Collector recommendation.
05-01-2017
10:33 PM
Thanks Bryan, much appreciated. I wrongly assumed you could add your own properties (in addition to the dynamic ones) on any Processor that allowed you to do so.