Member since: 07-19-2018
Posts: 613
Kudos Received: 101
Solutions: 117

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4901 | 01-11-2021 05:54 AM
 | 3337 | 01-11-2021 05:52 AM
 | 8643 | 01-08-2021 05:23 AM
 | 8158 | 01-04-2021 04:08 AM
 | 36037 | 12-18-2020 05:42 AM
09-24-2020
05:56 AM
@ravi_sh_DS This gets a bit high level, so forgive me, as I am not sure how you know which ID to change or what to change it to. That said, one approach is to use QueryRecord to find the match you want, then update that match with UpdateRecord. You could also split the JSON image array with SplitJson, then use UpdateRecord as suggested above. In either method, depending on your use case, if you split the records and process the splits separately, you may need to rejoin them downstream. Some older processors useful here are SplitJson, EvaluateJsonPath, UpdateAttribute, and AttributesToJSON, but the record-based processors (QueryRecord/UpdateRecord) are now preferred because they let you work more dynamically.
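For illustration only, assuming the JSON array lives at a record path like /images and the field being changed is url (both hypothetical, since the actual schema is not given), an UpdateRecord configuration could look roughly like this:

```
UpdateRecord
  Record Reader              : JsonTreeReader
  Record Writer              : JsonRecordSetWriter
  Replacement Value Strategy : Literal Value
  /images[*]/url  (dynamic property) : https://example.com/new-image.png
```

The dynamic property name is a RecordPath selecting the field to update; with QueryRecord in front, only the matched records reach this processor.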
09-23-2020
04:46 PM
@alex15 I suspect the issue is that the nifi user is not able to execute the script. Make sure the user ownership of the file is correct, and also confirm read/write/execute permissions. On Unix/Linux the relevant commands are chown and chmod. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven
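As a minimal sketch of the check described above (the script path is hypothetical; on a real node you would also `chown nifi:nifi` the file so the service user owns it):

```shell
# Hypothetical script path -- adjust to wherever your processor points.
SCRIPT=/tmp/demo_script.sh
printf '#!/bin/sh\necho ok\n' > "$SCRIPT"

# Grant read/execute permissions so the NiFi user can run it:
chmod 755 "$SCRIPT"

# Verify the file is actually executable before NiFi tries to run it:
test -x "$SCRIPT" && "$SCRIPT"
```

If `test -x` fails here, NiFi will fail the same way when it invokes the script.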
09-16-2020
09:38 PM
@stevenmatison Can you explain how to add a driver to a sqoop command, please?
09-15-2020
09:51 AM
1 Kudo
Actually, both replies can be considered valid. I accepted the one that better fits my use case.
09-15-2020
05:53 AM
@lukolas The example you provide seems to be regex, not Expression Language. You would need to test some kind of Expression Language in that Topic Name(s) property. Another suggestion would be to have a file, or for example GenerateFlowFile, which contains a list of topics. Then split/extract that list into attributes, and send each attribute to the topic name. Having a ton of topics going through a single processor can become a bottleneck, creating tons of downstream flowfiles from each topic, so be careful with tuning and concurrency in reference to the number of topics and messages per topic. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
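A rough stand-in for the split/extract idea above, outside NiFi (the file path and topic names are made up; in the flow, GenerateFlowFile or FetchFile would supply the list and each line would become an attribute fed to the publisher's topic name):

```shell
# Hypothetical topic list; one topic per line.
printf 'topic-a\ntopic-b\ntopic-c\n' > /tmp/topics.txt

# Emulate the split step: each line becomes one "topic name" value.
while IFS= read -r topic; do
  echo "would publish to: $topic"
done < /tmp/topics.txt
```

Each iteration corresponds to one split flowfile carrying one topic name attribute.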
09-14-2020
01:18 PM
I suspect you have not completed a step or are missing something. The cacerts approach works for me in all cases where the cert is publicly trusted (a standard public cert from a public CA), which yours should be. You should share info on the configurations you tried and any errors you got from them. The bare minimum settings you need are the keystore (file location), password, key type (JKS), and TLS version. Assuming you copied your Java cacerts file to all nodes as /nifi/ssl/cacerts, the controller service properties should reflect that path.

If cacerts does not work, then you must create keystores and/or truststores with the public cert. Use the openssl command to get the cert:

openssl s_client -connect secure.domain.com:443

You can also get it from the browser when you visit the ELK interface (for example, cluster health or indexes): click the lock icon in the browser, then use the browser's interface to view/download the public certificate. You need the .cer or .crt file. Then you use the cert to create the keystore with keytool commands. An example is:

keytool -import -trustcacerts -alias ambari -file cert.cer -keystore keystore.jks

Once you have created a keystore/truststore file, you need to copy it to all NiFi nodes, ensure the correct ownership, and make sure all the details are correct in the SSL Context Service. Lastly, you may need to adjust the TLS version until testing works. Here is a working example of getting the cert and using it with keytool from a recent use case:

echo -n | openssl s_client -connect secure.domain.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > publiccert.crt
keytool -import -file publiccert.crt -alias astra -keystore keyStore.jks -storepass password -noprompt
keytool -import -file publiccert.crt -alias astra -keystore trustStore.jks -storepass password -noprompt
mkdir -p /etc/nifi/ssl/
cp *.jks /etc/nifi/ssl
chown -R nifi:nifi /etc/nifi/ssl/
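Before running the keytool import, it is worth confirming that the downloaded file really is a parseable certificate. A minimal sketch, using a throwaway self-signed cert as a stand-in for the real public cert (all paths and the CN here are illustrative, not from the actual cluster):

```shell
# Generate a throwaway self-signed cert to stand in for the downloaded one.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=secure.domain.com" \
  -keyout /tmp/demo.key -out /tmp/publiccert.crt 2>/dev/null

# Sanity-check the file: a cert that fails this parse will also fail
# the keytool -import step.
openssl x509 -in /tmp/publiccert.crt -noout -subject
```

If the subject prints correctly, the .crt file is safe to feed to keytool.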
09-13-2020
09:08 AM
Thank you for the post, but another question. According to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html, the last stage says that in case NameNode HA is enabled, you need to run the following command on one of the NameNodes:

hdfs zkfc -formatZK -force

Since we have an active NameNode and a standby NameNode, we assume NameNode HA is enabled on our cluster. We want to understand the risks of running this command on one of the NameNodes. Is the command safe to run without risks?
09-11-2020
12:24 PM
@Gubbi use this flow: ListFile -> FetchFile -> ConvertRecord
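As a rough stand-in for what ConvertRecord does with a CSV reader and a JSON writer (the input file and column names are made up; in the flow, ListFile/FetchFile would supply the file):

```shell
# Hypothetical input file.
printf 'id,name\n1,alpha\n2,beta\n' > /tmp/input.csv

# Convert each CSV row to a JSON object, using the header row as keys:
awk -F, 'NR==1 {for (i=1;i<=NF;i++) h[i]=$i; next}
         {printf "{"; for (i=1;i<=NF;i++) printf "\"%s\":\"%s\"%s", h[i], $i, (i<NF ? "," : ""); print "}"}' \
  /tmp/input.csv > /tmp/output.json
cat /tmp/output.json
```

In NiFi itself, the equivalent is a CSVReader (with "Treat First Line as Header") paired with a JsonRecordSetWriter on the ConvertRecord processor.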
09-11-2020
04:45 AM
@Francesco_Fa If you do not want to use the embedded SQLite database, the options available are MySQL, PostgreSQL, and Oracle: https://docs.gethue.com/administrator/administration/database/ If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
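For example, pointing Hue at an external MySQL database is done in the [[database]] section of hue.ini; a minimal sketch (host, user, password, and database name below are placeholders, not real values):

```
[desktop]
  [[database]]
    engine=mysql
    host=db-host.example.com
    port=3306
    user=hue
    password=secret
    name=hue
```

After changing the engine, the Hue migration commands from the linked database guide need to be run to create the schema.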
09-10-2020
06:39 AM
I think the most straightforward approach would be to drop the infer-schema component into your version of NiFi. The procedure is not that hard; you just have to be surgically careful. The process is explained a bit here, in reference to adding the Parquet NARs from a newer version into an older version. Be sure to read all the comments: https://community.cloudera.com/t5/Support-Questions/Can-I-put-the-NiFi-1-10-Parquet-Record-Reader-in-NiFi-1-9/td-p/286465
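The mechanics of the copy are simple; the care is in matching versions and restarting cleanly. A sketch with illustrative paths (the real target is $NIFI_HOME/lib on every node, owned by the nifi user, and the NAR filename depends on the NiFi release you take it from):

```shell
# Stand-in lib directory; on a real node this is $NIFI_HOME/lib.
NIFI_LIB=/tmp/nifi/lib
mkdir -p "$NIFI_LIB"

# Stand-in for the NAR copied out of the other NiFi version:
touch /tmp/nifi-demo-nar-1.10.0.nar
cp /tmp/nifi-demo-nar-1.10.0.nar "$NIFI_LIB"/

# On a real node: chown nifi:nifi the file, then restart NiFi --
# NARs are only loaded at startup.
ls "$NIFI_LIB"
```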