Member since: 07-19-2018
Posts: 613
Kudos Received: 101
Solutions: 117
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5686 | 01-11-2021 05:54 AM |
| | 3811 | 01-11-2021 05:52 AM |
| | 9485 | 01-08-2021 05:23 AM |
| | 9283 | 01-04-2021 04:08 AM |
| | 38599 | 12-18-2020 05:42 AM |
09-25-2020
12:17 PM
Hi Steven, I used @bingo 's solution to get NiFi to find my JAVA_HOME. But you mention that NiFi does not need this to run. Do you know what the impact is of running NiFi without it knowing where Java is installed?
09-24-2020
08:34 AM
1 Kudo
I solved my problem. In my case, the name of one of the columns started with an underscore ("_"), which caused two single quotes to be added automatically to the path of the HDFS directory where the copy of the file was stored. I renamed the column to remove the underscore, and now I can import the table into the Hive database. I think special characters like that are not easily parsed by Hive or HDFS.
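The fix above amounts to sanitizing identifiers before the import. A minimal Python sketch of that idea (the function name and rules are my own illustration, not part of the original fix):

```python
import re

def sanitize_identifier(name: str) -> str:
    """Drop leading underscores (the character that broke the HDFS
    path quoting above) and replace any other special characters
    with a plain underscore."""
    name = name.lstrip("_")
    return re.sub(r"[^0-9A-Za-z_]", "_", name)

print(sanitize_identifier("_order_id"))      # -> order_id
print(sanitize_identifier("total$amount"))   # -> total_amount
```

Running column names through a pass like this before a Sqoop/Hive import avoids having to rename columns after the job fails.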
09-24-2020
06:34 AM
1 Kudo
@MKS_AWS There are a few ways to break up JSON within a flowfile (SplitJson, QueryRecord). However, if it's just one giant blob of JSON, you may not find those very useful. Perhaps you can share some sample JSON to that effect. Check out this library for sending SNS payloads of up to 2 GB: https://github.com/awslabs/amazon-sns-java-extended-client-lib If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post. Thanks, Steven
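To make the SplitJson idea concrete, here is a rough Python sketch of what splitting one large JSON array into smaller payloads looks like (this is an illustration of the concept, not NiFi's implementation; it assumes the blob is a top-level JSON array):

```python
import json

def split_json_array(blob: str, batch_size: int = 100):
    """Split one large JSON array, as SplitJson would, into
    smaller JSON payloads of at most batch_size elements each."""
    records = json.loads(blob)  # assumes a top-level array
    for i in range(0, len(records), batch_size):
        yield json.dumps(records[i:i + batch_size])

payload = json.dumps([{"id": n} for n in range(250)])
chunks = list(split_json_array(payload))
print(len(chunks))  # -> 3 (batches of 100, 100, 50)
```

Each resulting chunk stays under a size you control, which is the same goal whether you publish to SNS directly or via the extended client library above.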
09-24-2020
05:56 AM
@ravi_sh_DS This gets a bit high level, so forgive me, as I am not sure how you know which ID to change and what to change it to. That said, your approach could be to use QueryRecord to find the match you want, then update that match with UpdateRecord. You could also split the JSON image array with SplitJson, then use UpdateRecord as suggested above. With either method, depending on your use case, when you split the records and process the splits separately you may need to rejoin them downstream. Some older processors useful here are SplitJson, EvaluateJsonPath, UpdateAttribute, and AttributesToJSON, but the QueryRecord/UpdateRecord approach is now preferred, as it lets you do things more dynamically.
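As a plain-Python illustration of what the QueryRecord + UpdateRecord combination does (find the element whose ID matches, then update a field on it), a minimal sketch with hypothetical field names `id` and `image`:

```python
import json

def update_matching(records_json: str, target_id: str, new_image: str) -> str:
    """Find array elements whose 'id' matches target_id and
    update their 'image' field, leaving the others untouched."""
    records = json.loads(records_json)
    for rec in records:
        if rec.get("id") == target_id:
            rec["image"] = new_image
    return json.dumps(records)

doc = json.dumps([{"id": "a1", "image": "old.png"},
                  {"id": "b2", "image": "keep.png"}])
print(update_matching(doc, "a1", "new.png"))
```

In NiFi the matching would be a SQL predicate in QueryRecord and the assignment a record path in UpdateRecord, but the data flow is the same.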
09-15-2020
09:51 AM
1 Kudo
Actually, both replies can be considered valid. I accepted the one that better fits my use case.
09-14-2020
01:18 PM
I suspect you have not completed a step or are missing something. The cacerts approach works for me in all cases where the cert is publicly trusted (a standard public cert from a public CA), which it should be. You should share info on the configurations you tried and any errors you got from them. The bare minimum settings you need are the keystore (file location), password, key type (JKS), and TLS version. Assuming you copied your Java cacerts file to all nodes as /nifi/ssl/cacerts, point the controller service properties at that file.

If cacerts doesn't work, then you must create keystores and/or truststores with the public cert. Use the openssl command to get the cert (note that s_client expects host:port, not a URL):

```
openssl s_client -connect secure.domain.com:443
```

You can also get it from the browser when you visit the ELK interface, for example cluster health or indexes: click the lock icon in the browser, then use the browser's interface to view/download the public certificate. You need the .cer or .crt file. Then you use the cert to create the keystore with keytool. An example:

```
keytool -import -trustcacerts -alias ambari -file cert.cer -keystore keystore.jks
```

Once you have created a keystore/truststore file, you need to copy it to all NiFi nodes, ensure the correct ownership, and make sure all the details are correct in the SSL Context Service. Lastly, you may need to adjust the TLS version until testing works. Here is a working example of getting the cert and using it with keytool from a recent use case:

```
echo -n | openssl s_client -connect secure.domain.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > publiccert.crt
keytool -import -file publiccert.crt -alias astra -keystore keyStore.jks -storepass password -noprompt
keytool -import -file publiccert.crt -alias astra -keystore trustStore.jks -storepass password -noprompt
mkdir -p /etc/nifi/ssl/
cp *.jks /etc/nifi/ssl
chown -R nifi:nifi /etc/nifi/ssl/
```
09-13-2020
09:08 AM
Thank you for the post, but another question. According to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html the last stage says that if NameNode HA is enabled, we need to run the following command on one of the NameNodes: hdfs zkfc -formatZK -force Since our cluster has an active NameNode and a standby NameNode, we assume NameNode HA is enabled. We want to understand the risks of running this command on one of the NameNodes. Is it safe to run without risks?
09-11-2020
12:24 PM
@Gubbi use this: ListFile -> FetchFile -> ConvertRecord
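For intuition about what that ConvertRecord step does at the end of the flow, here is a rough local analogue in Python (my own illustration, not NiFi code): a CSV reader paired with a JSON writer, which is a common ConvertRecord configuration.

```python
import csv
import io
import json

def convert_record(csv_text: str) -> str:
    """Local analogue of ConvertRecord with a CSVReader and
    JsonRecordSetWriter: CSV text in, JSON array out."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

print(convert_record("name,qty\napple,3\npear,5"))
# -> [{"name": "apple", "qty": "3"}, {"name": "pear", "qty": "5"}]
```

In the actual flow, ListFile finds the files, FetchFile pulls their contents into flowfiles, and ConvertRecord performs this reader-to-writer translation per flowfile.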
09-10-2020
06:39 AM
I think the most straightforward option would be to drop the infer-schema jar into your version of NiFi. The procedure is not that hard; you just have to be surgically careful. The process is explained a bit here, in reference to adding the Parquet jars from a newer version into an older one. Be sure to read all the comments: https://community.cloudera.com/t5/Support-Questions/Can-I-put-the-NiFi-1-10-Parquet-Record-Reader-in-NiFi-1-9/td-p/286465
09-08-2020
12:26 AM
1 Kudo
"It sounds like your testing solution is exceeding the inbound capabilities of the flow tuning (NiFi config, processor/queue config)" Correct assessment. It showed that the pipeline was not properly sized for the amount of data, which led to back-pressure in the ingest component.