Member since: 01-27-2023
Posts: 229
Kudos Received: 73
Solutions: 45
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 631 | 02-23-2024 01:14 AM
 | 798 | 01-26-2024 01:31 AM
 | 566 | 11-22-2023 12:28 AM
 | 1259 | 11-22-2023 12:10 AM
 | 1469 | 11-06-2023 12:44 AM
04-18-2023
12:59 AM
Hi @ushasri, What do you mean when you say that you want to extract one of the columns? Do you want to extract it as an attribute of your FlowFile? If yes, you can use an ExtractText processor, for example, where you add a new property (using the + sign) and define your extraction rule as a regex. As you are using a CSV file, I think the following example suits your use case perfectly: https://community.cloudera.com/t5/Support-Questions/How-to-ExtractText-from-flow-file-using-Nifi-Processor/m-p/190826
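To illustrate the ExtractText idea outside of NiFi, here is a minimal sketch of the kind of capture-group regex you would put in the dynamic property. The CSV line and column layout are hypothetical, not from the original question:

```python
import re

# Hypothetical CSV line as it might appear in a FlowFile (columns: id,name,email)
line = "42,Alice,alice@example.com"

# An ExtractText-style rule: capture group 1 grabs the second column
pattern = r"^[^,]*,([^,]*),"

match = re.search(pattern, line)
if match:
    print(match.group(1))  # the extracted column value
```

In ExtractText, the capture group's value would land in a FlowFile attribute named after the dynamic property you added.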
04-18-2023
12:52 AM
Hi @nisha2112, Are you certain that your schema is correct? I do not have too much experience with the ConfluentSchemaRegistry, but I think you might have altered your schema, either when inserting it into the registry or when exporting it out of the registry. What I recommend you do is:
- Retrieve the schema (AS-IS) and check whether it is correct. If not, you know what to do. If it is correct, proceed to the next point.
- Within your ConvertRecord, modify both your Reader and your Writer to use the Schema Text Property, where you manually define your schema. This will tell you one of two things: 1) the data coming into ConvertRecord is not in a correct format --> ConvertRecord will fail; or 2) your schema gets extracted incorrectly from your ConfluentSchemaRegistry --> the flow will work and you will have no error.
Once you have done the test, you will know where the error is located and you can try debugging it further. For example, you can try to extract your schema from ConfluentSchemaRegistry and see if it gets extracted correctly. Or, if your data is incorrect, you can check whether something changed in your source and then fix that data or your schema. There are plenty of possibilities and you have to start from somewhere 🙂
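A quick way to sanity-check the retrieved schema locally, before pasting it into the Schema Text Property, is to confirm it is still well-formed JSON and that the field list looks right. The record and field names below are made up for illustration:

```python
import json

# Hypothetical Avro schema as retrieved (AS-IS) from the registry
schema_text = '''
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "email", "type": "string"}
  ]
}
'''

# json.loads fails loudly if the schema got mangled on its way out of the registry
schema = json.loads(schema_text)
field_names = [f["name"] for f in schema["fields"]]
print(field_names)
```

If this parses but ConvertRecord still fails with the registry-provided schema, the difference points at the registry round-trip rather than the schema text itself.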
04-14-2023
07:03 AM
I have also tested the same with Apache NiFi 1.15.3 (open-source/community) and it works. It did not work, though, when testing with Apache NiFi 1.13.2 (open-source/community); it still fails with the same error message.
04-12-2023
07:58 AM
Thank you @MattWho, it worked like a charm. You are a life saver 🙂 I did not even consider the nanoseconds and I did not really know about the EL functions for the Java DateTimeFormatter. Nevertheless, if somebody else encounters a similar issue, here is the link to the documentation --> here. One more question, though, if possible. When saving the data into the PostgreSQL database using PutDatabaseRecord (JSON as Reader), the value "2023-04-10 07:43:15.794" gets immediately truncated to "2023-04-10 07:43:15" --> basically everything after the decimal point is removed. In PostgreSQL, the column is defined as "timestamp without time zone" with a precision of 6.
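As a sanity check on the value itself: the fractional part of that timestamp fits comfortably within a timestamp(6) column, so if the milliseconds disappear, the truncation is happening somewhere in the write path (reader schema, driver, or column mapping), not because the value cannot be represented. A small sketch:

```python
from datetime import datetime

# The value as it leaves NiFi, before truncation
value = "2023-04-10 07:43:15.794"

# %f parses fractional seconds (up to 6 digits, i.e. microsecond precision,
# matching PostgreSQL's timestamp(6))
ts = datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f")
print(ts.microsecond)  # 794000 -> the fraction survives parsing

# Truncation to whole seconds reproduces what the database column showed
truncated = ts.replace(microsecond=0)
print(truncated)  # 2023-04-10 07:43:15
```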
04-12-2023
07:52 AM
2 Kudos
@drewski7 The removal of quotes from the "command arguments" is expected behavior in the ExecuteStreamCommand processor. This processor was introduced to NiFi more than 10 years ago and was originally designed for a more minimal scope of work, including the expectation that FlowFile content would be passed to the script/command being executed. As time passed, the use cases being solved via ExecuteStreamCommand expanded; however, handling those use cases would potentially break users' already implemented and working dataflows. So rather than change that default behavior, a new property "Command Arguments Strategy" was added, with the original "Command Arguments Property" as the default (legacy method) and a new "Dynamic property arguments" option. This change is part of this Jira and implemented as of Apache NiFi 1.10: https://issues.apache.org/jira/browse/NIFI-3221
In your use case, you'll want to switch to using the "Dynamic property arguments" strategy. This will then require you to click on the "+" to add a new dynamic property. The property names MUST use this format: command.argument.<num>
So in your case you might try something like:
command.argument.1 = -X POST -H referer:${Referer} -H 'Content-Type: application/json' -d '{"newTopics": [{"name":"testing123","numPartitions":3,"replicationFactor":3}], "allTopicNames":["testing123"]}' --negotiate -u : -b /tmp/cookiejar.txt -c /tmp/cookiejar.txt http://SMM-HOSTNAME:8585/api/v1/admin/topics
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
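To see why the quotes vanish, it helps to look at shell-style tokenization in general (this is an analogy, not NiFi's exact parser): quotes are consumed to group words and are then stripped, whereas a naive space-split would break the quoted value apart instead.

```python
import shlex

# A fragment of the curl arguments from the post above
args = "-H 'Content-Type: application/json' -X POST"

# Shell-style tokenization: quotes group the value, then disappear
print(shlex.split(args))
# ['-H', 'Content-Type: application/json', '-X', 'POST']

# A naive split keeps the quote characters but breaks the value in two
print(args.split(" "))
# ['-H', "'Content-Type:", "application/json'", '-X', 'POST']
```

With the "Dynamic property arguments" strategy, each `command.argument.<num>` value is passed through as one argument, so grouping quotes are no longer needed.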
04-11-2023
08:20 AM
Hi @Jaimin7, I am not quite sure how your SMTP server is configured, but everywhere I have implemented the PutEmail processor, I needed 5 mandatory properties: 1) SMTP Hostname, 2) SMTP Port, 3) SMTP Username (even though in NiFi it is not a mandatory field, it was mandatory for the SMTP server to allow the connection), 4) SMTP Password (likewise not mandatory in NiFi, but required by the SMTP server to allow the connection), 5) From. Having all these fields configured, I was able to send email from NiFi without any restrictions. Of course, I made sure that the firewall between NiFi and the SMTP server allows such connections 🙂
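The same five properties map directly onto a plain SMTP client, which can be handy for testing the server outside of NiFi. All hostnames and credentials below are placeholders; the send itself is left commented out since it needs a real server:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values; these correspond to the PutEmail properties listed above
msg = EmailMessage()
msg["From"] = "nifi@example.com"      # 5) From
msg["To"] = "ops@example.com"
msg["Subject"] = "NiFi alert"
msg.set_content("FlowFile processed.")

# Uncomment with your real SMTP Hostname/Port/Username/Password to send:
# with smtplib.SMTP("smtp.example.com", 587) as s:  # 1) Hostname, 2) Port
#     s.starttls()
#     s.login("user", "password")                   # 3) Username, 4) Password
#     s.send_message(msg)
print(msg["Subject"])
```

If this standalone test fails against your server, the problem is with the SMTP configuration or the firewall, not with PutEmail.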
04-10-2023
08:15 AM
1 Kudo
@udayabaski, as @steven-matison mentioned, the best solution would be to use the Distributed Map Cache. In order to implement it, you can follow these steps to initialize your server: https://stackoverflow.com/questions/44590296/how-does-one-setup-a-distributed-map-cache-for-nifi/44591909#44591909 Now, regarding the way you want to incorporate it in your flow, I would suggest the following:
- Right after UpdateAttribute, you activate the PutDistributedMapCache. Within the processor, you set the desired attribute in the Cache Entry Identifier property.
- Before InvokeHTTP, you add a FetchDistributedMapCache with which you extract the value for your key:value pair. All you have to do next is extract your attribute for further use in your InvokeHTTP.
It is as simple as that and you do not need any fancy configurations 🙂
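Conceptually, the two processors just implement a put-then-fetch over a shared key/value store. A minimal in-memory stand-in (string keys and values assumed, no distribution or expiry) to illustrate the pattern:

```python
# A toy stand-in for the DistributedMapCache, to show the put/fetch pattern only
class MapCache:
    def __init__(self):
        self._store = {}

    def put(self, key, value):           # ~ PutDistributedMapCache
        self._store[key] = value

    def fetch(self, key, default=None):  # ~ FetchDistributedMapCache
        return self._store.get(key, default)

cache = MapCache()
cache.put("token", "abc123")             # right after UpdateAttribute
print(cache.fetch("token"))              # right before InvokeHTTP -> abc123
```

In NiFi the key is whatever you set as the Cache Entry Identifier, and the fetched value lands in a FlowFile attribute for InvokeHTTP to use.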
04-05-2023
12:15 AM
1 Kudo
Hi @saquibsk, OK, understood. What I can tell you is that what you are trying to achieve is not impossible, but it is not easy either. I believe in the power of a community, but at the same time I believe that the scope of the community is to help you with solutions and advice for your problems, not to do the work for you 🙂 I assume that you have already started a flow, so let's start from there: what you developed, why it is not good, and what you are missing from it. From my point of view, there are two options here:
1) You modify all of your processors to write the bulletin level at INFO (or DEBUG) and afterwards, using an InvokeHTTP, you access your Bulletin Board with the REST API and extract your information. This is not highly recommended, as you will generate very large log files. Besides that, your certificates must be generated accordingly, otherwise you will get some errors.
2) At each step in your flow, you write a message with LogMessage, which will save your data into nifi-app.log. In LogMessage you can define exactly what you want to write. Afterwards, you can create a separate flow, using a TailFile processor, and extract everything you want from your nifi-app.log file. Here you will have to extract only the information you require 🙂
Once you have extracted your data, either from your Bulletin Board or from your log file, you can build the SQL statement for inserting the data into the DB.
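For option 2, the extraction step after TailFile boils down to a regex over each log line. A sketch, assuming a made-up log layout and `key=value` payload written by LogMessage (the real format depends on your logback configuration):

```python
import re

# Hypothetical nifi-app.log line as LogMessage might write it
line = ("2023-04-05 12:15:00,123 INFO [Timer-Driven Process Thread-4] "
        "o.a.nifi.processors.standard.LogMessage step=validate status=ok")

# Capture the timestamp, the level, and the custom key=value payload
pattern = r"^(\S+ \S+) (\w+) \[.*?\] \S+ (.*)$"
m = re.match(pattern, line)
if m:
    timestamp, level, payload = m.groups()
    fields = dict(kv.split("=") for kv in payload.split())
    print(level, fields)
```

The parsed fields are then what you would bind into the SQL INSERT at the end of the flow.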
04-04-2023
06:57 AM
Bingo! Thanks so much.
04-03-2023
12:57 AM
Good day, everyone. This problem has been resolved. I made a new subfolder called /opt/nifi_server/ and installed NiFi in it. When I first began, it gave me the error "Unable to bind the IP with port 844." I terminated the PID and launched NiFi. Everything is back to normal now.