Member since: 07-19-2018
Posts: 613
Kudos Received: 100
Solutions: 117
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3988 | 01-11-2021 05:54 AM
| 2787 | 01-11-2021 05:52 AM
| 7624 | 01-08-2021 05:23 AM
| 7019 | 01-04-2021 04:08 AM
| 31629 | 12-18-2020 05:42 AM
07-16-2021 04:59 AM
Understood, had to make sure! Next, for good measure, confirm the flow works with just "name". That will show you whether the issue is with the entire setup or only with "location.city". Then look at the configurations for the Record Reader/Writer and share them here, in case they are not the default configs, etc. I believe the particular error, SQLException: Error while preparing statement, occurs when there is a schema conflict or the flowfile differs from the expected schema.
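For reference, a minimal sketch of what an Avro-style reader schema with a nested location record could look like (the record and field names here are assumptions based on your "location.city" reference):

```json
{
  "type": "record",
  "name": "person",
  "fields": [
    { "name": "name", "type": "string" },
    {
      "name": "location",
      "type": {
        "type": "record",
        "name": "location_record",
        "fields": [
          { "name": "city", "type": ["null", "string"] }
        ]
      }
    }
  ]
}
```

Note that Avro field names cannot contain a literal dot, so a flat field named "location.city" would be invalid on its own; nesting city under location, as above, is the usual shape.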
07-16-2021 04:48 AM
1 Kudo
I understand. First, a NiFi dev recommendation I always suggest: do not route success and failure through the same connection. Keep them separate; you need to know when a flowfile goes to failure. Also, if you are ignoring certain relationships (failure, retry, others), make a habit of routing all of them to an output port so you can see where each flowfile goes. This habit will tell you where a flowfile went after you push play.

Once I am satisfied a flow works and my success flowfiles land at the bottom, I auto-terminate those failure relationships. However, based on your flow, you may want to do something different with a failure, such as log it or send an email.

Next, I think if you do the above and run your flow, you might see the flowfile NOT reach PutCassandraRecord. If it does make it there, update the post with the content of the flowfile and any errors from PutCassandraRecord. We need to see those errors and the content you are delivering to the processor.
07-15-2021 07:57 AM
Your sample query has no "quotes" but the configured one does. Just wanted to make sure you have tried it without the quotes?
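For illustration, with a hypothetical keyspace and table (all names here are placeholders), the two forms behave differently in CQL because unquoted identifiers are folded to lowercase while double-quoted identifiers are case-sensitive:

```sql
-- Works if the columns were created unquoted (stored lowercase):
SELECT name, city FROM mykeyspace.users;

-- Fails against the same table, because "Name" only matches a column
-- that was literally created as "Name":
SELECT "Name", "City" FROM mykeyspace.users;
```

So if the quoted query errors and the unquoted one does not, the quoting itself is the problem.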
07-15-2021 07:53 AM
1 Kudo
I am working a lot with NiFi and Cassandra. Please update your post with the incoming flowfile format, the CSVReader configuration, and any errors you see when you run your flow. These will help me or others provide a more precise reply and hopefully a solution.
01-19-2021 04:30 AM
@singyik Yes, I believe that is the last free public repo. Who knows how long it will remain available. If you are using it, I would recommend fully copying it and using the copy.
01-13-2021 04:42 AM
@dzbeda In a previous lifetime I accomplished collecting Windows log data and Windows metrics using Elastic Beats. There is Winlogbeat, which is great, and even with regular Filebeat you can build a custom listener. This leverages the ELK stack (Elasticsearch, Logstash, Kibana, Beats), but it is an interesting approach: NiFi then connects to the Elasticsearch indexes holding that log data. A minimal Winlogbeat config sketch is at the end of this post.

The other method I have used is MiNiFi, as suggested to @ashinde, but that is a technical challenge with some difficult hurdles to get a data flow working on Windows and wired up to NiFi. If you take this route, I would challenge you to create an article here in the community to share your solution.

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your Use Case please create a separate topic and feel free to tag me in your post. Thanks, Steven
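As promised above, a minimal winlogbeat.yml sketch, assuming a direct-to-Elasticsearch setup; the event log channels and output host are placeholders, not part of the original thread:

```yaml
# Which Windows event log channels to ship; adjust to your needs.
winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security

# Placeholder host -- point this at whatever your ELK stack exposes
# (or ship to Logstash via output.logstash instead).
output.elasticsearch:
  hosts: ["http://elk.example.com:9200"]
```

From there, NiFi can read the resulting indexes with its Elasticsearch processors.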
01-11-2021 05:54 AM
You most likely have the reader misconfigured for your CSV schema.
01-11-2021 05:52 AM
2 Kudos
@Lallagreta You should be able to define the filename, or change the filename to whatever you want. That said, the filename does not dictate the type, so you can have Parquet saved as .txt.

One recommendation I have is to use the parquet-tools command line while testing your use case. This is the best way to validate that the files look right and have the right schema and the right results: https://pypi.org/project/parquet-tools/

I apologize that I do not have exact samples, but from my recollection of a year ago, there are simple commands to check the schema of a file and to show the data it contains. You may have to copy your HDFS file to the local file system to inspect it from the command line; a sketch of both steps follows below.

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your Use Case please create a separate topic and feel free to tag me in your post. Thanks, Steven
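A minimal sketch using the CLI linked above; the file name and HDFS path are placeholders:

```sh
# Install the tool (Python 3 environment assumed)
pip install parquet-tools

# Parquet files usually need to be local, so copy one out of HDFS first
hdfs dfs -get /user/nifi/output/example.parquet .

# Check the schema and metadata of the file
parquet-tools inspect example.parquet

# Print the actual rows to verify the results
parquet-tools show example.parquet
```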
01-08-2021 09:54 AM
1 Kudo
@Lallagreta The solution you are looking for is to leverage the NiFi Parquet processors with the Parquet Record Reader/Writer. Some fun links:

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.11.4/org.apache.nifi.parquet.ParquetRecordSetWriter/index.html
https://community.cloudera.com/t5/Community-Articles/Apache-NiFi-1-10-Support-for-Parquet-RecordReader/ta-p/282390

The Parquet processors are part of NiFi 1.10 and up, but you can also install the NARs into older NiFi versions (see the sketch at the end of this post): https://community.cloudera.com/t5/Support-Questions/Can-I-put-the-NiFi-1-10-Parquet-Record-Reader-in-NiFi-1-9/m-p/286465

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your Use Case please create a separate topic and feel free to tag me in your post. Thanks, Steven
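A sketch of the NAR install on an older NiFi, assuming a standard install layout; the NAR version and $NIFI_HOME are placeholders, and the linked thread above has the details:

```sh
# Drop the newer Parquet NAR into the NiFi library directory
cp nifi-parquet-nar-1.10.0.nar $NIFI_HOME/lib/

# NiFi only loads NARs from lib/ at startup, so restart it
$NIFI_HOME/bin/nifi.sh restart
```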
01-08-2021 05:23 AM
2 Kudos
@murali2425 The solution you are looking for is QueryRecord configured with a CSV Record Reader and Record Writer. You also have UpdateRecord and ConvertRecord, which can use the same Readers/Writers. This method is preferred over splitting the file and adds some nice functionality: it allows you to provide a schema for both the inbound CSV (reader) and the downstream CSV (writer).

Using QueryRecord you should be able to split the file and set a filename attribute from column1; a sketch of the SQL is below. At the end of the flow you should be able to leverage that filename attribute to resave the new file.

You can find specific examples and configuration screenshots here: https://community.cloudera.com/t5/Community-Articles/Running-SQL-on-FlowFiles-using-QueryRecord-Processor-Apache/ta-p/246671

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your Use Case please create a separate topic and feel free to tag me in your post. Thanks, Steven
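As a sketch, assuming a hypothetical CSV column named column1, a QueryRecord dynamic property could look like this (the property's name becomes a new outbound relationship carrying the matching records):

```sql
-- Hypothetical dynamic property on QueryRecord, e.g. named "split_by_column1".
-- FLOWFILE is the table name QueryRecord exposes for the incoming records;
-- 'some_value' is a placeholder for one of your column1 values.
SELECT *
FROM FLOWFILE
WHERE column1 = 'some_value'
```

Each dynamic property you add produces its own relationship, so one QueryRecord can fan a single CSV out into several per-value streams.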