Member since: 11-18-2016
Posts: 38
Kudos Received: 7
Solutions: 0
09-16-2017
01:26 AM
1 Kudo
Try || instead of CONCAT or +; the former is the SQL standard and the latter are not, according to this.
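A minimal sketch of the three forms, assuming a hypothetical employees table with first_name and last_name columns:

SELECT first_name || ' ' || last_name AS full_name FROM employees;  -- standard SQL concatenation
SELECT CONCAT(first_name, ' ', last_name) FROM employees;           -- vendor function, not in the standard
SELECT first_name + ' ' + last_name FROM employees;                 -- SQL Server only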
08-31-2017
12:33 PM
@regie canada The processor can do that; just set the Keep Source File property to false and the file will be deleted.
08-17-2017
03:09 AM
It worked! Thank you so much. Can this QueryRecord do aggregation in a real-time manner? For example, I'll receive data every second and save it to HBase, but I always need to update the data already in HBase.
01-18-2019
05:35 PM
For the first error, try increasing the operating system's buffer sizes: sysctl -w net.core.rmem_max=2097152 and sysctl -w net.core.wmem_max=2097152. See https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html
05-24-2017
12:06 PM
@regie canada The ExtractText processor creates FlowFile attributes from the extracted text. NiFi has an AttributesToJSON processor you can use to generate JSON from these created attributes. For new questions, please open a new question. It makes it easier for community users to search for answers. Thanks, Matt
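For illustration (attribute names are hypothetical): if ExtractText created attributes user.name=alice and user.id=42, AttributesToJSON with its Destination property set to flowfile-content could emit a FlowFile whose content is {"user.name":"alice","user.id":"42"} (attribute values are written as JSON strings).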
03-08-2017
03:02 AM
Thanks, sir @Artem Ervits, I'll try this one.
02-22-2017
02:40 PM
@regie canada You can certainly use Solr and Banana to do Big Data reporting. Some things you should keep in mind:

1. You need to ensure you are giving Solr enough resources in terms of memory and CPU for indexing and querying your data. In test environments this isn't a huge issue, but it is certainly something to take into consideration for production environments.

2. You need to be careful with the dashboards and queries you use with Banana. It runs in a web browser, and all of the data you are manipulating within Banana is loaded into memory. It's relatively easy to create very taxing queries and dashboards that consume a lot of memory and put a strain on Solr. This can also cause a lot of memory usage within the end-user web browser, making it unresponsive.

The above two points lead me to your problem. How much memory have you allocated to the Sandbox? The minimum memory requirement is 8GB; however, 10-12GB will work much better. Are most of the components of HDP turned on in the Sandbox? All of these things take up memory and can cause the Solr JVM to run out of memory.

My recommendation would be:

1. Stop any unused components in the HDP stack. This will free up some system memory.

2. Allocate more memory to the Sandbox. As I said, 10GB works much better than 8; I prefer to give it 12GB.
07-09-2018
10:25 AM
@Pierre Villard: "A common approach is something like GenerateTableFetch on the primary node and QueryDatabaseTable on all nodes. The first processor will generate SQL queries to fetch the data by "page" of specified size, and the second will actually get the data. This way, all nodes of your NiFi cluster can be used to get the data from the database." Will I need to put a (local) Remote Process Group (RPG) after the GenerateTableFetch to get them running in parallel? Any experience on performance when making full RDBMS table dumps using this method vs. Sqoop?
01-17-2017
09:51 AM
@regie canada You can use the Hive EXPORT TABLE command. (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport#LanguageManualImportExport-ExportSyntax)
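A minimal sketch, assuming a hypothetical employee table and HDFS export path:

EXPORT TABLE employee TO '/user/hive/export/employee';    -- writes the data plus table metadata to HDFS
IMPORT TABLE employee FROM '/user/hive/export/employee';  -- recreates the table on the target cluster

Because the export includes the table metadata, the import can recreate the table as-is on another cluster.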
01-05-2017
03:47 AM
Did you upgrade to NiFi 1.1, remove the old SOAP processor, and add the latest edition referenced above? Did you stop NiFi and make sure no Java processes were still running? (You might even want to reboot to clear the JVM.) Then add the new NAR, restart NiFi, and create a new flow with the new SOAP processor. There are a lot of issues with complex SOAP security and encryption. Can you access this SOAP service with SoapUI? With regular Java code? If it is complex, you could write your own NiFi custom processor that wraps your specific Java call.