Member since: 01-07-2019
Posts: 220
Kudos Received: 23
Solutions: 30
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5004 | 08-19-2021 05:45 AM |
 | 1808 | 08-04-2021 05:59 AM |
 | 872 | 07-22-2021 08:09 AM |
 | 3670 | 07-22-2021 08:01 AM |
 | 3380 | 07-22-2021 07:32 AM |
01-31-2021
01:50 PM
Unfortunately most sources are in Dutch, but for good measure I will explain the most important data points:
1. Total weekly new COVID infections come from the RIVM (the official Dutch body): https://www.rivm.nl/coronavirus-covid-19/archief-corona-updates
2. For the week of 26 Jan, an article by the RIVM mentions that over one third of the current new infections are of the British variant: https://www.rivm.nl/nieuws/Britse-variant-wint-terrein-in-Nederland
3. The same article mentions the rate was 8.6% in the period of 4-10 Jan.
These are the most important points and already show the trend. However, here are the additional sources:
4. In 'early December' the rate was about 1%, according to this article on the country's largest news site: https://www.nu.nl/coronavirus/6101869/wat-weten-we-nu-van-de-britse-coronavariant-ja-die-is-echt-besmettelijker.html
5. In a national press conference on 12 Jan, the minister of health indicated that in the preceding period the rate was between 2-5%. More frequent updates confirmed steady growth in this period, so for the period between 9 and 29 December there was likely growth from about 2% to about 5%. This could be off by one or two percentage points.
6. Though there was no clear source, several news sites recently referred to a current rate of 20%; this was likely observed between 13 and 19 Jan.
11-01-2020
03:06 AM
The ExecuteStreamCommand solution should work, as confirmed on an external website. To quote the most relevant part: "ExecuteStreamCommand solved my problem. Create a small script and execute it with parameters from NiFi:"

#!/bin/bash
# Log in to the FTP host with the supplied credentials and delete the given file.
HOST=$1
USER=$2
PASSWORD=$3
ftp -inv $HOST <<EOF
user $USER $PASSWORD
cd /sources
delete $4
bye
EOF
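As a rough sketch of how the processor itself might then be configured (the script path and the attribute names below are assumptions for illustration, not from the quoted answer), ExecuteStreamCommand could pass the four arguments using its default ";" delimiter:

Command Path:       /path/to/ftp_delete.sh
Command Arguments:  ${ftp.host};${ftp.user};${ftp.password};${ftp.filename}
Argument Delimiter: ;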
08-14-2020
06:11 AM
@Gubbi Since you comment that NiFi starts fine when you remove the flow.xml.gz, this points at an issue with loading/reading the flow.xml.gz file. I would suggest opening your flow.xml.gz in an XML editor and making sure it is valid. When NiFi starts it loads the flow.xml into heap memory. From then on, all changes made within the UI are made in memory and a flow.xml.gz is written to disk. I would be looking for XML special characters (http://xml.silmaril.ie/specials.html) that were not escaped in the XML that was written out to the flow.xml.gz. Manually removing or correcting the invalid parts of the flow.xml.gz may work. You may get lucky going back and trying multiple archived copies of flow.xml.gz files; perhaps one still exists from before the config change that introduced the issue? Note: Be aware that starting NiFi without the flow.xml.gz will result in loss of all queued FlowFiles. Hope this helps, Matt
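A minimal sketch of such a validity check, assuming xmllint is available and the file sits in the default conf directory (adjust the path to your installation):

# Decompress the flow and let xmllint report any well-formedness errors (no output means it parsed cleanly).
gunzip -c /opt/nifi/conf/flow.xml.gz | xmllint --noout -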
08-13-2020
08:11 AM
@JonnyL I would highly recommend that you take a step back and create a small 3-node NiFi cluster to test this feature. Putting 2 NiFi instances on a single node does not satisfy the test cases you really want to be experimenting with. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your use case please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
08-13-2020
06:51 AM
@stevenmatison Thanks. I used QueryRecord, it helped to get the count.
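For anyone landing here later: QueryRecord exposes the flowfile content as a table named FLOWFILE, so a count can be obtained by adding a dynamic property (the property name and output field name below are just examples) along the lines of:

count = SELECT COUNT(*) AS record_count FROM FLOWFILE

paired with a record reader and writer that match the data format.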
08-13-2020
04:53 AM
@ang_coder Depending on the number of unique values you need to add, UpdateAttribute + expression language will allow you to create flowfile attributes based on the table results in a manner I would call "manual". These can be used in routing or in further manipulating the content (the original database rows) according to your match logic. For example, with ReplaceText you can replace the original value with the original value + the new value. Additionally, during your flow you can programmatically change the content of the flowfile to add the new column using the attribute from above, or with a fabricated query. In the latter case you would use a RecordReader/RecordWriter/UpdateRecord on your data; in a nutshell you create a translation on the content that includes adding the new field. This is a common use case for NiFi and there are many different ways to achieve it. To get a more complete reply that better matches your use case, you should provide more information: sample input data, the expected output data, your flow, a template of your flow, and maybe what you have tried already. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your use case please create a separate topic and feel free to tag me in your post. Thanks, Steven @ DFHZ
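A rough sketch of the record-based variant described above (the field name, attribute name and reader/writer choices are assumptions for illustration): configure UpdateRecord with a record reader and writer matching your data, set Replacement Value Strategy to Literal Value, and add a dynamic property whose name is the record path of the new field and whose value pulls in the attribute:

/new_column = ${lookup.value}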
08-13-2020
04:44 AM
This message is labeled NiFi, so I assume you have NiFi available? In that case, look at finding the right processor for the job; something like ExecuteSQL may be a good starting point. If your question is purely about how to make Python and MariaDB interact, this may not be the best place to ask it.
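If the NiFi route is taken, ExecuteSQL needs a Database Connection Pooling Service; a rough sketch of a DBCPConnectionPool configuration for MariaDB (host, database, driver location and credentials below are placeholders):

Database Connection URL:     jdbc:mariadb://dbhost:3306/mydatabase
Database Driver Class Name:  org.mariadb.jdbc.Driver
Database Driver Location(s): /opt/nifi/drivers/mariadb-java-client.jar
Database User:               myuser
Password:                    ********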
08-13-2020
04:23 AM
Only a partial answer, but in general I do not think REGEX_REPLACE cuts large strings. It will be hard to figure this out in more detail unless you can share a reproducible example. Here is what I tested just now (sketched below):
1. Create a table that contains a string of 60000+ characters (lorem ipsum)
2. Create a new table by selecting the regex replace of that string (I replaced every a with b)
3. Count the length of the field in the new table
As said, it may well be that you are using a very specific string or regex that together create this problem; it would be interesting to see if this could be reduced to a minimal example. Also keep in mind that though they are very similar, there are many ways a regex itself can be parsed; perhaps the test you did is simply slightly different from the implementation in Hive.
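A sketch of the three steps above, assuming a beeline connection and throwaway table names (connection string and table names are placeholders):

# Build a ~60000-character string, regex-replace it into a new table, then measure the result length.
beeline -u jdbc:hive2://hiveserver:10000/default \
  -e "CREATE TABLE t_src AS SELECT repeat('lorem ipsum ', 5000) AS big_str" \
  -e "CREATE TABLE t_dst AS SELECT regexp_replace(big_str, 'a', 'b') AS big_str FROM t_src" \
  -e "SELECT length(big_str) FROM t_dst"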
08-13-2020
01:14 AM
After checking the PutHDFS processor I did not find the destination timestamp to be a configuration option. However, when checking the configuration options of hadoop fs -put, it does show a timestamp option. This suggests there may be a way to achieve what you are looking for. My recommendation would be to log a Jira on the Apache NiFi project, and to reach out to your account team to see if they can push the priority.
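For reference, the command-line option alluded to is presumably the -p flag of hadoop fs -put, which preserves access and modification times (along with ownership and permissions); a quick example with placeholder paths:

# -p preserves timestamps, ownership and permissions of the local file when copying into HDFS.
hadoop fs -put -p /local/data/file.csv /landing/zone/file.csv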
08-12-2020
02:13 PM
Though I don't know too much about SharePoint, it seems they have an API that allows for HTTP GET requests. Look carefully at the hidden section on this page: https://docs.microsoft.com/en-us/sharepoint/dev/sp-add-ins/get-to-know-the-sharepoint-rest-service?tabs=http Based on the above you could use either the GetHTTP processor or the InvokeHTTP processor with a GET method. If this is the wrong way to access SharePoint, my general advice:
1. Figure out how SharePoint allows any program (whether it is NiFi, Python, ...) to extract data.
2. From there it should be comparatively easy to figure out which processor you need to do this.
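As a rough sketch only (the site URL and endpoint are placeholders in the spirit of the linked docs, and SharePoint authentication is not covered here), InvokeHTTP could be pointed at one of the REST endpoints like this:

HTTP Method: GET
Remote URL:  https://yoursite.sharepoint.com/_api/web/lists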