Member since: 06-08-2017
Posts: 1049
Kudos Received: 518
Solutions: 312

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11985 | 04-15-2020 05:01 PM |
| | 7950 | 10-15-2019 08:12 PM |
| | 3593 | 10-12-2019 08:29 PM |
| | 12972 | 09-21-2019 10:04 AM |
| | 4840 | 09-19-2019 07:11 AM |
10-13-2022
12:28 PM
You can use EvaluateJsonPath. You only have to add an extra attribute, name it as you want (e.g. count), and set its value to "$.my.array.size.length()". Then set Destination to flowfile-attribute. The processor will emit a flowfile that carries your extra attribute, for example:
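A minimal sketch of that configuration (the flowfile content and the attribute name below are hypothetical):

```
Input flowfile content:
  { "my": { "array": { "size": [10, 20, 30] } } }

EvaluateJsonPath properties:
  Destination = flowfile-attribute
  count       = $.my.array.size.length()

Attribute on the outgoing flowfile:
  count = 3
```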
10-15-2021
06:07 AM
I followed your steps but am getting an error on the ConvertRecord processor: "'/partition_dt' is invalid because 'partition_dt' is not an associated property or has no validator associated with it." How can I resolve this?
02-12-2021
12:10 AM
This solution will not work. The failure flow does not go to the next PutDatabaseRecord processor because the error is an exception, so failure flowfiles go nowhere.
09-30-2020
12:47 PM
Hi @calonsca! Please have a look at this spec as well:

```json
[
  {
    "operation": "shift",
    "spec": {
      "@": "data",
      "ID": "&",
      "#${date}": "date",
      "#${dataset:toLower()}": "dataset"
    }
  }
]
```
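To illustrate (a hedged example; the input record and the values that ${date} and ${dataset:toLower()} resolve to are hypothetical): the "@" match copies the whole input under "data", "ID": "&" keeps ID at the top level, and the "#..." keys write the substituted literals:

```
Input:
{ "ID": "42", "value": "abc" }

Output, assuming ${date} = 2020-09-30 and ${dataset:toLower()} = sales:
{
  "data": { "ID": "42", "value": "abc" },
  "ID": "42",
  "date": "2020-09-30",
  "dataset": "sales"
}
```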
05-29-2020
08:11 AM
@Shu_ashu this approach has a problem: clear-state works only on a stopped processor. I am using the ScrollElasticsearch processor and its state needs to be cleared before it can be executed again. I tried curl -i -X POST http://localhost:8080/nifi-api/processors/0172101b-be82-11aa-1249-d1383cb1ceba/state/clear-requests but it ends up with a conflict status; I must stop the processor in order to clear its state. Do I really have to stop the processor, manually or via the API? That does not seem like good design to me. Could you help or give any advice, please? Thank you. Petr
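For reference, a hedged sketch of the stop → clear → start sequence over the NiFi REST API (the processor id is the one from the post above; the revision versions are placeholders and must match what a GET on the processor currently returns):

```
PID=0172101b-be82-11aa-1249-d1383cb1ceba
BASE=http://localhost:8080/nifi-api/processors/$PID

# stop the processor (NiFi rejects clear-state on a running component)
curl -X PUT -H 'Content-Type: application/json' \
     -d '{"revision":{"version":1},"state":"STOPPED"}' "$BASE/run-status"

# clear its stored state
curl -X POST "$BASE/state/clear-requests"

# start it again
curl -X PUT -H 'Content-Type: application/json' \
     -d '{"revision":{"version":2},"state":"RUNNING"}' "$BASE/run-status"
```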
04-19-2020
10:41 PM
1 Kudo
I have written a blog post on this. Kindly refer to it to set up a DBCPConnectionPoolLookup controller service and execute the same query against multiple databases. Please follow this link; it is a step-by-step example of the setup: https://bigdata-galaxy.blogspot.com/2020/04/nifi-querying-multiple-databases-using.html
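In outline (a sketch with hypothetical pool names; the blog has the full walkthrough), the lookup service maps user-defined keys to concrete DBCPConnectionPool services and selects one per flowfile via the database.name attribute:

```
DBCPConnectionPoolLookup (controller service), user-defined properties:
  mysql-prod    -> DBCPConnectionPool for the MySQL instance
  postgres-prod -> DBCPConnectionPool for the Postgres instance

Tag each flowfile with the pool to use, e.g. via UpdateAttribute:
  database.name = mysql-prod

ExecuteSQL / PutDatabaseRecord then reference the lookup service as their
Database Connection Pooling Service; each flowfile runs against the pool
named by its database.name attribute.
```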
04-15-2020
05:01 PM
Hi @ChineduLB , You can use `groupBy` with `concat_ws(",", collect_list(...))` to build the department list, and the `row_number` window function to generate `ID`:

```scala
// assumes spark-shell, where spark.implicits._ is already in scope for toDF
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val df = Seq(("1","User1","Admin"), ("2","User1","Accounts"), ("3","User2","Finance"),
             ("4","User3","Sales"), ("5","User3","Finance")).toDF("ID","USER","DEPT")

// window used by row_number to assign a new sequential ID per grouped row
val w = Window.orderBy("USER")

df.groupBy("USER").
  agg(concat_ws(",", collect_list("DEPT")).alias("DEPARTMENT")).
  withColumn("ID", row_number().over(w)).
  select("ID", "USER", "DEPARTMENT").
  show()
```
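The output of that snippet should look roughly like this (the order of departments inside each collect_list is not guaranteed):

```
+---+-----+--------------+
| ID| USER|    DEPARTMENT|
+---+-----+--------------+
|  1|User1|Admin,Accounts|
|  2|User2|       Finance|
|  3|User3| Sales,Finance|
+---+-----+--------------+
```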
10-16-2019
05:07 AM
@Shu_ashu Thank you for the solution. The issue is resolved and everything is working as expected.
10-14-2019
01:13 AM
@Shu_ashu Great! With the instructions you gave me, the output file is created correctly. I also tried this option: CREATE TABLE scenariox_out AS SELECT count(*) FROM scenariox; and the output file was created in: /user/hive/warehouse/scenariox.db/scenariox/scenariox_out/000000_0 Thank you, and have a good day.