Member since: 06-08-2017
Posts: 1049
Kudos Received: 518
Solutions: 312
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11118 | 04-15-2020 05:01 PM |
| | 7019 | 10-15-2019 08:12 PM |
| | 3061 | 10-12-2019 08:29 PM |
| | 11244 | 09-21-2019 10:04 AM |
| | 4189 | 09-19-2019 07:11 AM |
04-05-2019
01:57 AM
@Nera Majer I tried this on my local instance and everything works as expected.
- If you have the .gz file in the local FS, try ListFile + FetchFile against your local FS (instead of HDFS) and check whether you can fetch the whole file without any issues.
- Move the local file to HDFS using the command below, then check whether the file size in HDFS is 371 KB: hadoop fs -put <local_File_path> <hdfs_path>
- If yes, run ListHDFS + FetchHDFS processors to fetch the newly moved file from the HDFS directory.
- Some threads related to similar issues: https://community.hortonworks.com/questions/106925/error-when-sending-data-to-api-from-nifi.html https://issues.apache.org/jira/browse/NIFI-5879
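Before moving the file into HDFS, it can help to confirm the local .gz archive is actually complete. A minimal sketch (the path is a placeholder; this only checks gzip integrity, it is not NiFi code):

```python
import gzip
import os

def check_gzip(path):
    """Return (size_bytes, ok); ok is True only if the gzip stream decompresses fully."""
    size = os.path.getsize(path)
    try:
        with gzip.open(path, "rb") as f:
            # Read through the whole stream; a truncated archive raises EOFError.
            while f.read(1024 * 1024):
                pass
        return size, True
    except (OSError, EOFError):
        return size, False
```

If ok comes back False, the archive is truncated, and FetchFile/FetchHDFS would only ever deliver a partial file.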
01-16-2019
12:06 PM
If possible, please share a screenshot of the processor configuration.
01-05-2019
11:28 PM
1 Kudo
@PP Include an over() clause in your select query. Try the query below:
select row_number() over(), * from testDB.testTable;
Example:
select row_number() over() as rn, * from (select stack(2, 1, "foo", 2, "bar") as (id, name)) t;
+-----+-------+---------+--+
| rn | t.id | t.name |
+-----+-------+---------+--+
| 1 | 1 | foo |
| 2 | 2 | bar |
+-----+-------+---------+--+
"rn" is the row number column that we added in the result above.
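The same windowed numbering can be reproduced outside Hive for a quick check. A minimal sketch using Python's sqlite3 with a throwaway in-memory table (assumes a bundled SQLite 3.25+, which is when window functions were added; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testTable (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO testTable VALUES (?, ?)", [(1, "foo"), (2, "bar")])

# row_number() over() assigns a sequential number to each row;
# without an ORDER BY inside over(), the numbering order is arbitrary.
rows = conn.execute(
    "SELECT row_number() OVER () AS rn, id, name FROM testTable"
).fetchall()
for rn, id_, name in rows:
    print(rn, id_, name)
```

Add an ORDER BY inside over() if the numbering needs to follow a specific column rather than scan order.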
12-13-2018
12:51 AM
@Julio Gazeta Weird, I'm able to get the state value if I keep "store state locally". Regarding the GetMongo processor, the flowfile-attributes issue was resolved in NiFi 1.8; NIFI-5334 addresses it. As a workaround to get the required attribute, refer to this link.
12-12-2018
01:34 AM
Thank you very much for the explanation regarding the workaround to delete the row.
12-01-2018
08:38 PM
@Hemanth Vakacharla I think for this case we need to split the records one line each using the SplitRecord/SplitText processor, then use the MergeContent processor to build 500 MB merges; this way no record gets split in the middle. Flow:
1. SplitRecord/SplitText // split the flowfile one line per record
2. MergeRecord/MergeContent // merge up to a 500 MB file size
To force-merge flowfiles, use the Max Bin Age property (e.g. 30 mins). If you are using record-oriented processors, you need to define a Record Writer/Reader with an Avro schema to read/write the flowfile. Refer to this link for more details about the MergeContent processor.
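The size-based binning that the split-then-merge flow relies on can be illustrated at a small scale. A sketch (not NiFi code) that packs whole records into bins up to a maximum size, so no record is ever split across outputs:

```python
def bin_records(records, max_bin_bytes):
    """Pack whole records (byte strings) into bins of at most max_bin_bytes each.

    A record larger than max_bin_bytes gets its own bin, mirroring how
    MergeContent never splits an individual flowfile across merged files."""
    bins, current, current_size = [], [], 0
    for rec in records:
        size = len(rec)
        if current and current_size + size > max_bin_bytes:
            bins.append(current)       # close the full bin
            current, current_size = [], 0
        current.append(rec)
        current_size += size
    if current:
        bins.append(current)           # flush the last partial bin
    return bins
```

In NiFi the "flush the last partial bin" step is what Max Bin Age triggers: a bin that never fills up still gets merged after the configured age.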
11-26-2018
05:29 PM
NO! If you are trying to escape input to generate a SQL query, you should never roll your own sanitization unless you fully trust the input. THIS IS VULNERABLE TO SQL INJECTION! You should be using the '?' parameter substitution in your PutSQL stage.
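The point about '?' placeholders holds for any SQL client, not just PutSQL. A minimal sketch using Python's sqlite3 to show why parameter substitution is safe where string concatenation is not (table and value are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

malicious = "x'); DROP TABLE users; --"

# UNSAFE (shown only as the anti-pattern): concatenating the input into the
# statement text would let it rewrite the SQL itself.
# conn.executescript("INSERT INTO users (name) VALUES ('" + malicious + "')")

# SAFE: the '?' placeholder passes the value as data, never parsed as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (malicious,))
row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # the hostile string is stored verbatim; the table is intact
```

In NiFi, PutSQL fills those '?' placeholders from the sql.args.N.value / sql.args.N.type flowfile attributes, so the same separation of statement and data applies.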
11-23-2018
10:26 PM
@Shu I am sorry I didn't mention this earlier: I actually use the correlation attribute "filename", and I would have around 500 filenames/second. Thanks for the links.
11-23-2018
05:54 AM
Thanks @Shu! It worked. I was using 'Return Type' as auto instead of json in EvaluateJsonPath. Thanks for the reply 🙂
11-20-2018
08:43 AM
2 Kudos
@Mahendra Hegde It's possible in NiFi, but it depends on how frequently you restart the NiFi instance. Use GenerateFlowFile (or another processor) and schedule it as shown below. With that schedule, the GenerateFlowFile processor runs once when you start it, and the next run happens after 1111111110 sec or when you restart the NiFi instance. For your case, adjust the Run Schedule time based on your NiFi restart interval.