Member since: 07-19-2018
613 Posts | 100 Kudos Received | 117 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3144 | 01-11-2021 05:54 AM |
| | 2244 | 01-11-2021 05:52 AM |
| | 5998 | 01-08-2021 05:23 AM |
| | 5571 | 01-04-2021 04:08 AM |
| | 25776 | 12-18-2020 05:42 AM |
12-18-2020
05:40 AM
1 Kudo
@hakansan The error is stating that your hard drive is full: could not write to file "pg_logical/replorigin_checkpoint.tmp": No space left on device. The solution is to investigate cleaning out some files to free up space, expanding the disk, etc. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your use case please create a separate topic and feel free to tag me in your post. Thanks, Steven
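The investigation above can be sketched from the command line. A minimal example, assuming the PostgreSQL data directory lives under `/var/lib/pgsql/data` (the path is an assumption; adjust to your install):

```shell
# Show overall disk usage; the filesystem hosting the PostgreSQL data
# directory is the one reported at or near 100% full.
df -h

# Rank the largest items under the data directory (path is an example)
# to decide what can be cleaned up, archived, or moved to a bigger disk.
du -sh /var/lib/pgsql/data/* 2>/dev/null | sort -rh | head -10
```

Expanding the disk or relocating the data directory are the longer-term fixes once you know where the space went.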
12-15-2020
05:06 AM
@jainN Great-looking flow. The modification you need is simply to remove the json route that is combined with csv, and connect the json route from Notify to FetchFile. You may also need to adjust Wait/Notify so that the csv is released when you want. Wait/Notify is often tricky, so I would recommend experimenting with it until you understand its behavior. Here is a good article: https://community.cloudera.com/t5/Community-Articles/Trigger-based-Serial-Data-processing-in-NiFi-using-Wait-and/ta-p/248308 You may find other articles and posts here if you do some deeper research on Wait/Notify. Thanks, Steven
12-14-2020
01:12 PM
@jainN If you are looking to route flowfiles whose names end in .json versus those that do not, check out RouteOnAttribute with a property similar to json => ${filename:endsWith('.json')}. You would use this after your method of choice to list/fetch the files, which provides a filename attribute for every flowfile. With this json property added to RouteOnAttribute you can drag the json relationship to a triggering flow, and send everything else (not json: unmatched) to a holding flow. NiFi Wait/Notify should be able to provide the trigger logic, but there are many other ways to do it without Wait/Notify by using another datastore, map cache, etc. For example, your non-json flow could simply write to a new location and finish; your json flow can then process that new location some known amount of time later. The exact logic depends on your use case, of course; the point is to use RouteOnAttribute to split your flow. Thanks, Steven
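The ${filename:endsWith('.json')} predicate can be emulated outside NiFi to illustrate how the split behaves. A sketch with example filenames (not from any real flow):

```shell
# Emulate RouteOnAttribute's json-vs-unmatched split: anything ending
# in .json goes to the "json" route, everything else to "unmatched".
for f in data.json report.csv config.json notes.txt; do
  case "$f" in
    *.json) echo "$f -> json route" ;;
    *)      echo "$f -> unmatched route" ;;
  esac
done
```

In NiFi itself this is a single dynamic property on RouteOnAttribute; each matching property name becomes a relationship you can wire to a downstream flow.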
12-14-2020
07:05 AM
@toutou From your HDFS cluster you need hdfs-site.xml and the correct configuration for PutHDFS. You may also need to create a user with permissions on the HDFS location. Please share your PutHDFS processor configuration and error information so community members can respond with the specific feedback required to solve your issue. Thanks, Steven
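For reference, PutHDFS's "Hadoop Configuration Resources" property points at the site files copied from the cluster. A minimal core-site.xml fragment, where the namenode hostname and port are placeholders for your cluster's values:

```xml
<!-- Minimal core-site.xml sketch; the hostname and port are examples.
     PutHDFS's "Hadoop Configuration Resources" should reference this
     file plus the hdfs-site.xml taken from the cluster. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```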
12-13-2020
10:05 PM
Yes, that's the correct answer and it works. But is there any other workaround? We have disabled exec for security reasons, so how can we achieve this?
12-04-2020
05:16 AM
@SandeepG01 Ah, no fun with bad filenames. Spaces in filenames are strongly discouraged these days. That said, a solution you might try is to escape the space with a backslash (\), especially in the context of passing the filename in flowfile attributes. If you still need to allow spaces and cannot resolve it upstream (by not using spaces), I would suggest submitting your experience on the NiFi Jira as a bug: https://issues.apache.org/jira/projects/NIFI/issues Thanks, Steven
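The backslash escape above can be demonstrated in a shell. A small sketch using an example file under /tmp (path and filename are illustrative only):

```shell
# Create a file whose name contains a space.
mkdir -p /tmp/space-demo
touch "/tmp/space-demo/my file.txt"

# Unescaped, the shell would split the name on the space; with the
# backslash escape the full filename resolves correctly.
ls /tmp/space-demo/my\ file.txt
```

The same idea applies when a filename attribute is interpolated into a command line: the space must be escaped or the whole path quoted.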
12-02-2020
12:26 AM
@stevenmatison The solution I found is to get the OAuth2 token from Salesforce by using curl, as explained on this page: https://www.jitendrazaa.com/blog/salesforce/using-curl-with-salesforce-rest-api/ So I created an ExecuteProcess NiFi processor and passed it the file C:/loginInfo.txt, which contains: grant_type=password&client_id=3MVG9iTxZANhwsdsdsdsdspr0LstjR3sRat&client_secret=21961212323233121943&username=jitendra.zaa@demo.com&password=myPWDAndSecurityToken Then I get a response with the authentication token 🙂 (You can use the command curl -X POST -d @loginInfo.txt https://test.salesforce.com/.... to test the connection between the local machine and Salesforce.)
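The token request above can be sketched as a runnable pair of steps. All credential values here are placeholders (the originals in the post are examples from the linked blog, not real secrets), and the actual curl call is left commented out since it requires a live Salesforce org:

```shell
# Write the password-grant form body to a file; every value below is a
# placeholder to be replaced with your org's connected-app credentials.
cat > loginInfo.txt <<'EOF'
grant_type=password&client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&username=user@example.com&password=PasswordPlusSecurityToken
EOF

# Uncomment to request a token from a sandbox org:
# curl -X POST -d @loginInfo.txt https://test.salesforce.com/services/oauth2/token

echo "wrote $(wc -c < loginInfo.txt) bytes to loginInfo.txt"
```

The JSON response from a successful call contains an access_token field that downstream NiFi processors can extract.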
12-01-2020
08:28 AM
1 Kudo
The problem is that you need something to store the dynamic schemas in. That is where the Schema Registry comes in, as it provides a UI and API to add/update/delete schemas, which can then be referenced from NiFi. It looks like AvroSchemaRegistry allows you to do something similar, minus the UI/API. So you would need to create your schema in your flow, as an attribute, and send that to an AvroReader configured against AvroSchemaRegistry. You could use some other data store to hold these schemas, but you would need to pull them out into an attribute of the same name configured in the reader and registry. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-registry-nar/1.12.1/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html The latter method does not give you a way to manage all the schemas, which is why I reference the Hortonworks Schema Registry, which does include the ability to manage and version the actual schemas.
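For concreteness, the kind of schema being stored and referenced is a plain Avro schema document. A minimal example (record and field names are illustrative, not from the original discussion):

```shell
# Write a minimal Avro schema like one you would register, either as a
# named property on AvroSchemaRegistry or in an external registry.
cat > user.avsc <<'EOF'
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": "string"}
  ]
}
EOF
cat user.avsc
```

In the attribute-driven approach, this JSON text is what ends up in the flowfile attribute that the reader and registry are configured to look up.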
12-01-2020
04:59 AM
@Vamshi245 Yes, HandleHttpRequest and HandleHttpResponse are used in tandem. Behind the processors is a map cache that holds the connection session between the request and response processors. If the flowfile (json) coming out of your custom HandleHttpRequest is delivered to a stock HandleHttpResponse, it will send the json back to the original connecting client. Thanks, Steven
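The request/response pair can be exercised from the command line once the flow is running. A sketch that only prints the curl invocation, since host, port, and path are assumptions (the port is whatever you set in HandleHttpRequest's Listening Port property):

```shell
# Build an example JSON payload and print the curl command you would
# run against the flow; nifi-host, 8081, and /contentListener are all
# placeholders for your own configuration.
PAYLOAD='{"hello": "nifi"}'
echo "curl -X POST -H 'Content-Type: application/json' -d '$PAYLOAD' http://nifi-host:8081/contentListener"
```

Whatever body the flow delivers to HandleHttpResponse comes back to this client as the HTTP response.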
11-30-2020
11:57 AM
As suggested above, update your post with your processor and its reader and writer settings. It sounds like you have something misconfigured. If possible, show us a screenshot of your flow too.