Member since: 09-23-2015
Posts: 42
Kudos Received: 91
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1285 | 02-01-2016 08:56 PM |
|  | 2981 | 01-16-2016 12:40 PM |
|  | 6902 | 01-15-2016 01:14 PM |
|  | 5642 | 01-14-2016 09:37 PM |
|  | 7102 | 12-14-2015 01:02 PM |
12-15-2022 05:24 AM
This is working fine. Can we provide the Search Value and Replacement Value as a variable or flowfile attribute? I want to use the same ReplaceText processor to convert different input files with different numbers of columns. Basically, I want to parameterize the Search Value and Replacement Value in the ReplaceText processor. @mpayne @ltsimps1 @kpulagam @jpercivall @other
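A sketch of what I'm after, assuming those two ReplaceText properties accept the NiFi Expression Language (the attribute names below are made up):

```
# Set upstream, e.g. in UpdateAttribute, per input file type
search.value      = (\d+),(\d+),(\d+)
replacement.value = $1|$2|$3

# ReplaceText, reused unchanged for every file type
Search Value         = ${search.value}
Replacement Value    = ${replacement.value}
Replacement Strategy = Regex Replace
Evaluation Mode      = Line-by-Line
```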
05-11-2021 05:09 AM
1 Kudo
Thanks @VidyaSargur - I just started a new thread, per your suggestion.
04-13-2020 04:31 AM
You have to split the content line by line first, using the SplitText processor. Then use a regex to extract values with the ExtractText processor; it will add the extracted values as attributes on each flow file. Finally, use the ReplaceText processor to replace the flow file content with those attributes.
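A rough property sketch of that SplitText → ExtractText → ReplaceText chain (the regex and the attribute name `csvline` are just examples):

```
# SplitText -- one flow file per input line
Line Split Count = 1

# ExtractText -- a dynamic property; each capture group becomes
# an attribute (csvline.1, csvline.2, ...)
csvline = ^([^,]+),([^,]+)$

# ReplaceText -- rebuild the content from the attributes
Replacement Value    = ${csvline.2}|${csvline.1}
Replacement Strategy = Always Replace
```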
05-02-2017 06:29 PM
Thanks, this is very useful. How would one go about getting the application name? Is it the app name or the app ID or something else? Thanks!
05-24-2016 06:23 AM
Mr. Dyer, your article inspired me!! This is such an inspirational post. I'm a newbie to the Hadoop world, and you made my day. May God bless you with more technologies.
04-07-2017 06:15 PM
This would be really cool if you provided instructions on how you set up the processors.
04-22-2016 09:31 PM
Oh, and don't forget to add a PutSQL processor to actually execute the SQL statement that the ConvertJSONToSQL processor generates for you 🙂
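For example, ConvertJSONToSQL leaves behind something like the following for PutSQL to execute (the table and columns here are made up): the parameterized statement is the flow file content, and the values ride along as sql.args.N.* attributes:

```
-- flow file content
INSERT INTO users (id, name) VALUES (?, ?)

-- flow file attributes
sql.args.1.type  = 4       (java.sql.Types.INTEGER)
sql.args.1.value = 42
sql.args.2.type  = 12      (java.sql.Types.VARCHAR)
sql.args.2.value = Alice
```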
12-02-2015 03:51 PM
Update regarding the HDFS replication configuration for Solr files: there is an open JIRA for this, SOLR-6305 ("Ability to set the replication factor for index files created by HDFSDirectoryFactory").
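For reference, the factory in question is the one configured in solrconfig.xml, for example (the paths below are placeholders); index files written through it currently inherit the HDFS default replication factor:

```
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
</directoryFactory>
```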
11-28-2017 09:10 AM
Hi Team, I have tried the above and I see the job status KILLED after running the workflow. After launching it in Oozie, I can see the workflow change status from RUNNING to KILLED. Is there a way to troubleshoot this? I can run hadoop fs -ls commands on my S3 bucket, so I definitely have access. I suspect it's the S3 URL. I tried downloading the XML, changing the URL, and re-uploading it, with no luck. Any other suggestions? I appreciate all your help/support in advance. Regards, Anil
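In case it helps, here is what I plan to check next (the job and application IDs below are placeholders):

```
# Show the workflow status and the failing action's error message
oozie job -oozie http://oozie-host:11000/oozie -info 0000001-171128000000000-oozie-oozi-W

# Pull the full job log
oozie job -oozie http://oozie-host:11000/oozie -log 0000001-171128000000000-oozie-oozi-W

# Logs of the launcher application (ID taken from the action details)
yarn logs -applicationId application_1511800000000_0001
```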