Member since
07-19-2018
613
Posts
100
Kudos Received
117
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3144 | 01-11-2021 05:54 AM |
| | 2247 | 01-11-2021 05:52 AM |
| | 6001 | 01-08-2021 05:23 AM |
| | 5574 | 01-04-2021 04:08 AM |
| | 25789 | 12-18-2020 05:42 AM |
10-29-2020
04:39 AM
@Kaur It appears your NiFi node does not have enough system RAM for the 2g/4g heap settings. I suggest increasing the node specification to at least 8 GB or 16 GB of system RAM and testing the bootstrap config with 2g/4g or 4g/8g respectively. If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic please comment here or feel free to private message me. If you have new questions related to your use case please create a separate topic and feel free to tag me in your post. Thanks, Steven
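For reference, the 2g/4g heap sizes discussed above map to the `java.arg` lines in NiFi's `conf/bootstrap.conf`. A minimal sketch (values are examples; tune them to the node's actual RAM):

```
# conf/bootstrap.conf -- JVM heap settings for NiFi
# 2g min / 4g max as an example; try 4g/8g on a 16 GB node
java.arg.2=-Xms2g
java.arg.3=-Xmx4g
```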
10-19-2020
01:21 PM
The solution you are looking for is ReplaceText: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.12.0/org.apache.nifi.processors.standard.ReplaceText/ You can find loads of examples here in the forum with this search: https://community.cloudera.com/t5/forums/searchpage/tab/message?advanced=false&allow_punctuation=false&q=replaceText Thanks, Steven
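As a rough sketch, a typical ReplaceText configuration looks like the following. The property names come from the processor documentation linked above; the regex and replacement values here are placeholders, not from the original thread:

```
Replacement Strategy : Regex Replace
Search Value         : (?s)(^.*$)        # placeholder: capture the whole content
Replacement Value    : $1                # placeholder: back-reference, edit as needed
Evaluation Mode      : Entire text
```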
10-16-2020
01:53 AM
After not being able to find a solution that would be easy to implement inside of NiFi, I've written a small perl (yuk) script that can be used to adjust timestamps in a CSV file to be in ISO8601 format. Maybe it is useful to someone else:

```perl
#!/bin/perl -w
# This perl script adds timezone information to timestamps without a
# timezone. All timestamps in the input file that follow the format
# "YYYY-MM-DD HH:MM:SS" are converted to ISO8601 timestamps.
use strict;
use DateTime::Format::Strptime;

my $time_zone = 'Europe/Amsterdam';
my $parser = DateTime::Format::Strptime->new(
    pattern   => '%Y-%m-%d %T',
    time_zone => $time_zone
);
my $printer = DateTime::Format::Strptime->new(
    pattern   => '%FT%T%z',
    time_zone => $time_zone
);

while (<>) {
    # Rewrite each quoted "YYYY-MM-DD HH:MM:SS" value in place.
    s/(?<=")(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d)(?=")/
        my $dt = $parser->parse_datetime($1);
        $printer->format_datetime($dt);
    /ge;
    print;
}
```
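Not part of the original post, but for readers without the DateTime CPAN modules, here is a rough Python 3.9+ equivalent of the script above (same quoted-timestamp regex, same Europe/Amsterdam zone):

```python
# Rough Python 3.9+ port of the Perl script: quote-delimited
# "YYYY-MM-DD HH:MM:SS" timestamps are rewritten as ISO8601
# with the Europe/Amsterdam UTC offset appended.
import re
from datetime import datetime
from zoneinfo import ZoneInfo

TZ = ZoneInfo("Europe/Amsterdam")
PATTERN = re.compile(r'(?<=")(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})(?=")')

def to_iso8601(line: str) -> str:
    """Convert every quoted naive timestamp in `line` to ISO8601."""
    def repl(m: re.Match) -> str:
        dt = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        return dt.replace(tzinfo=TZ).strftime("%Y-%m-%dT%H:%M:%S%z")
    return PATTERN.sub(repl, line)
```

Feeding it a CSV line such as `"2020-06-01 12:00:00"` yields `"2020-06-01T12:00:00+0200"` (CEST in summer).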
10-01-2020
04:24 PM
Hi Steven. Thanks for the quick response. I'm running this HDP cluster on SUSE 12 SP2. This node has 32 GB RAM and is using just 4; free RAM is 27 GB. The YARN configuration is:

ResourceManager Java heap size = 2048
NodeManager Java heap size = 1024
AppTimelineServer Java heap size = 8072

ulimit used by the RM process:

core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 128615
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

From the RM log file:

2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:getMinimumAllocation(1367)) - Minimum allocation = <memory:1024, vCores:1>
2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:getMaximumAllocation(1379)) - Maximum allocation = <memory:24576, vCores:3>

No matter how much memory is assigned to the RM, it always fails with this Java OOM. What would be a recommended Java memory configuration for the YARN components?
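For context, the heap sizes quoted above are usually applied through yarn-env (in Ambari: YARN > Configs). A sketch of the underlying variables, assuming an Ambari-managed yarn-env.sh; values are in MB and are only the ones from the post, not a recommendation:

```
# yarn-env.sh (Ambari-managed) -- daemon heap sizes in MB
export YARN_RESOURCEMANAGER_HEAPSIZE=2048
export YARN_NODEMANAGER_HEAPSIZE=1024
export YARN_TIMELINESERVER_HEAPSIZE=8072
```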
10-01-2020
09:48 AM
@aniket5003 NiFi can be added to Ambari using one of the HDF management packs. Depending on your relationship with Cloudera, you may need to use your account to get the NiFi 1.12.1 management pack. I do know other versions are out on the open internet (1.9 and below), but the newest versions require a Cloudera username and password to access the repos and artifacts. Once you have a management pack added to Ambari, you should be able to install NiFi and other HDF components in an HDP cluster. Additionally, you can get 1.12.1 from nifi.apache.org and install it outside of the Ambari interface if you need something quick. Thanks, Steven
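For reference, a management pack is registered on the Ambari server host with `ambari-server install-mpack`. A sketch, assuming the mpack tarball has already been downloaded (the path and version are placeholders):

```
# run on the Ambari server host; tarball path/version are placeholders
ambari-server install-mpack \
  --mpack=/tmp/hdf-ambari-mpack-<version>.tar.gz \
  --verbose
ambari-server restart
```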
09-30-2020
08:41 AM
@Elf IMO anything is possible with Ambari. That said, out of the box it would not appear to be possible without some advanced Ambari admin skills. I took a look at the link you provided: that is an example of how to spin up a single machine with many of the services you may already have in your Ambari cluster. To install Griffin in an Ambari cluster you would need to pick a node, install Griffin and any missing requirements (services/components not in your cluster), and thoughtfully modify the configuration to use the existing services from the Ambari cluster. For example, feed Griffin your configuration locations for Hadoop, HDFS, Hive, etc., and do NOT follow the specific directions to install those parts from the sample documentation. If you do decide to go down this path, please update here with your progress or create new questions with any specific errors you hit.
09-30-2020
07:04 AM
@ujay Of course. The xml referenced by the link is a template file. Click through, get the raw xml code, and save it to a file. From there you import the template. Then in the upper navigation grab the template icon and drag it to the NiFi canvas. It should automatically choose the last template uploaded. Once the template is on the canvas, click through into the process group it created. You will need to do some work in Controller Services, so check out the notes in the red box. The flow is an example of how to generate many flow files and detect duplicates. Be sure to do some research on the processor to understand how others have resolved working with it as you begin to integrate this into your own flow. This community is a great research tool too.
09-30-2020
05:08 AM
@praneet Adding the value to the processor with (+) is a suitable method; you just need to make sure you get the right string in that field. It's blocked out, but it appears the value is not just the actual token but the token prepended with "Bearer". Try just the token string. One thing I like to do for any API, before I start on the invokeHttp configuration, is to use Postman to identify all of the settings required to connect to the API. Once that works, I can definitively ensure that NiFi invokeHttp is sending the same request. Thanks, Steven
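To make the distinction above concrete, here is a tiny illustration (not from the original thread; the helper name is hypothetical) of the "Bearer "-prefixed Authorization value versus the raw token:

```python
# Hypothetical helper: build an Authorization header for a bearer-token API.
# Many APIs expect the scheme "Bearer", a single space, then the raw token;
# sending only the token, or doubling the prefix, is a common mistake.
def bearer_header(token: str) -> dict:
    """Return a headers dict with a Bearer-prefixed Authorization value."""
    return {"Authorization": f"Bearer {token}"}

headers = bearer_header("abc123")
```

Whichever form the API actually wants, the point stands: confirm it in Postman first, then mirror the exact header in invokeHttp.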
09-30-2020
05:03 AM
@Manoj90 In addition to @PVVK's point, you need to be careful with routing relationships back to the originating processor. During development I like to use an output port that I call End of Line, or EOL1, EOL2, EOL3 as I need more in larger flows. This lets me evaluate whether something goes to fail, retry, etc. Later, once I am certain the flow is working as I need, I either auto-terminate these routes or route them out of my process group to an event notification system. It looks like this: using an output port to hold unneeded routes during testing. Thanks, Steven
09-30-2020
04:52 AM
@mansu You need to make sure the files and their locations have the correct permissions for the nifi user. For example, on Linux: chown -R nifi:nifi /path/to/files Thanks, Steven