Member since: 06-23-2016
43 Posts
3 Kudos Received
0 Solutions
06-20-2017
09:14 AM
Thanks for the update.
11-09-2016
11:09 AM
@Bryan Bende @jfrazee We have a use case where we want to parse both HL7 and CCDA messages. Is there a way we can parse CCDA messages as well?
Labels:
- Apache Hive
- Apache NiFi
11-09-2016
06:26 AM
@Artem Ervits it's dummy data.
11-08-2016
05:24 PM
Thanks! It worked when I changed it to "flowfile-content". I have one more question: is it possible to extract a single segment from an HL7 message? For example, below is my message:
MSH|^~\&|||||20160229002413.415-0500||MDM^T02|7|P|2.3
EVN|T02|201602290024
PID|1||599992601||cunningham^beatrice^||19290611|F
PV1|1|O|Burn center^60^71
TXA|1|CN|TX|20150211002413||||||||DOC-ID-10001|||||AU||AV
I want to extract only the PID segment from this message. The output should be:
PID|1||599992601||cunningham^beatrice^||19290611|F
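For what it's worth, the segment-pull itself is a one-regex job; here is a minimal sketch (not the NiFi processor itself) that grabs one segment by its ID, assuming segments are newline-separated as in the message above. In NiFi the same regex idea could be tried with an ExtractText processor.

```python
import re

def extract_segment(message, segment_id):
    """Return the first segment whose ID matches, or None if absent."""
    pattern = r"^" + re.escape(segment_id) + r"\|[^\r\n]*"
    match = re.search(pattern, message, re.MULTILINE)
    return match.group(0) if match else None

# The sample MDM^T02 message from the post above.
hl7 = (
    "MSH|^~\\&|||||20160229002413.415-0500||MDM^T02|7|P|2.3\n"
    "EVN|T02|201602290024\n"
    "PID|1||599992601||cunningham^beatrice^||19290611|F\n"
    "PV1|1|O|Burn center^60^71\n"
    "TXA|1|CN|TX|20150211002413||||||||DOC-ID-10001|||||AU||AV"
)

print(extract_segment(hl7, "PID"))
# PID|1||599992601||cunningham^beatrice^||19290611|F
```

The same pattern (`^PID\|[^\r\n]*` in multi-line mode) should work wherever you can apply a regex to the flow file content.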
11-08-2016
05:24 PM
I'm parsing HL7 using the ExtractHL7Attributes processor. My data flow looks like this: I'm getting the above error when storing into an HBase table. The message I'm storing is mdm-t02-x2.txt. I printed the flow file using a custom processor; attached is the output of the AttributesToJSON processor (json-output.txt). I can see the generated JSON in the attached file, yet it still doesn't get stored in the HBase table and throws the above error. Can someone please help? Thanks
Labels:
- Apache Hive
- Apache NiFi
11-08-2016
02:23 PM
Hi @Bryan Bende, I have a question about the error "org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow".
I had a 3-node cluster. I was working on one of the nodes while the other two were down, and I created some data flows. Now I want to replicate the flows to the other two nodes, so I simply restarted them, assuming they would pick up the flow automatically, but I got the above error. I then deleted the old flow.xml.gz, users.xml, and authorizations.xml from the two down nodes, cleaned the log folder, and restarted again, but still got the same error in the logs. Attached are the logs: nifi-app.txt.
Am I doing something wrong here? What is the best way to replicate the flow when adding a new node to a running cluster, or when bringing a down node back up in an existing cluster?
10-12-2016
01:21 PM
Yes, the mentioned DN was part of the Node Identities. What is /proxy? I cannot find it under the NiFi setup. Is it an auto-generated file?
10-12-2016
01:10 PM
I'm trying to set up a 4-node secure NiFi cluster. I have added all the required properties, and I can see the nodes sending heartbeats in the logs, but in the UI I'm getting an "Untrusted proxy" message (error screenshot attached). authorizers.xml contains:
<authorizer>
    <identifier>file-provider</identifier>
    <class>org.apache.nifi.authorization.FileAuthorizer</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial Admin Identity">CN=myAdmin, OU=MY-ORG</property>
    <property name="Legacy Authorized Users File"></property>
    <property name="Node Identity 1">CN=hostname1, OU=NIFI</property>
    <property name="Node Identity 2">CN=hostname2, OU=NIFI</property>
    <property name="Node Identity 3">CN=hostname3, OU=NIFI</property>
    <property name="Node Identity 4">CN=hostname4, OU=NIFI</property>
</authorizer>
Labels:
- Apache NiFi
10-05-2016
02:45 PM
That's true, writing 1M small messages to HDFS doesn't make any sense; I was only doing it to cover a test case. My second use case is to write the data into one single flow file, and I can use the "Message Demarcator" property of ConsumeKafka for that. I just wanted to understand what exactly happens if I set it to 10000 or any other number: does it write that many messages to a single flow file?
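Conceptually, the demarcator turns each polled batch of messages into one FlowFile, with the demarcator bytes between consecutive messages. Here's a rough sketch of that idea (an illustration, not NiFi's actual implementation; the batch size is made up):

```python
# Conceptual sketch of batching with a message demarcator: each batch of
# small messages becomes a single FlowFile body, joined by the demarcator
# bytes (here a newline).
def batch_into_flowfiles(messages, batch_size, demarcator=b"\n"):
    flowfiles = []
    for i in range(0, len(messages), batch_size):
        flowfiles.append(demarcator.join(messages[i:i + batch_size]))
    return flowfiles

msgs = [b"msg-%d" % i for i in range(10)]
flowfiles = batch_into_flowfiles(msgs, batch_size=4)
print(len(flowfiles))   # 3 FlowFiles instead of 10
print(flowfiles[0])     # b'msg-0\nmsg-1\nmsg-2\nmsg-3'
```

So with a demarcator set, 1M tiny messages would come out as far fewer, larger FlowFiles rather than 1M individual files on HDFS.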
10-05-2016
11:46 AM
@mclark Thanks for your response. I'm posting 1 million messages of 4-5 bytes each to a Kafka topic in a single run. My intent is to know the time taken to ingest these 1M messages into HDFS.
1. One way I can think of is to do everything manually: note the start time of the process and check again when all messages have been ingested into HDFS. But this will not work if I set the "Message Demarcator" property, since there would be fewer files on HDFS and I would never know when NiFi has stopped/completed writing.
2. Another way I can think of is to write a custom processor that captures the start and end time of the process.
3. Is this the correct approach?
4. Is there any benchmarking post/document available that gives some figures on NiFi ingestion speed?
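Once start and end timestamps are captured (e.g. by the custom processor in option 2), the throughput arithmetic itself is trivial; this is only a sketch of that computation, with made-up numbers rather than measured NiFi figures:

```python
# Sketch of the ingestion-rate calculation from captured timestamps.
# All numbers below are illustrative, not benchmark results.
def ingest_rate(num_messages, start_ts, end_ts):
    """Messages per second over the measured window."""
    elapsed = end_ts - start_ts
    return num_messages / elapsed

# e.g. 1,000,000 messages ingested between t=100.0s and t=300.0s:
rate = ingest_rate(1_000_000, 100.0, 300.0)
print(rate)  # 5000.0 messages per second
```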