Member since: 04-11-2016
Posts: 471
Kudos Received: 325
Solutions: 118
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2070 | 03-09-2018 05:31 PM
 | 2631 | 03-07-2018 09:45 AM
 | 2529 | 03-07-2018 09:31 AM
 | 4388 | 03-03-2018 01:37 PM
 | 2468 | 10-17-2017 02:15 PM
05-02-2016
07:59 AM
Hi, could you have a look at the YARN logs? Just to be sure that there is enough memory to run the containers.
05-02-2016
07:40 AM
How many flow files are in the queue? It sounds like you used a processor that generates large numbers of FlowFiles and your machine could not handle the volume. You may want to consider NiFi's back pressure features, or change the scheduling of the upstream processors.
05-01-2016
10:03 AM
Hi Raj, if it helped solve your issue (if not, let me know), would you mind accepting the answer on this thread? It will help other users who are in the same situation when they search for relevant information. Thanks a lot.
05-01-2016
09:59 AM
1 Kudo
Hi Raj, do you have flows running in NiFi? If processors were running when NiFi was stopped, they will be running again when NiFi starts. If so, could you tell us which processors are running? Also, could you have a look in the logs (nifi-app.log) and copy/paste here anything that could be relevant? Regarding the repositories: the Content Repository holds the content of all the FlowFiles in the system, and the FlowFile Repository keeps track of the attributes and current state of each FlowFile. (I'd recommend having a look here: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html)
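For reference, the locations of these repositories can be checked (and moved to a bigger disk if needed) in nifi.properties. The paths below are the stock defaults from the Administration Guide; your install may differ:

```properties
# Where FlowFile metadata (attributes, current state) is persisted
nifi.flowfile.repository.directory=./flowfile_repository

# Where FlowFile content is persisted (this one can grow large)
nifi.content.repository.directory.default=./content_repository
```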
04-30-2016
01:44 PM
5 Kudos
Hi Raj, what you are trying to achieve is possible, and no other components are needed.

A FlowFile is composed of a content part and an attributes part (a key/value map). When you use the GetSFTP processor, the content of your HL7 messages goes into the content part of the created FlowFiles. When you use the ExtractHL7Attributes processor, it extracts parts of that content and sets them as new key/value pairs in the attributes part. At the end, PutFile only writes the content of the incoming FlowFile; this is why you don't see any modification.

As an aside: when you add a processor to the canvas, you can right-click on it and then click Usage to get the complete documentation for that processor.

One option would be to extract all the information you want from the HL7 message using the ExtractHL7Attributes processor; then, once you have all the attributes you want, use an AttributesToJSON processor to create FlowFile content from the attributes part, and finally a PutHBaseJSON processor to store the data into HBase. A source of inspiration would be this article (even if the destination is not HBase at the end): https://community.hortonworks.com/articles/20318/visualize-patients-complaints-to-their-doctors-usi.html

Hope that helps.
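To illustrate, AttributesToJSON turns selected FlowFile attributes into a JSON document. The attribute names below are purely illustrative (actual names depend on the HL7 message and the processor configuration), but the output would look something like:

```json
{
  "MSH.7": "20160430123000",
  "PID.5": "DOE^JOHN",
  "OBX.5": "120/80"
}
```

That JSON content is then what PutHBaseJSON writes into HBase, with each key/value becoming a column.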
04-30-2016
01:34 PM
No, this is because I didn't export every variable out of the code. If you look at the main class describing the topology (https://github.com/pvillard31/storm-twitter/blob/master/src/main/java/fr/pvillard/storm/topology/Topology.java), you will see that I reference the entry points to write into HDFS (line 38) and into Hive (line 36). You should update this class with your own parameters and rebuild the topology with Maven (or, better, export the variables as arguments that you pass on the command line).
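As a sketch of the "pass them as arguments" approach — class and argument names here are hypothetical, not taken from the linked repository:

```java
// Hypothetical sketch: read cluster-specific endpoints from CLI arguments
// instead of hard-coding them in the topology class.
public class TopologyArgs {

    // Validates and returns the two expected endpoints.
    static String[] parse(String[] args) {
        if (args.length < 2) {
            throw new IllegalArgumentException(
                "usage: storm jar topology.jar <mainClass> <hdfsUrl> <hiveMetastoreUri>");
        }
        return new String[] { args[0], args[1] };
    }

    public static void main(String[] args) {
        String[] conf = parse(args);
        String hdfsUrl = conf[0];          // e.g. hdfs://namenode:8020
        String hiveMetastoreUri = conf[1]; // e.g. thrift://metastore:9083
        System.out.println("HDFS: " + hdfsUrl + ", Hive: " + hiveMetastoreUri);
        // ...build the bolts with these values instead of compiled-in constants...
    }
}
```

This way a rebuild is only needed when the topology logic changes, not when it moves to another cluster.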
04-30-2016
09:04 AM
1 Kudo
As suggested by Ravi, have a look at: https://community.hortonworks.com/articles/30213/us-presidential-election-tweet-analysis-using-hdfn.html It will show you how to connect NiFi and Spark if you want to use this kind of architecture to perform machine learning. Regarding your second question, GetTwitter does not accept incoming connections. What are you trying to achieve?
04-30-2016
09:00 AM
Try to run the topology in local mode. It will be easier to see what is happening.
04-29-2016
04:27 PM
I am sorry... I didn't look carefully the first time... In NiFi, we use the 'transport' port (9300 by default) to exchange with Elasticsearch, not the 'http' port (9200 by default). Can you check whether it works with port 9300?
04-29-2016
03:13 PM
Could you check that the issue is not on the Elasticsearch side by sending your JSON message manually? (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html)

curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
  "user" : "kimchy",
  "post_date" : "2009-11-15T14:12:12",
  "message" : "trying out Elasticsearch"
}'