Member since: 05-17-2017
Posts: 23
Kudos Received: 1
Solutions: 0
11-13-2017
03:20 AM
@Kiem Nguyen Can you set the policy on the process group itself?
1. Log in as admin.
2. Select the process group and click on the "Operate" controls on the left.
3. Click on the keys icon.
4. Remove user_A from the "View the component" and "Modify the component" policies.
Also make sure that the sub-components are not overriding any parent policies. What I mean is that if the sub-components have their own policies, then they do not inherit the policies of the parent component.
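If you want to double-check which policy actually applies to the process group, here is a rough Java sketch against the NiFi REST API (host, process group id, and token are placeholders, and TLS trust setup is omitted; the "read" and "write" actions correspond to "View the component" and "Modify the component"):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CheckProcessGroupPolicy {
    public static void main(String[] args) throws Exception {
        String nifiApi = "https://nifi-host:8443/nifi-api";       // placeholder
        String processGroupId = "<process-group-uuid>";           // placeholder
        String token = "<access token from POST /access/token>";  // placeholder

        // GET /policies/{action}/{resource} returns the access policy that applies
        // to the resource, including the users/groups assigned to it.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(nifiApi + "/policies/read/process-groups/" + processGroupId))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}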
10-24-2017
06:39 AM
Thanks @Abdelkrim Hadjidj, it works well. It is interesting that just using $.* captures the content. I understand JsonPath expressions better now. Thank you again :D
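For anyone else who lands here, a minimal standalone sketch of what $.* returns, using the Jayway JsonPath library (the sample JSON is invented for illustration):

import com.jayway.jsonpath.JsonPath;
import java.util.List;

public class JsonPathWildcardDemo {
    public static void main(String[] args) {
        String json = "{\"name\":\"flow-1\",\"status\":\"RUNNING\",\"queued\":42}";

        // "$.*" matches every value directly under the root object.
        List<Object> values = JsonPath.read(json, "$.*");
        System.out.println(values); // ["flow-1","RUNNING",42]
    }
}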
10-13-2017
09:01 AM
Thanks for the details @Abdelkrim Hadjidj. After researching, I reached the same summary as yours. We have two common ways to load balance: first, using an HA proxy between the NiFi cluster and the clients; alternatively, we can use one node for reception and then forward the data over Site-to-Site (S2S) via an RPG to distribute it across the cluster.
09-01-2017
12:22 PM
@Kiem Nguyen I highly recommend starting a new question in Hortonworks Community Connection for this. Diagnosing what caused your node to disconnect, and how to resolve it, is a different topic from how to stop a processor with a disconnected node. It would also be helpful to explain what you mean by "overloaded queue" and what makes you feel the size of your queue triggered your node to disconnect. What error did you see in the nifi-app.log on the node that disconnected? Thanks, Matt
06-14-2017
02:20 PM
@Kiem Nguyen The goal of the configuration is to have the total number of concurrent tasks match the number of partitions. So, with 4 partitions on the topics, you want 4 concurrent tasks in total. Since you have a 3-node cluster, configure your PublishKafka and ConsumeKafka processors with 2 concurrent tasks and you should be good. For an ideal situation, it would be better if they matched exactly. So, if possible, I would configure the Kafka topics with 6 partitions, or some other multiple of three.
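To make the arithmetic explicit (assuming one Kafka consumer per concurrent task on each node):

3 nodes x 2 concurrent tasks = 6 consumers for 4 partitions (works, but 2 consumers sit idle)
3 nodes x 2 concurrent tasks = 6 consumers for 6 partitions (each consumer gets exactly one partition)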
06-06-2017
01:53 PM
1 Kudo
The session provides methods to read and write the flow file content. If you are reading only, then session.read with an InputStreamCallback will give you an InputStream to the flow file content. If you are writing only, then session.write with an OutputStreamCallback will give you an OutputStream to the flow file content. If you are reading and writing at the same time, then a StreamCallback will give you access to both an InputStream and an OutputStream. In your case, if you are just looking to extract a value, then you likely need an InputStreamCallback, and you would use the InputStream to read the content and parse it appropriately for your data. You can look at examples in the existing processors: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExtractText.java#L313-L318 Keep in mind, the above example reads the whole content of the flow file into memory, which can be dangerous with very large flow files, so whenever possible it is best to process the content in chunks.
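For example, a minimal sketch of the read-only case with an InputStreamCallback (REL_SUCCESS, the attribute name, and the trivial "parsing" step are placeholders for whatever your processor actually needs):

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.commons.io.IOUtils;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.io.InputStreamCallback;

// Inside a processor class that extends AbstractProcessor:
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }

    final AtomicReference<String> extracted = new AtomicReference<>();

    // Read-only access: the callback hands you an InputStream over the flow file content.
    session.read(flowFile, new InputStreamCallback() {
        @Override
        public void process(final InputStream in) throws IOException {
            // Like the ExtractText example above, this buffers the whole content in memory;
            // for large flow files, parse the stream incrementally instead.
            final String content = IOUtils.toString(in, StandardCharsets.UTF_8);
            extracted.set(content.trim()); // replace with your real parsing logic
        }
    });

    flowFile = session.putAttribute(flowFile, "extracted.value", extracted.get());
    session.transfer(flowFile, REL_SUCCESS);
}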
06-05-2017
04:08 AM
It works well. Thanks Matt!!! Hadoop Configuration Resources property: point it to core-site.xml and hdfs-site.xml after copying them onto each NiFi node.
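For example, the property value is just a comma-separated list of the copied files (these paths are illustrative; use wherever you placed them on each node):

/etc/nifi/conf/core-site.xml,/etc/nifi/conf/hdfs-site.xml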
05-17-2017
10:06 AM
Thanks @Pierre Villard, that's exactly what I wanted: using an RPG pointing to the cluster itself.
10-30-2017
01:48 PM
Hello, this post is for ListenUDP, ListenTCP, ListenSyslog, and ListenRELP. The ListenWebSocket processor is implemented differently and does not necessarily follow what is described here. I'm not familiar with the websocket processor, but maybe others have experience with tuning it.