Member since: 04-11-2016
Posts: 471
Kudos Received: 325
Solutions: 118
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2681 | 03-09-2018 05:31 PM |
| | 3551 | 03-07-2018 09:45 AM |
| | 3240 | 03-07-2018 09:31 AM |
| | 5442 | 03-03-2018 01:37 PM |
| | 2951 | 10-17-2017 02:15 PM |
10-24-2016
10:17 AM
Could you share the logs you have for both NameNodes?
10-24-2016
10:14 AM
Hi @Jessika314 ninja, can you confirm that ZooKeeper is up and running? Could you check the logs? Did you try restarting the HDFS service through Ambari? Thanks.
10-21-2016
01:16 PM
Hi, if you auto-terminate a relationship, it means that flow files routed to this relationship will simply be deleted. This is typically done on the terminal processors of your workflow. Hope this helps.
09-29-2016
08:50 AM
There is absolutely nothing wrong with having one node act as both cluster coordinator and primary node. These are two different roles that can be held by the same node.
09-29-2016
08:10 AM
1 Kudo
@mayki wogno The workflow displayed on the canvas is executed on every node of your cluster. Consequently, unless you have configured your GetHDFS processor to run on the primary node only (in the Scheduling tab of the processor configuration), each node of your cluster will try to get the file from HDFS. This can create race conditions, so you should set this processor to run on the primary node only. The cluster page shows which node has been elected primary.

To balance the load when getting files from HDFS, you may want to use the List/FetchHDFS processor combination. The ListHDFS processor creates one flow file per listed file, containing the file's path, and the FetchHDFS processor actually retrieves the file from HDFS. By using a Remote Process Group on your canvas, you can evenly spread the flow files across your nodes, so each node is assigned different files to fetch from HDFS. You can find an example of what I mean at the end of this HCC article: https://community.hortonworks.com/content/kbentry/55349/nifi-100-unsecured-cluster-setup.html Hope this helps.
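As a sketch of the load-balanced pattern described above (the input port name is a placeholder; adapt it to your own flow), the canvas would look roughly like this:

```
ListHDFS (scheduled on primary node only)
    → Remote Process Group (pointing back at this same cluster)
        → Input Port "fetch"
            → FetchHDFS (HDFS Filename: ${path}/${filename})
                → downstream processing
```

The Remote Process Group is what redistributes the listing flow files across all nodes, so each FetchHDFS instance pulls a different subset of files.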
09-23-2016
08:47 AM
1 Kudo
@David Cuny, All the information you are looking for should be provided by the person in charge of setting up the SNMP agent you are querying. It depends on how the SNMP agent is configured on the device. The OID is the "address" of the information you want to retrieve. Say you want the current CPU usage of the device you are monitoring: you need to know the OID for that value (it is usually provided with the device in a MIB). Some widely used OIDs are listed on this blog, for example: http://www.debianadmin.com/linux-snmp-oids-for-cpumemory-and-disk-statistics.html Hope this helps.
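To make that concrete, here is a hedged example using the Net-SNMP command-line tools (the host, community string, and SNMP version are placeholders for whatever your agent is actually configured with); the OID is ssCpuIdle.0 from the UCD-SNMP MIB, one of the CPU OIDs covered by the blog above:

```
# Query the percentage of idle CPU time (UCD-SNMP-MIB::ssCpuIdle.0)
snmpget -v2c -c public 192.0.2.10 1.3.6.1.4.1.2021.11.11.0
```

If the agent refuses the request or returns "No Such Object", that usually means the OID is not exposed or the community/version is wrong, which is exactly the kind of detail the agent's administrator should confirm.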
09-22-2016
05:32 PM
I'd add, as a comment to Matt's answer, that you can check the content of your attributes using the Expression Language [1]. It provides many functions for working with attributes and applying conditions, so you can route your flow files if and only if they meet the conditions you want. [1] https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html
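For instance (the attribute names here are hypothetical), a RouteOnAttribute processor with a dynamic property like the following would route only flow files whose http.status attribute equals 200 and whose filename ends with .json:

```
matched = ${http.status:equals('200'):and(${filename:endsWith('.json')})}
```

Flow files matching the expression go to the "matched" relationship; everything else goes to "unmatched".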
09-22-2016
02:23 PM
2 Kudos
@Frank Maritato Your approach is a good one, and IMO it should work. I followed a similar approach in this blog post: https://pierrevillard.com/2016/04/04/analyze-flickr-account-using-apache/ In your screenshot, all your processors are stopped. When they are started, do you see errors on your InvokeHTTP processor? Can you inspect the requests being sent to confirm the increment is done correctly? Hope this helps.
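As an illustration of the incrementing pattern (the attribute name page is hypothetical), an UpdateAttribute processor in the loop could bump the offset before each call using Expression Language, and InvokeHTTP could then reference it in its Remote URL:

```
# UpdateAttribute dynamic property
page = ${page:toNumber():plus(1)}

# InvokeHTTP Remote URL (example endpoint)
https://api.example.com/items?page=${page}
```

Inspecting the provenance events on InvokeHTTP is a quick way to verify that the page value actually changes between requests.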
09-22-2016
07:56 AM
3 Kudos
@Mohan V Yes, MergeContent would be a solution to your problem. Use it before the PutFile processor in order to merge multiple flow files (each containing one JSON document) into one flow file (containing multiple JSON documents). You may want to have a look at the documentation here: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.MergeContent/index.html In particular, the 'Minimum Number of Entries' property specifies how many flow files you want merged into one single file. As a side note, once a processor is on your canvas, you can right-click it and choose 'Usage' to display that processor's documentation. Hope this helps.
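A minimal sketch of the relevant MergeContent properties for this use case (the values are examples to adapt, not recommendations):

```
Merge Strategy            = Bin-Packing Algorithm
Minimum Number of Entries = 100
Delimiter Strategy        = Text
Demarcator                = \n
```

With a newline demarcator, the merged flow file contains one JSON document per line, which PutFile then writes out as a single file.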
09-21-2016
09:07 PM
Yes, correct. You could also do both at once with the ExtractText processor, but you can definitely use UpdateAttribute and Expression Language functions to get the result you want. Also, if you only need the filename in a processor property that accepts Expression Language, you don't need the intermediate step. For example: ${mypath:substringAfterLast('/')} https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html Hope this helps.
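A couple of related Expression Language functions, assuming a hypothetical attribute mypath holding /data/in/file.csv:

```
${mypath:substringAfterLast('/')}     →  file.csv    (the filename)
${mypath:substringBeforeLast('/')}    →  /data/in    (the parent directory)
```

Both expressions can be used directly in any processor property that supports Expression Language, or in UpdateAttribute to materialize the values as new attributes.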