Member since: 09-29-2015
Posts: 31
Kudos Received: 34
Solutions: 18
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1404 | 08-07-2018 11:16 PM
 | 3445 | 03-14-2018 02:56 PM
 | 3235 | 06-15-2017 10:13 PM
 | 10530 | 06-05-2017 01:40 PM
 | 6261 | 05-17-2017 02:52 PM
08-07-2018
11:16 PM
As a result of the HDF build versioning and how the NiFi extension manager handles versions, there is, unfortunately, one additional NAR needed. You should also provide the nifi-standard-services-api-nar-1.5.0.3.1.2.0-7.nar that coincides with nifi-aws-service-api-nar-1.5.0.3.1.2.0-7.nar. Without it, I would expect your log to contain warnings that the needed standard services API NAR could not be found. This additional requirement is a byproduct of the HDF-specific builds.
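As a rough sketch of placing these NARs, assuming NiFi is installed under /opt/nifi (a hypothetical path; adjust for your installation), copying them into the lib directory and restarting might look like:

```shell
# Hypothetical install path; adjust to your environment
cp nifi-aws-service-api-nar-1.5.0.3.1.2.0-7.nar /opt/nifi/lib/
cp nifi-standard-services-api-nar-1.5.0.3.1.2.0-7.nar /opt/nifi/lib/
/opt/nifi/bin/nifi.sh restart
```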
03-14-2018
02:56 PM
Hi @Akananda Singhania, I suspect the network configuration on your Docker Engine host is incorrect. Running the image you listed works as anticipated in a few of the environments available to me. Let's try to confirm this suspicion by running the following:

    docker run busybox ping -c 1 files.grouplens.org

You should receive output similar to the following. If not, the configured DNS server is not appropriately routing to external sites.

    PING files.grouplens.org (128.101.34.235): 56 data bytes
    64 bytes from 128.101.34.235: seq=0 ttl=37 time=39.263 ms
    --- files.grouplens.org ping statistics ---
    1 packets transmitted, 1 packets received, 0% packet loss
    round-trip min/avg/max = 39.263/39.263/39.263 ms

Could you provide more details about the environment in which you are running Docker? Of interest would be the output of:

    cat /etc/resolv.conf

Another option is to explicitly specify a DNS server, such as those Google makes available, via a command like:

    docker run --dns 8.8.8.8 -d -p 8080:8080 apache/nifi
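If the explicit --dns flag resolves the issue, the setting can be made persistent for all containers through the Docker daemon configuration rather than per-run flags. A minimal sketch (/etc/docker/daemon.json is the default location on Linux hosts; the daemon must be restarted for it to take effect):

```json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
```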
06-16-2017
04:15 PM
Definitely thought I had the link on there before. Specifically, you can find it here: https://github.com/apache/nifi-minifi/blob/master/minifi-docs/src/main/markdown/System_Admin_Guide.md#flowstatus-script-query
06-15-2017
10:13 PM
The best option for inquiring about the state of the flow is the FlowStatus querying functionality, which you can use to interpret the flow. It is roughly analogous to the statistics available in the NiFi UI. Building upon this, especially in the context of C2, involves important ideas for operational ease and for understanding how instances are behaving; hopefully the FlowStatus querying is helpful in the interim until a more feature-rich mechanism is in place. Issuing a manual flush of the queue is not likely to be possible, so settings such as backpressure and expiration periods become very important on connections to help mitigate against such issues.
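As a sketch of what such a query looks like, the FlowStatus script takes a query string naming the component and the facets to report; the processor name below is a hypothetical example:

```shell
# Query health, stats, and bulletins for a processor named TailFile (hypothetical name)
./bin/minifi.sh flowStatus processor:TailFile:health,stats,bulletins
```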
06-05-2017
01:40 PM
3 Kudos
You would create a dynamic attribute to select the key of interest. Consider the following sample doc:

    {
      "key-1": "value 1",
      "key-2": "value 2",
      "key-3": "value 3"
    }

In this case, if we needed to route based on the value of key-2, we could create a property in the processor with the value $.key-2 and assign it a name such as "routing.value". With that bit of information extracted as an attribute, we can feed the flowfiles onward from the EvaluateJsonPath processor.

If the locations you mention are file locations, we could potentially just use a PutFile with Expression Language to specify a path making use of routing.value, by defining Directory to use the attribute with something like "/path/to/my/data/${routing.value}".

A more powerful and flexible approach is to send each flowfile to a RouteOnAttribute processor after EvaluateJsonPath. There we can define routes for each of the cases and allow flowfiles to be sent on to other NiFi components. For instance, maybe some things go to disk and others go to JMS. We can create relationships in RouteOnAttribute that then allow us to connect each type to its respective processing path.
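To make the RouteOnAttribute path concrete, here is a sketch of the dynamic properties involved; the property names, route names, and the routing.value attribute are assumptions carried over from the example above, not prescribed values:

```text
EvaluateJsonPath (Destination: flowfile-attribute)
    routing.value  =>  $.key-2

RouteOnAttribute (each dynamic property becomes a named relationship)
    to-disk  =>  ${routing.value:equals('value 2')}
    to-jms   =>  ${routing.value:equals('value 3')}
```

Each dynamic property's name becomes a relationship you can wire to the appropriate downstream processor.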
05-17-2017
02:52 PM
@Anthony Murphy This should be fine if you have the appropriate volumes mapping to the specified directories. In this case, you should have three separate Docker volumes mapping your host-based shared location to the three directories in question. This allows the daemon to write the data to the external mappings, freeing it from the container; you can then invoke a new container with the same mappings and it will pick up where things left off. If this is how you are attempting things, please comment with your run command and we can certainly debug why things might be coming up short, but from a quick trial on my system, it looks like things are behaving as anticipated.
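As a sketch of the kind of invocation described, assuming the three directories are NiFi's repositories and that the host and container paths below match your setup (both paths and the image tag are placeholders to adjust):

```shell
docker run -d -p 8080:8080 \
  -v /shared/nifi/flowfile_repository:/opt/nifi/flowfile_repository \
  -v /shared/nifi/content_repository:/opt/nifi/content_repository \
  -v /shared/nifi/provenance_repository:/opt/nifi/provenance_repository \
  apache/nifi
```

Stopping this container and starting a new one with the same -v mappings should resume with the state preserved on the host.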
03-02-2017
02:47 PM
Hi @omer alvi, You are getting an illegal character in the query, which I am assuming is the | (pipe) character. You may need to URL-encode your URL. Luckily, you can achieve this with NiFi Expression Language. Of note is the urlEncode function, with docs available at https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urlencode.
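As a sketch, supposing the query string has been extracted into an attribute hypothetically named query, the URL property of whichever processor makes the request could apply the function like so:

```text
http://example.com/search?q=${query:urlEncode()}
```

The urlEncode function percent-encodes characters such as | that are illegal in a URL.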
03-01-2017
02:27 PM
2 Kudos
Evaluate the MergeContent processor: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.MergeContent/index.html.
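As a sketch of the properties typically involved in a MergeContent configuration (the values below are illustrative, not recommendations for your workload):

```text
Merge Strategy             : Bin-Packing Algorithm
Merge Format               : Binary Concatenation
Minimum Number of Entries  : 100
Maximum Number of Entries  : 1000
Max Bin Age                : 5 min
```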
02-24-2017
03:21 PM
2 Kudos
Templates are owned by a process group (whether that is the root process group or one nested in the canvas). You can upload a template to a particular process group by making use of the '/process-groups/{id}/templates/upload' endpoint.
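As a sketch, the upload is a multipart POST against the REST API; the host, port, process group id, and file name below are placeholders, and the form field name template is my assumption about the expected multipart field:

```shell
curl -X POST \
  -F template=@my-template.xml \
  http://localhost:8080/nifi-api/process-groups/{id}/templates/upload
```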
01-23-2017
07:19 PM
1 Kudo
Yes, this dynamic will not work in the current scenario. There is some work under way, and proposed, to help in these scenarios; you can read about it here: https://cwiki.apache.org/confluence/display/NIFI/Configuration+Management+of+Flows As mentioned before, the only way to accomplish what you are looking for is to perform a docker commit and use that image as the new point of reference any time you want to capture the current state of the instance, though this would include the totality of the running instance. There are also some alternative storage drivers that allow volumes to be shared, such as Flocker, which are covered in the previously linked Docker documentation: https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-shared-storage-volume-as-a-data-volume
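As a sketch of the commit approach (the container name and image tag below are placeholders):

```shell
# Capture the running container's current state as a new image
docker commit nifi-container my-nifi-snapshot:2017-01-23

# Later, start a fresh container from the snapshot instead of the original image
docker run -d -p 8080:8080 my-nifi-snapshot:2017-01-23
```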