Member since: 01-11-2016
Posts: 355
Kudos Received: 230
Solutions: 74
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7876 | 06-19-2018 08:52 AM |
| | 2972 | 06-13-2018 07:54 AM |
| | 3411 | 06-02-2018 06:27 PM |
| | 3606 | 05-01-2018 12:28 PM |
| | 5095 | 04-24-2018 11:38 AM |
01-06-2023
01:17 AM
@shekabhi, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
11-21-2022
07:08 AM
Did anyone find a fix for this issue? I am encountering it as well. I am using NiFi 1.16.1, and the nifi-app.log is exceeding the 100 MB maxFileSize in the settings.
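For anyone hitting the same symptom: NiFi's application log rotation is controlled by conf/logback.xml (not nifi.properties). Below is a sketch of the relevant appender section as it typically ships; the exact file name pattern and limits in your install may differ, so treat the values as illustrative and compare against your own logback.xml:

```xml
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <!-- maxFileSize caps each rolled file; maxHistory caps how many are kept -->
        <maxFileSize>100MB</maxFileSize>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
</appender>
```

If the active nifi-app.log grows past maxFileSize without rolling, it is worth checking that the rollingPolicy element is present and that the NiFi process has write permission to create the rolled files.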
11-02-2022
02:21 AM
I have seen that you saved the MapRecord as a string. By mistake, I also saved it as a string due to a wrong schema. My record looks like this; any idea how I can convert it back to a MapRecord from this string format? "[MapRecord[{name=John Doe, age=21, products=[Ljava.lang.Object;@f8495e3, type=end-user, description=This is an end user}]]" Thanks!
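Not a NiFi API answer, but a hedged observation: the quoted text is just Java's toString() output, so nested values like products=[Ljava.lang.Object;@f8495e3 are unrecoverable (only the array's identity hash was written, not its contents). For the scalar fields, a rough parse is possible; the Python below is an illustration using the record string from the post, not anything NiFi provides:

```python
import re

def parse_maprecord(s):
    """Pull key=value pairs out of a Java MapRecord toString() dump.

    Values that were serialized as object references (e.g.
    '[Ljava.lang.Object;@f8495e3') are kept verbatim -- their real
    contents were lost when the record was written as a string.
    """
    body = re.search(r"MapRecord\[\{(.*)\}\]", s)
    if not body:
        return {}
    fields = {}
    # Split on ', ' only when followed by 'key=', so values
    # containing spaces or commas-in-references survive.
    for pair in re.split(r", (?=\w+=)", body.group(1)):
        key, _, value = pair.partition("=")
        fields[key] = value
    return fields

record = ("[MapRecord[{name=John Doe, age=21, "
          "products=[Ljava.lang.Object;@f8495e3, type=end-user, "
          "description=This is an end user}]]")
print(parse_maprecord(record))
```

If the products array content matters, the only real fix is to re-run the flow with a corrected schema so the record is never flattened to a string in the first place.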
10-04-2021
05:24 AM
Hello Team, @ahadjidj Running the suggested command mvn clean install -Pinclude-atlas -DskipTests against the pom.xml file located here: /work/nar/framework/nifi-framework-nar-1.14.0.nar-unpacked/META-INF/maven/org.apache.nifi/nifi-framework-nar/pom.xml I get these messages: [WARNING] The requested profile "include-atlas" could not be activated because it does not exist. [ERROR] Failed to execute goal org.apache.nifi:nifi-nar-maven-plugin:1.3.1:nar (default-nar) on project nifi-evtx-nar: The plugin org.apache.nifi:nifi-nar-maven-plugin:1.3.1 requires Maven version 3.1.0 -> [Help 1] Could you please provide any hint? We need to introduce lineage into Atlas with information coming from NiFi, so this area is currently a priority and we are stuck on it. Thanks a lot! Daniele.
09-07-2020
12:02 AM
We have similar requirements. We ran a small POC which worked very well, but when we started assessing NFRs we hit this bottleneck. The issue: 1. We have a clustered environment with three nodes. 2. To test, we built a set of processors and ran a flow, then stopped one of the processors from the primary node. The change was reflected correctly on all nodes. 3. We then shut down the primary node from which we had run the flow. 4. Previously we could see the stuck/queued message replicated on all non-primary nodes; as soon as the primary node went down, the other nodes no longer showed that queued message. 5. When we brought the former primary node back up, everything looked good again. Is there any plan to support this HA scenario in Apache NiFi? https://cwiki.apache.org/confluence/display/NIFI/High+Availability+Processing Please suggest.
07-16-2020
11:27 AM
Hello @sherrine As mentioned by a couple of other users as well, NiFi operates in a different playground and comes with a few limitations. Below I have made a comparison between StreamSets and Talend. Some other tools you might also consider include Fivetran, Sprinkle Data and Matillion.
04-09-2020
01:29 PM
You connect to the H2 database like any other database, using a JDBC connection. Here is the link where you can find the H2 driver and documentation: www.h2database.com/html/download.html By the way, I have also uploaded a video explaining the step-by-step process of connecting to the NiFi H2 database. You can check it out here: https://www.youtube.com/watch?v=tsAR2f4uGK4
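As a rough sketch of what such a JDBC connection looks like (for example in a NiFi DBCPConnectionPool controller service, or any external SQL client): the paths, file names, and version below are illustrative placeholders, not values from the post or a specific install.

```
Database Connection URL     : jdbc:h2:/path/to/database_repository/nifi-flow-audit;AUTO_SERVER=TRUE
Database Driver Class Name  : org.h2.Driver
Database Driver Location(s) : /path/to/h2-<version>.jar
```

The URL form `jdbc:h2:<file path>` opens an embedded H2 database file directly; the AUTO_SERVER=TRUE flag lets a second process connect while the file is already open, which matters if NiFi itself still has the database open.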
10-04-2019
02:46 PM
Hello, I'm looking at your answer 3 years later because I'm in a similar situation :). In my company (a telco) we're planning to use 2 hot clusters with dual ingest because our RTO is demanding, and we're looking for mechanisms to monitor both clusters and keep them in sync. We ingest data in real time with Kafka + Spark Streaming, load it into HDFS, and consume it with Hive/Impala. As a first approach, I'm thinking of running simple counts on the Hive/Impala tables on both clusters every hour or half hour and comparing them. If something is missing from one of the clusters, we will have to "manually" re-ingest the missing data (or copy it with Cloudera BDR from one cluster to the other) and re-process the enriched data. I'm wondering if you have dealt with similar scenarios, or any suggestions you may have. Thanks in advance!
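The hourly-count comparison described above can be sketched as a small reconciliation job. Everything below is hypothetical scaffolding: in practice the per-cluster dictionaries would come from a `SELECT COUNT(*) ... GROUP BY hour` against each cluster's Hive/Impala endpoint; here they are stubbed with static data to show only the comparison logic.

```python
def find_mismatched_hours(counts_a, counts_b):
    """Compare per-hour row counts from two clusters.

    counts_a / counts_b map an hour bucket (e.g. '14:00') to a row
    count. Returns the hours whose counts differ, including hours
    present on only one cluster (missing hour -> count 0).
    """
    hours = set(counts_a) | set(counts_b)
    return {
        h: (counts_a.get(h, 0), counts_b.get(h, 0))
        for h in hours
        if counts_a.get(h, 0) != counts_b.get(h, 0)
    }

# Stub data standing in for the per-cluster COUNT(*) queries.
cluster_a = {"14:00": 1200, "14:30": 980, "15:00": 1105}
cluster_b = {"14:00": 1200, "14:30": 960, "15:00": 1105}

# Hours flagged here are candidates for manual re-ingest or a BDR copy.
print(find_mismatched_hours(cluster_a, cluster_b))
```

Plain counts catch dropped batches but not row-level corruption; if that matters, comparing a per-hour checksum (e.g. a sum of hashed keys) instead of a bare count is a common refinement.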
10-02-2019
07:52 AM
It is absolutely possible to do this. However, some things need to be considered: With a multi-node NiFi cluster, the local storage must be a single location, usually on the primary node. This data will not be local to the rest of the cluster nodes. The location should be separate from the OS partition and the required NiFi repository partitions, to avoid corrupting those partitions if local storage consumes all available space. In past projects I have used the primary node with a separate partition for storing files local to the NiFi primary node. These files are then used outside of NiFi for other purposes. In some projects these files are picked up by NiFi in separate flows and then re-distributed into the cluster for processing across all nodes. The primary use case here was to audit received files directly to disk by Team 1; some time later, Team 2 accesses the files for processing. In this example Team 1 and Team 2 are completely separate, with Security Group based access to NiFi (they cannot see each other's flows).
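The "land on the primary node, then redistribute" pattern described above can be outlined roughly as the following flow; the processor choices are illustrative, not a prescription:

```
ListFile   (Execution: Primary node only; watches the dedicated local-storage partition)
  -> FetchFile
  -> connection with Load Balance Strategy = Round robin   (spreads flowfiles across all nodes)
  -> downstream processing on every node in the cluster
```

Running the listing processor on the primary node only avoids duplicate pickups, while the load-balanced connection (available since NiFi 1.8) moves the actual work off the single node that holds the files.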