Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4285 | 12-03-2018 02:26 PM |
| | 3234 | 10-16-2018 01:37 PM |
| | 4337 | 10-03-2018 06:34 PM |
| | 3196 | 09-05-2018 07:44 PM |
| | 2442 | 09-05-2018 07:31 PM |
07-11-2017
03:11 PM
You currently can't use ConsumeKafkaRecord_0_10 to consume Confluent Avro. Confluent Avro is a special Avro format that contains additional information and cannot be read by regular Avro readers. The master branch of Apache NiFi has support for integration with Confluent: there will be a new "Schema Access Strategy" option, "Confluent Content-Encoded Schema Reference", which will allow the reader to handle the Confluent Avro.
07-10-2017
09:19 PM
This was answered in another question, but posting here for redundancy... In Apache NiFi 1.3.0 this is not possible: you are publishing regular Avro with an embedded schema, and then trying to consume Avro that is expected to be in the Confluent schema format. The master branch of Apache NiFi has integration with the Confluent Schema Registry, along with a new "Schema Write Strategy" option on the AvroRecordSetWriter: "Confluent Schema Registry Reference". When using PublishKafkaRecord_0_10 with an AvroRecordSetWriter configured with "Confluent Schema Registry Reference", you will be publishing Avro that can then be deserialized using the Confluent deserializer.
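As a rough sketch of the consuming side (my own example, not from the original answer; the broker address, Schema Registry URL, and topic name are placeholders, and the Confluent kafka-avro-serializer dependency is assumed to be on the classpath), the Confluent deserializer can then read those records:

import java.util.Collections;
import java.util.Properties;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConfluentAvroConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "confluent-avro-example");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Confluent's deserializer resolves the schema id embedded in each message against the registry
        props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder registry URL

        try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic name
            ConsumerRecords<String, GenericRecord> records = consumer.poll(5000);
            records.forEach(record -> System.out.println(record.value()));
        }
    }
}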
07-10-2017
09:14 PM
The permissions are hierarchical, so everything underneath the process group should inherit the policy you created, unless you create a more specific policy on a component within that process group, in which case the more specific policy takes precedence.
07-07-2017
02:32 PM
In Apache NiFi 1.2.0 and 1.3.0 (HDF 3.0.0) there is a ConvertRecord processor that can convert between any combination of Avro, JSON, and CSV.
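For example, one possible ConvertRecord configuration for going from CSV to Avro (the particular reader/writer choice here is just an illustration) would be:

ConvertRecord
  Record Reader  -> CSVReader controller service (configured with the record's schema)
  Record Writer  -> AvroRecordSetWriter controller service (pointed at the same schema)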
07-06-2017
12:44 AM
In addition to what Matt suggested, make sure the 4 new nodes can reach the 4 original nodes using the hostnames you use to access the UI of the original nodes. If you have nifi-original-1, nifi-original-2, nifi-original-3, and nifi-original-4, you would want to SSH to nifi-new-1 (and each of the other new nodes) and make sure you can ping the 4 original hostnames.
07-05-2017
02:26 PM
This error shows that one of your "List" processors is starting and attempting to migrate its state from the old way of storing state, in the DistributedMapCache, to the new way, in the StateManager (ZooKeeper or the local write-ahead log). The error occurs because the processor has a DistributedMapCacheClient configured and the client can't connect to the DistributedMapCacheServer. If you have state that you believe needs to be migrated, ensure the DistributedMapCacheServer is correctly configured and running, and ensure the DistributedMapCacheClient is configured with the correct host and port for where the server is running. If you don't have old state that needs to be migrated, just change your "List" processor so that it doesn't use a DistributedMapCacheClient and it will skip over this migration. The cache client was only required in older versions of NiFi, before the internal StateManager was introduced.
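If you do need the migration, the client side is typically just a matter of pointing the client service at the running server, along these lines (4557 is the server's default port; your values may differ):

DistributedMapCacheClientService
  Server Hostname -> host where the DistributedMapCacheServer service is running
  Server Port     -> 4557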
06-29-2017
05:06 PM
2 Kudos
The reason this isn't working is that AbstractDatabaseFetchProcessor is in nifi-standard-nar, which is not on the classpath of your custom NAR at runtime. You could add a NAR dependency in your NAR's pom on nifi-standard-nar (you already have the dependency, but it's currently marked provided):

<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-standard-nar</artifactId>
    <version>1.2.0</version>
    <type>nar</type>
</dependency>

However, the better approach here would be to refactor things so that AbstractDatabaseFetchProcessor lived in a utility JAR that could be re-used by both the standard NAR and your NAR. There could be a nifi-db-utils module here: https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-extension-utils That would be a cleaner approach and follow the pattern used for other abstract processors.
06-28-2017
07:48 PM
1 Kudo
You have to implement a custom reporting task that pages through the events and does something with them. A ReportingTask has access to a ReportingContext, which exposes an EventAccess object with a method for paging through the flow change events. Here is an example: https://github.com/bbende/incubator-atlas/blob/NIFI/addons/nifi-bridge/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/reporting/atlas/AtlasReportingTask.java#L117-L139
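As a rough sketch (the class name and paging details below are my own, not taken from the linked example), the loop looks something like this:

import java.util.List;

import org.apache.nifi.action.Action;
import org.apache.nifi.reporting.AbstractReportingTask;
import org.apache.nifi.reporting.EventAccess;
import org.apache.nifi.reporting.ReportingContext;

public class FlowChangeLoggingTask extends AbstractReportingTask {

    // last action id seen; a real task would persist this via the StateManager
    private int lastActionId = 0;

    @Override
    public void onTrigger(final ReportingContext context) {
        final EventAccess eventAccess = context.getEventAccess();
        final int pageSize = 100;

        List<Action> actions = eventAccess.getFlowChanges(lastActionId + 1, pageSize);
        while (actions != null && !actions.isEmpty()) {
            for (final Action action : actions) {
                // do something with each flow change event
                getLogger().info("Flow change {} on component {}",
                        new Object[] {action.getOperation(), action.getSourceId()});
                lastActionId = Math.max(lastActionId, action.getId());
            }
            // fetch the next page until no more events are returned
            actions = eventAccess.getFlowChanges(lastActionId + 1, pageSize);
        }
    }
}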
06-28-2017
05:07 PM
1 Kudo
When a processor is started it shows green, which means it is scheduled to run according to its scheduling strategy; when it's stopped it shows red, which means it is not scheduled to run. So even if a processor is on a CRON schedule for once a day, it will be green all the time because it's still scheduled; it will only be red if you specifically stop the processor. When the processor is triggered by a CRON schedule, it doesn't run for a certain amount of time, it runs once (one call to the processor's onTrigger), so what happens depends on what that call to onTrigger does. GetHDFS has a Batch Size property that specifies how many files to pull in one execution, so you would need your batch size to be greater than however many files will be in the directory, so that it can grab them all in one execution. Alternatively, ListHDFS should list all files that are new since the last listing, so you could use ListHDFS + FetchHDFS.
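For example, a once-a-day pull at 2 AM (the time is just an illustration; NiFi uses Quartz-style cron expressions with a seconds field) would be configured on the processor as:

Scheduling Strategy: CRON driven
Run Schedule: 0 0 2 * * ?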
06-28-2017
04:00 PM
1 Kudo
Using a CRON schedule means the framework will trigger the processor to run once at the specified time, meaning the onTrigger method of the processor will be executed once; the processor does not remain running. CRON is really intended for source processors, to schedule pulling data from somewhere at a specified time. Processors in the middle of the flow should typically be Timer driven with a run schedule of 0.