Member since
06-26-2015
505
Posts
129
Kudos Received
114
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 781 | 09-20-2022 03:33 PM |
| | 2522 | 09-19-2022 04:47 PM |
| | 1439 | 09-11-2022 05:01 PM |
| | 1522 | 09-06-2022 02:23 PM |
| | 2334 | 09-06-2022 04:30 AM |
07-13-2022
05:16 AM
Hello Matt, Thank you! This solved the error (now I'm facing another one, but I'll figure it out 🙂 ). For future reference, I had to configure these three lines in nifi.properties:

nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)
nifi.security.identity.mapping.transform.dn=NONE
nifi.security.identity.mapping.value.dn=$1@$2

Thanks, Vince.
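The identity mapping in the post above can be sketched in Python. This is only an illustration of what the pattern/value pair does, not NiFi's actual mapping code; the DN string is a made-up example, and a `$` anchor is added here so the second group captures through the end of the DN.

```python
import re

# Pattern mirrors nifi.security.identity.mapping.pattern.dn from the post;
# the "$" anchor is added for this demo so the non-greedy group reaches the end.
pattern = r"^CN=(.*?), OU=(.*?)$"

# Hypothetical certificate DN for illustration.
dn = "CN=vince, OU=nifi"

m = re.match(pattern, dn)
# Mirrors nifi.security.identity.mapping.value.dn=$1@$2
mapped = f"{m.group(1)}@{m.group(2)}"
print(mapped)  # vince@nifi
```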
07-12-2022
07:28 PM
1 Kudo
Thank you so much for the help! this solved my problem
07-08-2022
11:47 AM
@Luwi An "active content claim" is any content claim where a FlowFile still exists that references bytes of content in that claim. A NiFi content claim file can contain the content for one to many FlowFiles, so all it takes is one small FlowFile still queued in some connection anywhere on your NiFi canvas to prevent a content claim from becoming eligible to be moved to archive. This is why the total content reported as queued on your canvas will never match the disk usage in your content_repository. This article is useful for understanding the process in more detail: https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418 Thank you, Matt
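The "one FlowFile keeps the whole claim active" behavior described above can be sketched in Python. This is a hedged model, not NiFi's implementation: claim names and FlowFile IDs are made up, and a claim is treated as archive-eligible only when no FlowFile references it.

```python
# Model: each content claim holds bytes for one-to-many FlowFiles.
# A claim can be archived only when NO FlowFile references it anymore.
claims = {
    "claim-1": {"flowfile-a", "flowfile-b"},  # two FlowFiles share this claim
    "claim-2": set(),                          # no references left
}

def archive_eligible(flowfile_refs):
    # A single queued FlowFile anywhere keeps the whole claim active.
    return len(flowfile_refs) == 0

eligible = [name for name, refs in claims.items() if archive_eligible(refs)]
print(eligible)  # ['claim-2']
```

This is also why queued size and content_repository disk usage diverge: claim-1 may hold far more bytes than the one small FlowFile still pinning it.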
07-07-2022
03:42 PM
1 Kudo
@linssab , You are probably running into the issue described in NIFI-9241, which was fixed in NiFi 1.15. Cheers, André
07-07-2022
08:51 AM
2 Kudos
@sayak17 If you simply want to GET from a REST API endpoint and write the response to a local file on the server where your NiFi service is running, use the InvokeHTTP processor and feed its response to a PutFile processor. If you found this response helped with your query, please take a moment to log in and click "Accept as Solution" below this post. Thank you, Matt
07-07-2022
04:14 AM
Hello @stale , Have you already fixed this issue? I am facing the same problem with the same version, 7.6.5, on a Kerberized cluster.
07-06-2022
01:54 AM
1 Kudo
@araujo The next step is to use an InvokeHTTP processor to update the product stock via a REST endpoint.
07-05-2022
10:50 PM
1 Kudo
Issue created: https://issues.apache.org/jira/browse/NIFI-10197
07-05-2022
11:33 AM
1 Kudo
@pk87 Also consider that the timer-driven strategy may not always give you exact 60-minute scheduling. With timer driven, the component is scheduled when it starts and then again the configured amount of time later; a NiFi restart, or stopping and starting the processor, resets this timer. If you need a component to be scheduled consistently every x amount of time, use the cron-driven scheduling strategy instead, which lets you set a specific schedule. Thanks, Matt
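For example, assuming the Quartz-style cron syntax NiFi's cron-driven strategy uses (six fields, seconds first), an expression that fires at the top of every hour regardless of restarts would look like:

```
0 0 * * * ?
```

Here the fields are seconds, minutes, hours, day-of-month, month, and day-of-week, so this reads "at second 0, minute 0, of every hour"; `?` means "no specific day-of-week". Treat the exact expression as an illustrative sketch and verify it against your NiFi version's documentation.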
07-05-2022
05:17 AM
@dida The following connector configuration worked for me. My schema was stored in Schema Registry and the connector fetched it from there.

{
  "connector.class": "com.cloudera.dim.kafka.connect.hdfs.HdfsSinkConnector",
  "hdfs.output": "/tmp/topics_output/",
  "hdfs.uri": "hdfs://nn1:8020",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "name": "asd",
  "output.avro.passthrough.enabled": "true",
  "output.storage": "com.cloudera.dim.kafka.connect.hdfs.HdfsPartitionStorage",
  "output.writer": "com.cloudera.dim.kafka.connect.hdfs.parquet.ParquetPartitionWriter",
  "tasks.max": "1",
  "topics": "avro-topic",
  "value.converter": "com.cloudera.dim.kafka.connect.converts.AvroConverter",
  "value.converter.passthrough.enabled": "false",
  "value.converter.schema.registry.url": "http://sr-1:7788/api/v1"
}

Cheers, André
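To submit a configuration like the one above, the Kafka Connect REST API expects a JSON body with a `name` and a `config` object (POST to `/connectors`). The Python sketch below only builds and round-trips that body; the abbreviated config and variable names are illustrative, taken from the post rather than a complete working setup.

```python
import json

# Wrap (a subset of) the connector config from the post in the shape the
# Kafka Connect REST API expects: {"name": ..., "config": {...}}.
connector = {
    "name": "asd",
    "config": {
        "connector.class": "com.cloudera.dim.kafka.connect.hdfs.HdfsSinkConnector",
        "hdfs.uri": "hdfs://nn1:8020",
        "hdfs.output": "/tmp/topics_output/",
        "topics": "avro-topic",
        "tasks.max": "1",
    },
}

body = json.dumps(connector)          # serialize for the POST request body
parsed = json.loads(body)             # round-trip to verify it is valid JSON
print(parsed["config"]["topics"])     # avro-topic
```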