Member since 01-09-2018 | 33 Posts | 3 Kudos Received | 0 Solutions
04-16-2023 05:40 AM
Hello! I'm having a timeout problem when sending data to Kafka. I have a Docker environment with three containers: NiFi, Kafka, and Kafdrop. I'm using NiFi's PublishKafka_1_0 processor (NiFi 1.21.0) to send data to Kafka, but it never arrives. I already tried increasing the 'Max Metadata Wait Time' property to 30 sec, without success. Because I thought it could be an authorization problem on the Kafka host/topic, I also ran the following command, also unsuccessfully:

./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:ANONYMOUS --operation Read --operation Write --operation Describe --topic topic1

I don't know what else to try. Help me.
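For reference, the ACLs can be listed back to verify they were actually applied (a sketch, assuming the same zookeeper.connect address as in the command above):

./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --list --topic topic1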
12-23-2020 06:15 AM
@te04_0172 It appears you have hit a known issue: https://issues.apache.org/jira/browse/NIFI-7954 and https://issues.apache.org/jira/browse/NIFI-7831. It looks like these will be addressed in Apache NiFi 1.13. These fixes have already been incorporated into the Cloudera HDF 3.5.2 release that is currently available. Hope this helps, Matt
11-13-2019 06:48 AM
After we load over 100 million notes into HBase, I will be using NiFi to listen to a live HL7 feed to keep the data current. Some of these HL7 messages are delete messages, and the corresponding rows need to be removed from HBase.
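For illustration, each such delete amounts to removing the whole row, e.g. via the HBase shell (a sketch; the table name 'notes' and the row key format here are hypothetical):

deleteall 'notes', 'patient123|note456'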
09-26-2018 05:28 AM
1 Kudo
@Faisal Durrani SMM is an app running on DataPlane Service (DPS) and operates on top of the platform. DataPlane serves as a management layer across clusters, on-premises or in the cloud. The DPS apps are:
- Data Lifecycle Manager
- Data Steward Studio
- Streams Messaging Manager
- Data Analytics Studio
Here is a link_to_SMM to the procedure to install SMM and the other components. Remember you MUST have an HDP or HDF cluster to deploy DPS components like SMM. HTH
07-11-2018 09:01 AM
@Faisal Durrani Use an UpdateRecord processor before the PutHBaseRecord processor to create a new field, i.e. one concatenated from the PKs; then, in the PutHBaseRecord processor's Record Reader, add the newly created field to the Avro schema so that you can use the concatenated field as the row identifier.

/row_id              //newly created field name
concat( /pk1, /pk2 ) //the processor gets the pk1 and pk2 field values from each record, concatenates them, and keeps the result as row_id

By using the UpdateRecord processor we work on chunks of data, which is a very efficient way of updating the contents of a flowfile. For more reference regarding the UpdateRecord processor, follow this link.
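A minimal sketch of the resulting configuration, assuming the records carry string fields pk1 and pk2 (the schema name and field types here are illustrative):

UpdateRecord (Replacement Value Strategy: Record Path Value)
    /row_id = concat( /pk1, /pk2 )

Avro schema for the PutHBaseRecord Record Reader, with the new field added:

{
  "type": "record",
  "name": "example_record",
  "fields": [
    { "name": "pk1", "type": "string" },
    { "name": "pk2", "type": "string" },
    { "name": "row_id", "type": ["null", "string"] }
  ]
}

PutHBaseRecord can then point its Row Identifier Field Name property at row_id.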
04-05-2018 04:59 PM
This is not currently supported, but there is a JIRA for this issue: https://issues.apache.org/jira/browse/NIFI-4487. Part of the issue is that this would only make sense if you are consuming 1 message per flow file, which is generally poor for performance. So what do you do when you consume 10k messages into a single flow file? For ConsumeKafkaRecord, the timestamp could potentially be put into a field in each record, assuming the schema had a timestamp field, but for regular ConsumeKafka there would be no way to handle it.
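To make that last point concrete: ConsumeKafkaRecord could only carry the timestamp if the record schema had somewhere to put it, e.g. a field like this (a hypothetical sketch, not an existing feature):

{
  "type": "record",
  "name": "kafka_message",
  "fields": [
    { "name": "payload", "type": "string" },
    { "name": "kafka_timestamp", "type": { "type": "long", "logicalType": "timestamp-millis" } }
  ]
}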
03-13-2018 01:14 PM
1 Kudo
@Faisal Durrani
1. There can be only one NiFi Certificate Authority. The NiFi CA was provided as a means to quickly and easily create certificates for securing a NiFi cluster for testing/evaluation purposes. We do not recommend using this certificate authority in production environments; in production you should be using a corporately managed certificate authority to sign your servers' certificates. The Certificate Authority (CA) is used to sign the certificates generated for every NiFi instance. The public key for the certificate authority is then placed in a truststore.jks file that is used on every NiFi instance, while the keystore.jks contains a single PrivateKeyEntry unique to each NiFi host.
2. I am not a Solr guy, so I cannot answer authoritatively there. If you have a 3-node ZK cluster set up, that should be fine to support your NiFi cluster. The ZK client is used to communicate with the ZK cluster, so ZK clients would need to be installed on any hosts that will communicate with the ZK cluster (this includes the ZK cluster servers themselves). NiFi does not need a ZK client installed because NiFi includes the ZK client library inside the NiFi application itself. Installing an external ZK client on the same hosts does not affect anything.
Thanks, Matt
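As a quick sanity check, both stores can be inspected with keytool (a sketch, assuming the default file names generated by the NiFi CA):

keytool -list -keystore keystore.jks     # should show exactly one PrivateKeyEntry for this host
keytool -list -keystore truststore.jks   # should show the CA certificate as a trustedCertEntry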
02-27-2018 04:43 AM
1 Kudo
Network issues can certainly be a factor. However, you should also make sure you use the Kafka client that matches your Kafka broker version. Since you're on Kafka 0.11, you might want NiFi 1.5.0 or HDF 3.1.0, which supports that version directly via ConsumeKafka_0_11.
02-14-2018 07:10 AM
Thanks. Can you kindly let me know how I can change the retention period of these repositories? From the nifi.properties file I can see these two properties whose units are a length of time:

nifi.flow.configuration.archive.max.time=30 days
nifi.content.repository.archive.max.retention.period=12 hours
02-15-2018 06:22 AM
Is there any way to increase this window from 5 minutes to, let's say, an hour or more? (A common question would be how many records were processed during a 24-hour window, etc.)