Member since: 04-19-2023
Posts: 22
Kudos Received: 2
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 927 | 05-08-2023 12:30 AM |
| | 2070 | 04-20-2023 07:31 AM |
09-28-2023
06:08 AM
Can NiFi authenticate itself to an external ZooKeeper using a username and password? In the documentation I only found Kerberos and LDAP, but not the usual SASL authentication to ZooKeeper, the way Kafka, for example, authenticates to ZooKeeper from its zookeeper.jaas config. Where and how is this implemented? Or does the ZooKeeper have to be open and visible from the outside?
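For reference, the Kafka-style DIGEST-MD5 approach the question mentions looks roughly like the sketch below. The file path, credentials, and the bootstrap.conf argument number are placeholders, and whether NiFi's ZooKeeper client honors a DIGEST-MD5 JAAS Client section the same way Kafka does is exactly the open question here, not a confirmed feature.

# Hypothetical zookeeper-jaas.conf in the Kafka DIGEST-MD5 style (path and credentials are placeholders)
cat > /opt/nifi/conf/zookeeper-jaas.conf <<'EOF'
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="nifi"
    password="nifi-secret";
};
EOF

# One would then point the NiFi JVM at the JAAS file from bootstrap.conf, e.g.:
# java.arg.20=-Djava.security.auth.login.config=/opt/nifi/conf/zookeeper-jaas.conf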
Labels:
- Apache NiFi
- Apache Zookeeper
09-27-2023
06:45 AM
I deployed NiFi on a cluster of 10 servers. I have 5 external ZooKeepers which are already used successfully by Kafka on the same 10 servers. After starting nifi.service, the web UI shows the message "The Flow Controller is initializing the Data Flow" and never gets past it, and nifi-app.log contains:

2023-09-27 16:37:11,645 INFO [main] o.a.n.c.p.AbstractNodeProtocolSender Cluster Coordinator is located at sd-sagn-rtyev:9082. Will send Cluster Connection Request to this address
2023-09-27 16:37:11,665 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'CONNECTION_REQUEST' protocol message due to: java.net.SocketException: Broken pipe (Write failed)

Help me, please!
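Not an answer, just a diagnostic sketch: the hostname and port below come from the log line above, while the truststore path and whether the cluster protocol port is TLS-secured are assumptions about this environment.

# Verify the coordinator's cluster protocol port is reachable from the node that logs the error:
nc -vz sd-sagn-rtyev 9082

# If the cluster protocol is secured, a TLS handshake check against that port can surface
# certificate problems (the CA file path is a placeholder):
openssl s_client -connect sd-sagn-rtyev:9082 -CAfile /etc/pki/ca.crt </dev/null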
Labels:
- Apache Kafka
- Apache NiFi
- Apache Zookeeper
06-13-2023
01:32 AM
Has anyone tried the flow ListHDFS -- FetchHDFS -- InvokeHTTP? I want to send a Parquet file from HDFS to a database table, but at the moment of loading into the database I get a syntax error. Maybe I'm doing something wrong; please help with the parameters for the database.
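Purely for illustration of what InvokeHTTP is doing on the wire: it issues an HTTP request much like the one below. The endpoint URL, header, and file path are placeholders, and whatever load API the target database exposes would dictate the real values; this is not a statement about any particular database.

# Hypothetical equivalent of an InvokeHTTP POST of the fetched Parquet content:
curl -X POST "https://db.example.local:8443/api/load" \
     -H "Content-Type: application/octet-stream" \
     --data-binary @/tmp/sample.parquet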
Labels:
- Apache NiFi
05-16-2023
03:01 AM
I have two flows:
1. ConsumeKafkaRecord -- MergeRecord -- PutHDFS
2. ConsumeKafkaRecord -- MergeContent -- PutHDFS

When I use flow 1, the files in HDFS at the output are readable by Spark, databases, and Python libraries without problems, but the files are never larger than 200 MB and are all different sizes, even though 500 MB is set; it is never reached. When I use flow 2 with the same size and record-count parameters, the files I get are exactly 500 MB, but these files cannot be opened by Spark, by any database, or by Python libraries. Why? I want a large file that is always 500 MB and that can be read without problems, as in flow 1.

MergeContent settings:
- Merge Strategy: Bin-Packing Algorithm
- Merge Format: Binary Concatenation
- Attribute Strategy: Keep Only Common Attributes
- Correlation Attribute Name: No value set
- Minimum Number of Entries: 10000
- Maximum Number of Entries: 1000000
- Minimum Group Size: 100 MB
- Maximum Group Size: 500 MB
- Max Bin Age: No value set
- Maximum number of Bins: 10
- Delimiter Strategy: Text
- Header: No value set
- Footer: No value set
- Demarcator: \n

MergeRecord settings:
- Record Reader: JsonTreeReader
- Record Writer: ParquetRecordSetWriter
- Merge Strategy: Bin-Packing Algorithm
- Correlation Attribute Name: No value set
- Attribute Strategy: Keep Only Common Attributes
- Minimum Number of Records: 10000
- Maximum Number of Records: 1000000
- Minimum Bin Size: 100 MB
- Maximum Bin Size: 500 MB
- Max Bin Age: No value set
- Maximum Number of Bins: 10
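A likely factor is that Binary Concatenation glues several complete Parquet files together byte for byte, and the result is not itself one valid Parquet file even though each piece was. As a quick sanity check of the two outputs, one could compare the merged file sizes directly in HDFS; the paths below are placeholders for wherever each flow writes.

# Compare the sizes of the merged files produced by each flow (placeholder paths):
hdfs dfs -du -h /data/flow1_mergerecord/
hdfs dfs -du -h /data/flow2_mergecontent/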
Labels:
- Apache NiFi
05-08-2023
12:30 AM
1 Kudo
The problem turned out to be with the Java truststore; the solution was to replace the CA entry in cacerts:

keytool -delete -alias RCA-CA -keystore /usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts -storepass changeit -noprompt
keytool -import -alias RCA-CA -keystore /usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts -file /etc/pki/ca.crt -storepass changeit -noprompt
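To confirm the replacement took effect, one could list the alias afterwards, using the same cacerts path and default changeit password as above:

# Verify the imported CA is now present in the JVM truststore:
keytool -list -alias RCA-CA -keystore /usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts -storepass changeit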
05-04-2023
06:54 AM
I am running a cluster with TLS. These are the commands I used to create the TLS material; perhaps something is missing from them. I have a crt and key from my company:

openssl pkcs12 -export -in /etc/nifi/certs/wibe.t.crt -inkey /etc/nifi/certs/wibe.t.key -out /etc/nifi/certs/pkcs12_file.p12 -name nifi_alias -CAfile /etc/pki/CA.pem -caname root -password pass:qwerty231
keytool -importkeystore -deststorepass "keystore_password" -destkeypass "keystore_password" -destkeystore /etc/nifi/certs/nifi_keystore.jks -srckeystore pkcs12_file.p12 -srcstoretype PKCS12 -srcstorepass "pkcs12_password" -alias nifi_alias
keytool -import -trustcacerts -alias root -file /etc/pki/CA.pem -noprompt -keystore /etc/nifi/certs/nifi_truststore.jks -storepass "truststore_password"

At first I could not get a quorum, then that problem went away and the cluster elected a leader. It works with external ZooKeepers on three servers. There are no errors, only warnings:

2023-05-04 16:06:34,116 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'CONNECTION_REQUEST' protocol message due to: java.net.SocketException: Broken pipe (Write failed)
2023-05-04 16:06:39,119 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at 10.1.4.2:9082; will use this address for sending heartbeat messages
2023-05-04 16:06:39,119 INFO [main] o.a.n.c.p.AbstractNodeProtocolSender Cluster Coordinator is located at 10.1.4.2:9082. Will send Cluster Connection Request to this address
2023-05-04 16:06:39,236 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'CONNECTION_REQUEST' protocol message due to: java.net.SocketException: Broken pipe (Write failed)
2023-05-04 16:06:44,239 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at 10.1.4.2:9082; will use this address for sending heartbeat messages
2023-05-04 16:06:44,240 INFO [main] o.a.n.c.p.AbstractNodeProtocolSender Cluster Coordinator is located at 10.1.4.2:9082. Will send Cluster Connection Request to this address
2023-05-04 16:06:44,373 WARN [Process Cluster Protocol Request-19] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from 10.1.4.2 due to Extended key usage does not permit use for TLS client authentication

The web UI does not start; it hangs on "The Flow Controller is initializing the Data Flow" and goes no further. The status of NiFi and ZooKeeper is active on all servers, and ZooKeeper is in quorum.
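Given the "Extended key usage does not permit use for TLS client authentication" warning, one thing worth checking is whether the node certificate actually carries the clientAuth extended key usage, since nodes in a NiFi cluster act as both TLS server and TLS client toward each other. A quick inspection of the crt used above, assuming the same path:

# Show the Extended Key Usage section of the node certificate; for cluster use it should list
# both "TLS Web Server Authentication" and "TLS Web Client Authentication".
openssl x509 -in /etc/nifi/certs/wibe.t.crt -noout -text | grep -A 2 "Extended Key Usage"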
Labels:
- Apache NiFi
- Apache Zookeeper
05-02-2023
11:18 PM
How does compression work in NiFi? I have ConsumeKafkaRecord (JsonTreeReader + ParquetRecordSetWriter) -- MergeContent -- PutHDFS (no compression). At my output, the Parquet files are readable by Spark. The problem is that the reduction compared to the JSON is only about 2x, but when I apply Snappy compression, the output file namefile.parquet.snappy shrinks 5-6x; however, that file either cannot be opened by Spark at all, or it opens and the structure is no longer as good as it was in step 1. How can I keep the same structure as in step 1 and still get 5-6x compression?
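One way to see the difference between compressing the whole file and Parquet's own internal Snappy compression (where data pages are compressed but the file stays readable) is the file framing: a Parquet file that Spark can read starts and ends with the 4-byte marker PAR1, while a file that was Snappy-compressed as a whole no longer does. The HDFS path below is a placeholder for the actual output location.

# Inspect the first and last 4 bytes of the output file (placeholder path);
# a readable Parquet file shows "PAR1" at both ends.
hdfs dfs -cat /data/output/namefile.parquet.snappy | head -c 4; echo
hdfs dfs -cat /data/output/namefile.parquet.snappy | tail -c 4; echo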
Labels:
- Apache Kafka
- Apache NiFi
04-28-2023
05:05 AM
Another interesting point: how can I implement, with UpdateAttribute, a check of whether the file was actually delivered by PutHDFS? If HDFS runs out of space, the flow keeps running; the files are not written but are dropped from the queue and routed elsewhere, so in effect files are lost when space runs out and the chain does not stop. I need to check whether the file arrived in HDFS and stop the flow if it did not, or, when space in HDFS runs out, stop PutHDFS and let the queue fill up.
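No claim here that NiFi provides this out of the box; as an external workaround one could poll HDFS capacity from a script and react, for example by alerting or stopping the flow through other means. A minimal sketch, with the 90% threshold as an arbitrary placeholder:

# Report remaining HDFS capacity and warn when usage crosses a placeholder threshold of 90%.
USED_PCT=$(hdfs dfs -df / | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "${USED_PCT}" -ge 90 ]; then
  echo "HDFS is ${USED_PCT}% full - consider stopping PutHDFS before files are lost"
fi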
04-28-2023
03:17 AM
Is it possible to run three consumer processors against Kafka so that the output data is not duplicated?
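For context, duplicates usually come down to consumer group membership: consumers that share one group.id split the topic's partitions between them instead of each receiving everything. One way to see how partitions are currently assigned; the bootstrap server and group name below are placeholders:

# Show which consumer instance owns which partition for a given consumer group.
kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group nifi-consumer-group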
04-27-2023
10:20 PM
Yes, it helped me. It's a pity that there is no built-in functionality for this in PutHDFS.