Member since: 03-01-2020
Posts: 15
Kudos Received: 0
Solutions: 0
02-17-2023
11:57 PM
Hi, I would like to implement a Kafka exporter in my Cloudera environment in order to collect and present the required data in Prometheus. On my Cloudera Kafka cluster both Kerberos and TLS are enabled. I tried to set up the following Kafka exporter, https://github.com/danielqsj/kafka_exporter, but I got a little lost with the configuration, and it does not produce much in the way of logs. Has anyone implemented such an exporter that supports both TLS and Kerberos and can share the configuration?
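If it helps others, here is a minimal sketch of what such an invocation might look like, assuming the TLS and SASL/GSSAPI flags listed in the kafka_exporter README. The broker host, keytab path, and realm below are hypothetical placeholders; verify the exact flag names against kafka_exporter --help for your version:

    # Hypothetical invocation; flag names per the kafka_exporter README -- verify with --help.
    # broker1.example.com:9093 is a placeholder for your TLS listener.
    kafka_exporter \
      --kafka.server=broker1.example.com:9093 \
      --tls.enabled \
      --tls.ca-file=/path/to/ca.pem \
      --sasl.enabled \
      --sasl.mechanism=gssapi \
      --sasl.service-name=kafka \
      --sasl.kerberos-config-path=/etc/krb5.conf \
      --sasl.kerberos-auth-type=keytabAuth \
      --sasl.keytab-path=/path/to/exporter.keytab \
      --sasl.realm=EXAMPLE.COM \
      --log.level=debug

Turning the log level up to debug while troubleshooting should at least surface connection errors.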
Labels:
Apache Kafka
01-28-2022
12:54 AM
Hello, I'm a learner and I would like to use the method you mentioned here to collect logs from a remote server and send them to NiFi. Please, can you walk me through it? I have been struggling with how to build an MSI before the real implementation. Thank you so much.
10-21-2021
10:30 AM
@dzbeda, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
04-18-2021
08:09 PM
Hello, you may refer to this thread for the monitoring options for NiFi: https://community.cloudera.com/t5/Support-Questions/NIFI-Monitoring-processor-and-nifi-Service/td-p/128485
03-24-2021
06:09 AM
@dzbeda Have you tried specifying the index in your configured Query within the GetSplunk NiFi processor component? NiFi is not going to be able to provide you a list of indexes from your Splunk to choose from; you would need to know what indexes exist on your Splunk server. Hope this helps, Matt
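As an illustration, a Query value for GetSplunk might look like the following (the index and sourcetype names are hypothetical; substitute an index that actually exists on your Splunk server):

    search index=my_index sourcetype=my_sourcetype earliest=-15m latest=now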
12-09-2020
10:16 AM
Look at every processor with *Record in the name; they can handle formats like CSV, JSON, Parquet, Avro, and logs. https://www.datainmotion.dev/2020/07/ingesting-all-weather-data-with-apache.html
12-07-2020
11:28 AM
I recommend working from the console rather than using NiFi, because NiFi generates additional logs. It is better to split the files you want to move into appropriately sized chunks, compress them, and send them from the console. For HDFS, you can use the distcp command: https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html
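For example, a basic distcp run between two clusters might look like this (the NameNode hosts and paths are placeholders for your environment):

    hadoop distcp hdfs://source-nn:8020/data/source hdfs://target-nn:8020/data/target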
12-06-2020
10:50 PM
1 Kudo
@dzbeda Try it with: /*:Event/*:System/*:Channel
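For context, that XPath uses the *: namespace wildcard so it matches the elements regardless of the Windows event XML namespace. An illustrative (trimmed) Windows event payload it would match:

    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <!-- other System children omitted -->
        <Channel>Security</Channel>
      </System>
    </Event>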
12-01-2020
06:53 AM
1 Kudo
@dzbeda Seeing "localhost" in your shared log output points to what may be the issue. When you configure the URL in the Remote Process Group (RPG), it tells that NiFi RPG to communicate with that URL to fetch the Site-To-Site (S2S) details. Included in those returned details are things like:
- the number of nodes in the target NiFi cluster (a standalone instance returns only one host)
- the hostnames of those node(s) (in this case it looks like localhost is being returned)
- the configured RAW port, if configured
- whether the HTTP transport protocol is enabled
- etc.
So when your RPG actually tries to send FlowFiles over S2S, it sends to localhost, which points back at itself rather than the actual target Linux NiFi it fetched the S2S details from. When some properties are left unconfigured, NiFi returns whatever the OS resolves, so I am going to guess your Linux server is returning localhost rather than the actual hostname. You will want to verify your S2S configuration setup in the target NiFi (Linux server): http://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#site_to_site_properties Try setting "nifi.remote.input.host" to see if that helps, as in the sketch below. Hope this helps, Matt
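A minimal sketch of the relevant nifi.properties entries on the target (Linux) NiFi, assuming an unsecured setup; the hostname and port below are hypothetical, so adjust them and the secure flag to match your environment:

    # nifi.properties on the target NiFi (hostname below is a placeholder)
    nifi.remote.input.host=nifi-target.example.com
    nifi.remote.input.secure=false
    nifi.remote.input.socket.port=10000
    nifi.remote.input.http.enabled=true

After changing these properties, restart the target NiFi so the new S2S details are returned to the RPG.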