We are looking for a utility or tool to read data from HDFS and publish it to a Kafka topic. Appreciate your inputs.
From the community section, we came across this suggestion: "You could use Apache NiFi with a ListHDFS + FetchHDFS processor followed by PublishKafka". Can you provide more insight into how this can be achieved?
We already have the data in HDFS and want to pull it from there and put it in a Kafka topic.
So we are looking for a source connector here: something that pulls data from HDFS and places it in Kafka.
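For reference, the NiFi flow quoted above needs only three processors wired together. A rough property sketch is below; the directory, config paths, broker address, and topic name are placeholders, and exact property names can vary slightly between NiFi versions:

```
ListHDFS
  Hadoop Configuration Resources : /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
  Directory                      : /data/incoming        (HDFS directory to watch)
      |
      v  success
FetchHDFS
  HDFS Filename                  : ${path}/${filename}   (attributes set by ListHDFS)
      |
      v  success
PublishKafka
  Kafka Brokers                  : broker1:9092
  Topic Name                     : my-topic
```

ListHDFS only lists new files (it keeps state), FetchHDFS pulls the content, and PublishKafka sends each flowfile's content to the topic.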
Hello @sriven ,
- As @Daming Xue mentioned, Kafka Connect is a good option; the doc https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/kafka-connect/kafka-connect.pdf shares an example of HDFS as a sink connector.
- Flume (CDH)
- Kafka-Hive Integration (https://docs.cloudera.com/cdp-private-cloud-base/7.1.5/integrating-hive-and-bi/topics/hive-kafka-int...)
- To try it out quickly (for testing purposes), you can use the console producer:
hadoop fs -cat file.txt | kafka-console-producer --broker-list <host:port> --topic <topic>
- Spark (which you do not want)
These are some I could quickly think of, there must be many more options.
Thanks & Regards,
P.S. If you found this answer useful please upvote/accept.
We have a requirement where Spark programs are writing files into HDFS.
So we want to read those files and send them to Kafka.
We know the HDFS sink connector is useful for writing to HDFS, and the HDFS source connector is useful only when the files were written by the HDFS sink connector.
So the HDFS source connector is not a solution when the files are written by Spark programs.
Please let us know if there is any solution for this requirement.
What is the file format?
Why do you say the HDFS source connector is not a solution when the files are written by Spark programs?
Spark - HDFS - Kafka is your entire flow, correct?
Spark to HDFS is already done; now you are looking for HDFS - Kafka.
If you can help me understand the file format Spark saves in, I can check whether the HDFS source connector can help your use case.
Please let me know if it helps.
Thanks & Regards,
How do we read Parquet files using Kafka Connect?
Put simply, we just want to read the Parquet files on HDFS using Kafka Connect, without Spark jobs.
Please let us know whether there is a solution.
As you know, we have a limitation with the Kafka source connector: it works only for HDFS objects/files created by the HDFS 2 Sink Connector for Confluent Platform.
So how can we pull files created by Spark, MapReduce, or any other jobs on HDFS?
The use case of the HDFS source connector is only to mirror that same data back to Kafka.