Created 01-27-2017 04:33 PM
Hi All,
I'm new to the Hadoop world and have a general question about how others are storing data from Spark Streaming jobs. I'm working on a proof of concept that uses Spark Streaming to consume data from Kafka and do a streaming ETL job, processing and storing data in near-real time. Along the way I want to persist the data at different stages of the transformation and also do lookups against other tables. A basic example would be: take a record, check whether it already exists in the data store (which I originally thought might be a Hive table), and insert it if it doesn't.

I've looked at Hive Streaming, but I don't see any discussion of Spark Streaming integration, and everything I've read about inserting into Hive warns that creating many small files causes problems. My question is: what are other people doing to store their data from Spark Streaming? Should I be using HBase or something else for this instead of Hive?

Thanks in advance for your responses.
Created 01-27-2017 05:11 PM
HBase works for your use case:
1. You need to quickly write streaming data coming in at high velocity.
2. You need to perform random lookups against the dataset you are writing to.
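To make the "check if it exists, insert if it doesn't" step concrete, here is a rough sketch of writing a Kafka-fed DStream to HBase from Spark Streaming. This is only an illustration, not a tested implementation: it assumes a running HBase cluster, a pre-created table named "events" with column family "d", and a `stream` of `(rowKey, value)` string pairs already built from Kafka (all of those names are made up for the example). The key idea is `checkAndPut`, which does the existence check and the insert in a single atomic round trip, so you don't need a separate lookup.

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.streaming.dstream.DStream

def saveToHBase(stream: DStream[(String, String)]): Unit = {
  stream.foreachRDD { rdd =>
    rdd.foreachPartition { records =>
      // Open one connection per partition on the executor:
      // HBase connections are not serializable, so they can't
      // be created on the driver and shipped with the closure.
      val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
      val table = conn.getTable(TableName.valueOf("events"))
      try {
        records.foreach { case (rowKey, value) =>
          val put = new Put(Bytes.toBytes(rowKey))
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"),
            Bytes.toBytes(value))
          // Passing null as the expected value makes checkAndPut
          // succeed only when the cell does not exist yet, i.e.
          // "insert if absent" without a separate Get.
          table.checkAndPut(Bytes.toBytes(rowKey), Bytes.toBytes("d"),
            Bytes.toBytes("v"), null, put)
        }
      } finally {
        table.close()
        conn.close()
      }
    }
  }
}
```

Because row keys are indexed, the same table also serves the random-lookup side of the use case with `table.get(new Get(rowKey))`, which Hive can't do efficiently.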
Created 01-27-2017 09:40 PM
Thank you Binu, I was thinking that was probably the answer, but I was hoping there was a way to get Hive to work for me. Now, off to figure out HBase......