I'm new to the Hadoop world and I have a general question about how others are storing data from Spark Streaming jobs. I'm working on a proof of concept that uses Spark Streaming to consume data from Kafka and run a streaming ETL job, processing and storing the data in near-real time. Along the way I want to persist the data at different stages of the transformation and also do lookups against other tables.

A basic example would be: take a record, check whether it already exists in the data store (which I originally thought might be a Hive table), and insert it if it doesn't. I've looked at Hive Streaming, but I don't see any discussion of Spark Streaming integration, and everything I've read about inserting into Hive warns that creating many small files causes problems.

My question is: what are other people doing to store their data from Spark Streaming? Should I be using HBase or something else for this instead of Hive?

Thanks in advance for your responses.
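To make the check-then-insert step concrete, here's a minimal sketch in plain Python of the logic I'd be running per record inside the streaming job. A dict stands in for the real data store, and the names `store`, `key_of`, and `insert_if_absent` are hypothetical, not any real Hive or HBase API:

```python
# Sketch of the check-then-insert step described above.
# A plain dict stands in for the real data store (Hive/HBase);
# `store`, `key_of`, and `insert_if_absent` are hypothetical names.

def key_of(record):
    """Derive the lookup key for a record; here, a hypothetical 'id' field."""
    return record["id"]

def insert_if_absent(store, record):
    """Insert the record only if its key is not already present.

    Returns True if the record was inserted, False if it already existed,
    so replaying the same record twice stores it only once (idempotent).
    """
    k = key_of(record)
    if k in store:
        return False
    store[k] = record
    return True

# Usage: the second attempt with the same key is a no-op.
store = {}
insert_if_absent(store, {"id": 1, "value": "a"})   # inserted
insert_if_absent(store, {"id": 1, "value": "a"})   # already exists, skipped
```

The question is really about what the `store` should be in practice, given that each micro-batch would be doing many of these lookups and inserts.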