
Newbie question about how Hortonworks works with transaction log data, HVR software, and Database Agent version

New Contributor

Hello, I am new to Hortonworks. My company is in the process of setting up a Hortonworks environment, and we are looking at how it integrates with our existing HVR transaction-log replication solution (http://www.hvr-software.com/). My manager has tasked me with finding out how Hortonworks handles transaction-type data, and also asked me to find out about the Database Agent version. Neither he nor I have much knowledge of Hortonworks, so any guidance or information would be greatly appreciated. Thanks!

1 ACCEPTED SOLUTION

Master Guru

You can find information on the included products on the homepage, http://hortonworks.com/, by clicking on Products.

In general you might find the following tools interesting:

HBase: A NoSQL store that can handle huge data volumes and is good for user transactions (a bit like a simpler OLTP database on speed). The API is a simple put/get/scan interface.
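
To give you a feel for that API, here is a minimal sketch using the HBase Java client. The table name "transactions", the column family "cf", and the row/qualifier names are all made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("transactions"))) {
            // Write one cell: row key "txn-0001", column family "cf", qualifier "amount"
            Put put = new Put(Bytes.toBytes("txn-0001"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("amount"), Bytes.toBytes("42.50"));
            table.put(put);
            // Read the same cell back
            Result result = table.get(new Get(Bytes.toBytes("txn-0001")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("amount"));
            System.out.println("amount = " + Bytes.toString(value));
        }
    }
}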

Phoenix: A SQL layer on top of HBase; together they make a proper big-data transaction store.
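
Phoenix is accessed over plain JDBC, so a sketch looks like ordinary SQL code. Assuming a Phoenix-enabled cluster (the ZooKeeper host and the table below are placeholders), UPSERT is the Phoenix idiom for insert-or-update:

import java.sql.*;

public class PhoenixSketch {
    public static void main(String[] args) throws Exception {
        // "zk-host" stands in for your ZooKeeper quorum
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS txn (id VARCHAR PRIMARY KEY, amount DECIMAL)");
            st.execute("UPSERT INTO txn VALUES ('txn-0001', 42.50)");
            conn.commit(); // Phoenix batches mutations until commit
            try (ResultSet rs = st.executeQuery("SELECT id, amount FROM txn")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getBigDecimal(2));
                }
            }
        }
    }
}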

Kafka: A message-queue-like cache, often used as the real-time buffer and staging store in a big-data system.
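
For example, a minimal Java producer (the broker address and topic name are placeholders) that pushes one record into a topic might look like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.*;

public class KafkaSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name "txn-log" is made up; imagine one record per captured transaction
            producer.send(new ProducerRecord<>("txn-log", "txn-0001", "{\"amount\": 42.50}"));
        }
    }
}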

Flume: A framework to load data into Hadoop (HDFS, Kafka, etc.). For example, you can run one Flume agent on each web server to aggregate web logs.
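
A rough sketch of what a single-agent Flume configuration for that web-log case could look like (the agent, source, channel, and sink names are all made up):

# Tail a web log and land it in HDFS (all names and paths are placeholders)
agent1.sources = weblog
agent1.channels = mem
agent1.sinks = hdfs-out

agent1.sources.weblog.type = exec
agent1.sources.weblog.command = tail -F /var/log/httpd/access_log
agent1.sources.weblog.channels = mem

agent1.channels.mem.type = memory

agent1.sinks.hdfs-out.type = hdfs
agent1.sinks.hdfs-out.hdfs.path = /data/weblogs/%Y-%m-%d
agent1.sinks.hdfs-out.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfs-out.channel = mem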

NiFi/Hortonworks DataFlow: Similar to Flume but much more powerful and simply better. It is the tool to gather data across your enterprise, filter and transform it, and push it into Hadoop; a bit like a real-time ETL engine.

Storm: Real-time analytics; it typically consumes from Kafka.
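
As a minimal sketch of the programming model, here a spout feeds a bolt in an in-process test cluster; the TestWordSpout that ships with Storm stands in for a real Kafka spout:

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class StormSketch {
    // A bolt that just prints each word it receives
    public static class PrintBolt extends BaseBasicBolt {
        public void execute(Tuple input, BasicOutputCollector collector) {
            System.out.println("saw: " + input.getString(0));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) { }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout());   // stand-in for a Kafka spout
        builder.setBolt("print", new PrintBolt()).shuffleGrouping("words");
        LocalCluster cluster = new LocalCluster();        // in-process cluster for testing
        cluster.submitTopology("sketch", new Config(), builder.createTopology());
    }
}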

Spark/Spark Streaming: Spark is an analytics platform that also provides a streaming variant, very similar in use cases to Storm but very different in execution (mini-batches, powerful analytics built in, ...).
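
A minimal Java sketch of that mini-batch model, assuming an arbitrary socket source on localhost:9999; the streaming context slices the input into five-second batches and runs a small computation on each:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;

public class SparkStreamingSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("txn-stream").setMaster("local[2]");
        // Each mini-batch covers five seconds of input
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);
        // Count records per mini-batch; a stand-in for real analytics
        lines.count().print();
        jssc.start();
        jssc.awaitTermination();
    }
}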

Hive: The OLAP-like database in Hadoop, intended for large analytical queries. It also provides transactions for streaming inserts, though that feature is still fairly new.
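
Hive is also reached over JDBC, via HiveServer2. A sketch with placeholder host, credentials, and table (the Hive JDBC driver must be on the classpath):

import java.sql.*;

public class HiveSketch {
    public static void main(String[] args) throws Exception {
        // "hive-host" and the credentials are placeholders for your HiveServer2 setup
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://hive-host:10000/default", "user", "");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT txn_date, SUM(amount) FROM transactions GROUP BY txn_date")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
            }
        }
    }
}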

I think you would need to explain a bit more about what your product is actually supposed to do so we can answer more intelligently.


3 REPLIES

Expert Contributor

@Steve Kaufman Could you please elaborate on what exactly you mean by transaction-type data? Do you want to store it in HDFS, Hive, or HBase? What kind of processing do you want to do on it, and how will you be consuming it?


New Contributor

Thank you both for the great info! @Ajay, I reached out to my manager to get some more background info on the objective of this.