Many organizations still ask the question, “Can I run BI (Business Intelligence) workloads on Hadoop?” These workloads range from short, low-latency ad-hoc queries to canned or operational reporting. The primary concerns center on user experience. Will a query take too long to return an answer? How quickly can I change a report and drill down into other dimensional attributes? For almost 20 years, vendors have engineered highly customized solutions to these problems. These solutions often require fine-tuned appliances that tightly integrate hardware and software in order to squeeze out every last drop of performance.

The challenges with these solutions are mainly cost and maintenance: they become cost-prohibitive at scale and require large teams to manage and operate. The ideal solution is one that scales affordably while retaining the performance advantages of an appliance. Your analysts should not notice the difference between the costly appliance and the more affordable alternative. Hadoop is that solution, and this article aims to dispel the myth that BI workloads cannot run on Hadoop by walking through the solution components.

When I talk to customers, the first thing they say when asking about SQL workloads on Hadoop is that Hive is slow. This is largely due to competitor FUD as well as the history of Hive. Hive grew up as a batch SQL engine because the early use cases were only concerned with providing SQL access to MapReduce so that users would not need to know Java. Hive was seen as a way to open up a cluster to a larger user base. It really wasn’t until the Hortonworks Stinger initiative that a serious effort was made to make Hive a faster query tool. The two main focuses of the Stinger effort were the file format (ORC) and moving from MapReduce to Tez. To be clear, no one runs Hive on MapReduce anymore; if you are, you are doing it wrong. Likewise, if you are running Hive queries against CSV files or other raw formats, you are also doing it wrong. Here is a great primer to bookmark and make sure anyone working on Hive in your organization reads.
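
To make the file-format point concrete, here is a minimal sketch of landing raw CSV data in a staging table and converting it to ORC. The table names, columns, and HDFS path are all hypothetical.

```sql
-- Hypothetical example: convert CSV data to ORC before querying it.
-- Table names, columns, and the HDFS path are illustrative only.
CREATE EXTERNAL TABLE sales_staging (
  sale_id   BIGINT,
  region    STRING,
  sale_date DATE,
  amount    DECIMAL(10,2)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/landing/sales';

-- CTAS into ORC; Hive on Tez (and later LLAP) reads this format
-- far more efficiently than delimited text.
CREATE TABLE sales
STORED AS ORC
AS SELECT * FROM sales_staging;
```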

Tez alone did not put that perception to rest. Tez got Hive into the race but not across the finish line. It gave Hive a more interactive querying experience over large sets of data, but it did not deliver good query performance for the typical ad-hoc, drilldown-style querying we see in most BI reporting. Due to the manner in which Tez and YARN spin containers up and down, and because containers are allocated on a per-job basis, there were limiting performance factors as well as concurrency issues.
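
For context, a typical Tez-only session setup looks like the sketch below. The property names are standard Hive/Tez settings, but treat the values as illustrative rather than tuning advice; the per-session container model described above is exactly the cost LLAP was built to remove.

```sql
-- Standard Hive/Tez session settings (values are illustrative):
SET hive.execution.engine=tez;            -- run this session on Tez, not MapReduce
SET tez.am.container.reuse.enabled=true;  -- reuse containers within a session;
                                          -- a new session still pays the
                                          -- container spin-up cost
```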

Hortonworks created LLAP to solve these problems. Many customers are confused by LLAP because they think it is a replacement for Hive. A better way to think about it is to look at Hive as the query tool (the tool allowing you to use the SQL language) and LLAP as the resource manager for your query execution. Business users do not need to change anything to use LLAP: you simply connect to a HiveServer2 instance that has LLAP enabled (over ODBC, JDBC, or the Hive View) and you are on your way.
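
As an illustration, a beeline connection might look like the sketch below. The host name is made up, and 10500 is the port HDP typically assigns to the LLAP-enabled HiveServer2 Interactive instance; verify the port in your environment.

```sql
-- Hypothetical beeline connection to an LLAP-enabled HiveServer2
-- (host is fictitious; 10500 is the usual HiveServer2 Interactive port in HDP):
--   beeline -u "jdbc:hive2://llap-host.example.com:10500/default" -n analyst
--
-- Once connected, you can confirm queries will run in LLAP:
SET hive.llap.execution.mode;   -- expect "all" when LLAP executes the work
```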

The primary design purpose of LLAP was to provide fast performance for ad-hoc querying over moderately large datasets (1 TB to 10 TB) using standard BI tools such as Tableau, Excel, MicroStrategy, or Power BI. In addition to raw performance, because of the manner in which LLAP manages memory and uses Slider to keep its daemons running, LLAP also provides a high level of concurrency without the cost of container startup.
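
The query pattern below is typical of what a BI tool generates for that kind of drilldown; it reuses the hypothetical ORC-backed sales table from the earlier sketch.

```sql
-- A typical BI-style drilldown against the hypothetical sales table:
-- aggregate by region, then slice further by year, with LLAP daemons
-- absorbing each refinement instead of spinning up new containers.
SELECT region,
       year(sale_date) AS sale_year,
       SUM(amount)     AS total_sales
FROM   sales
GROUP  BY region, year(sale_date) WITH ROLLUP;
```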

In summary, you can run ad-hoc queries on HDP today by using Hive with LLAP. Two customer examples:

Geisinger Teradata offload https://www.youtube.com/watch?v=UzgsczrdWbg

Comcast SQL benchmarks https://www.youtube.com/watch?v=dS1Ke-_hJV0

Your company can now begin offloading workloads from your appliances and running those same queries on HDP. In upcoming articles I will address the other components of BI workloads: ANSI compliance and OLAP. For more information on Hive, feel free to check out the following book: https://github.com/Apress/practical-hive
