Member since 07-31-2019 | 346 Posts | 259 Kudos Received | 62 Solutions
03-27-2018 03:56 PM | 8 Kudos
AD (After Druid)

In my opinion, Druid creates a new analytic service that I term the real-time EDW. In traditional EDWs the process of getting from the EDW to an optimized OLAP cube could take hours depending on the size of the EDW. Having a cube build take six or more hours was not unusual for a lot of companies. The process also tended to be brittle and needed to be closely monitored.

In addition to real-time analytics, Druid also facilitates long-term analytics, which effectively provides a Lambda architecture for your data warehouse. Data streams into Druid and is held in memory for a configurable amount of time. While in memory, the data can be queried and visualized. After a period of time the data is passed to long-term (historical) storage as segments on HDFS. These segments can be part of the same visualization as the real-time data.

As mentioned previously, all data in Druid contains a timestamp. The other data elements consist of the same properties as traditional EDWs: dimensions and measures. The timestamp simplifies the aggregation, and Druid is completely denormalized into a single table. Remember, dimensions are descriptions or attributes, and measures are always additive numbers. Since this is always true, it is easy for Druid to infer which elements of the data are dimensional attributes and which are measures. For each timestamp duration Druid can, in real time, aggregate facts along all dimensional attributes. This makes Druid ideal for topN, timeseries, and group-by queries, with group-by being the least performant.

The challenges around Druid and other NoSQL-type technologies like MongoDB are in the visualization layer as well as the architectural and storage complexities. Druid stores JSON data, and JSON data can be difficult to manage and visualize in standard tools such as Tableau or PowerBI. This is where the integration between Druid and Hive becomes most useful. There is a three-part series describing the integration:

Druid and Hive Part 1: https://hortonworks.com/blog/apache-hive-druid-part-1-3/
Druid and Hive Part 2: https://hortonworks.com/blog/sub-second-analytics-hive-druid/
Druid and Hive Part 3: https://hortonworks.com/blog/connect-tableau-druid-hive/

The integration provides a single pane of glass against real-time pre-aggregated cubes, standard Hive tables, and historical OLAP data. More importantly, the data can be accessed through standard ODBC and JDBC visualization tools as well as managed and secured through Ambari, Ranger, and Atlas. Druid provides an out-of-the-box Lambda architecture for time-series data and, coupled with Hive, we now provide the flexibility and ease of access associated with standard RDBMSs.
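For reference, a minimal sketch of what the Hive side of that integration can look like; the table and columns are invented for illustration, and this assumes the Hive-Druid storage handler described in the article series above is available on your cluster:

CREATE TABLE store_sales_druid (
  `__time` TIMESTAMP,     -- Druid requires a timestamp column
  store_id STRING,        -- dimension
  product_id STRING,      -- dimension
  sales_amount DOUBLE     -- additive measure
)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "druid.segment.granularity" = "DAY",
  "druid.query.granularity" = "HOUR"
);

Once a table like this exists, both the real-time and the historical Druid data can be queried with ordinary SQL through HiveServer2, which is what makes it reachable from Tableau or PowerBI over ODBC/JDBC.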
03-27-2018 01:39 PM | 8 Kudos
Druid is an OLAP solution for streaming event data as well as OLAP for long-term storage. All Druid data requires a timestamp, and Druid's storage architecture is based on the timestamp, similar to how HBase stores by key. Following are some key benefits of Druid:

- Real-time EDW on event data (time-series)
- Long-term storage leveraging HDFS
- High availability
- Extremely performant querying over large data sets
- Aggregation and indexing
- High level of data compression
- Hive integration

Druid provides a specific solution for specific problems that could not be handled well by other technologies. With that being said, there are instances where Druid may not be a good fit:

- Data without a timestamp
- No need for real-time streaming
- Normalized (transactional) data (there are no joins in Druid)
- Small data sets
- No need for aggregating measures
- Non-BI queries such as Spark jobs or streaming lookups

Why Druid?

BD (Before Druid)

In traditional EDWs data is broken into dimension tables and fact tables. Dimensions describe an object. For example, a product dimension will have colors, sizes, names, and other descriptors of a product. Dimensions are always descriptors of something, whether it is a product, a store, or something that is part of every EDW: date. In addition to dimensions, EDWs have facts, or measures. Measures are always numbers that can be added. For example, the number 10 can be a measure but an average cannot: you can add 10 to another number, but adding two averages does not make numerical sense.

The reason for dimensions and facts is two-fold. First, it was a means to denormalize the data and reduce joins; most EDWs are architected so that you will not need more than two joins to get any answer. Second, dimensions and facts map easily to business questions (see Agile Data Warehouse Design in the reference section). For example, take the following question: "How many of product X were purchased last month in store Y?" We can dissect this sentence in the following way: product, month, and store are all dimensions, while the question "how many" is the fact or measure. For that single question you can begin building your star schema (a worked query for this question appears at the end of this post).

Figure 1: Star Schema

The fact table will have a single row for each unique product sold in a particular store for a particular time frame. The difference between an EDW and OLAP is that an OLAP system will pre-aggregate this answer. Prior to the query you run a process that anticipates this question and adds up all the sales totals for all the products for all time ranges. This is fundamentally why, in traditional EDW development, all possible questions needed to be fleshed out prior to building the schemas. The questions being asked define how the model is designed. This makes traditional EDW development extremely difficult, error-prone, and expensive. Interviewing LOBs to find out what questions they might ask the system or, more likely, looking at existing reports and trying to reproduce the data in an EDW design was only the first step. Once the EDW was built you still had to work on what is called the "semantic layer". This is the point where you instruct the OLAP tool how to aggregate the data. Tools like SQL Server Analysis Services (SSAS) are complicated and require a deep understanding of OLAP concepts. They are based on the Kimball methodology and therefore, to some extent, require the schema to look as much like a star schema as possible.

Figure 2: SSAS

In these tools the first thing you needed to do was define hierarchies. The easiest hierarchy to define is date. Date always follows the pattern: year, month, day, hour, second. Other hierarchies include geography: country, state, county, city, zip code. Hierarchies are important in OLAP because they describe how the user will drill through the data and how the data will be aggregated at each level of the hierarchy. The semantic layer is also where you define what the analyst will actually see in their visualization tools. For example, exposing an EDW surrogate key would only confuse the analyst. In the Hadoop space the semantic layer is handled by vendors and software like JethroData, AtScale, Kyvos, and Kylin (open source).
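To make the "how many of product X were purchased last month in store Y" question concrete, here is a minimal, hypothetical HiveQL sketch against a star schema; all table and column names are invented for illustration:

-- One fact table plus product, store, and date dimensions
SELECT d.year,
       d.month,
       SUM(f.quantity_sold) AS units_sold              -- additive measure
FROM   sales_fact f
JOIN   product_dim p ON f.product_key = p.product_key  -- what was sold
JOIN   store_dim   s ON f.store_key   = s.store_key    -- where it was sold
JOIN   date_dim    d ON f.date_key    = d.date_key     -- when it was sold
WHERE  p.product_name = 'Product X'
  AND  s.store_name   = 'Store Y'
  AND  d.year  = 2018
  AND  d.month = 2                                      -- "last month" pinned to a concrete value
GROUP  BY d.year, d.month;

An OLAP layer such as SSAS (or Druid, in the follow-up post) pre-aggregates exactly this kind of SUM along the product, store, and date hierarchies so the answer does not have to be computed at query time.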
08-14-2017 02:43 PM | 5 Kudos
Many organizations still ask the question, "Can I run BI (Business Intelligence) workloads on Hadoop?" These workloads range from short, low-latency ad-hoc queries to canned or operational reporting. The primary concerns center around user experience. Will a query take too long to return an answer? How quickly can I change my mind about a report and drill down into other dimensional attributes? For almost 20 years vendors have engineered highly customized solutions to these problems. Many of these solutions require fine-tuned appliances that tightly integrate hardware and software in order to squeeze out every last drop of performance. The challenges with these solutions are mainly cost and maintenance: they become cost-prohibitive at scale and require large teams to manage and operate. The ideal solution is one that affordably scales but retains the same performance advantages as your appliance. Your analysts should not see the difference between the costly appliance and the more affordable solution. Hadoop is that solution, and this article aims to dispel the myth that BI workloads cannot run on Hadoop by pointing to the solution components.

When I talk to customers, the first thing they say when asking about SQL workloads on Hadoop is that Hive is slow. This has largely to do with competitors' FUD as well as the history of Hive. Hive grew up as a batch SQL engine because the early use cases were only concerned with providing SQL access to MapReduce so that users would not need to know Java. Hive was seen as a way to increase the use of a cluster over a larger user base. It really wasn't until the Hortonworks Stinger initiative that a serious effort was made to make Hive into a faster query tool. The two main focuses of the Stinger effort were the file format (ORC) and moving away from MapReduce to Tez. To be clear, no one runs Hive on MapReduce anymore; if you are, you are doing it wrong. Likewise, if you are running Hive queries against CSV files or other non-optimized formats, you are also doing it wrong. Here is a great primer to bookmark and make sure anyone working on Hive in your organization reads.
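As a small illustration of the file-format point, the sketch below moves a hypothetical CSV-backed table into ORC; the table, columns, and properties are invented for illustration and your settings will vary:

-- Raw table over delimited text files
CREATE EXTERNAL TABLE web_events_csv (
  event_time  STRING,
  user_id     STRING,
  page        STRING,
  duration_ms BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/landing/web_events';

-- Rewrite it as ORC so Hive can use columnar reads, predicate pushdown, and compression
CREATE TABLE web_events
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'ZLIB')
AS
SELECT * FROM web_events_csv;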
Tez certainly did not alleviate the confusion. Tez got Hive into the race but not across the finish line. Tez provided Hive with a more interactive querying experience over large sets of data, but what it did not provide is good query performance for the typical ad-hoc, drill-down querying we see in most BI reporting. Due to the manner in which Tez and YARN spin containers up and down, and how containers are allocated on a per-job basis, there were limiting performance factors as well as concurrency issues. Hortonworks created LLAP to solve these problems. Many customers are confused by LLAP because they think it is a replacement for Hive. A better way to think about it is to look at Hive as the query tool (the tool allowing you to use the SQL language) and LLAP as the resource manager for your query execution.
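If you want to confirm that a session is actually running through LLAP, a quick sanity check looks roughly like the following; the property names are from HDP-era Hive configuration and your configured values may differ:

-- Show the engine and LLAP execution mode for the current session
SET hive.execution.engine;        -- expected: tez (LLAP daemons execute the Tez work)
SET hive.llap.execution.mode;     -- typically 'all' or 'only' when LLAP handles queries

-- An ordinary query (here against the hypothetical web_events table from the earlier
-- example) then runs in the long-lived LLAP daemons with no SQL changes
SELECT page, COUNT(*) AS views
FROM   web_events
GROUP  BY page
ORDER  BY views DESC
LIMIT  10;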
For the business user to use LLAP, they do not need to change anything. You simply connect to the HiveServer2 instance (you can use ODBC, JDBC, or the Hive View) that has LLAP enabled and you are on your way. The primary design purpose for LLAP was to provide fast performance for ad-hoc querying over semi-large datasets (1 TB-10 TB) using standard BI tools such as Tableau, Excel, MicroStrategy, or PowerBI. In addition to performance, because of the manner in which LLAP manages memory and utilizes Slider, LLAP also provides a high level of concurrency without the cost of container startups. In summary, you can run ad-hoc queries today on HDP by using Hive with LLAP:

Geisinger Teradata offload: https://www.youtube.com/watch?v=UzgsczrdWbg
Comcast SQL benchmarks: https://www.youtube.com/watch?v=dS1Ke-_hJV0

Your company can now begin offloading workloads from your appliances and running those same queries on HDP. In the next articles I will address the other components of BI workloads: ANSI compliance and OLAP. For more information around Hive, feel free to check out the following book: https://github.com/Apress/practical-hive
05-11-2017 03:02 PM
Hi Scott,

Below is the error I am getting when I try to make the ODBC data connection:

"UNABLE TO CONNECT" Encountered an error while trying to connect to ODBC. Details: "ODBC: ERROR [HY000] [Hortonworks][Hardy] (34) Error from server: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Internal credentials cache error).
ERROR [HY000] [Hortonworks][Hardy] (34) Error from server: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Internal credentials cache error)."

I am able to successfully test the connection from the Hortonworks Hive ODBC Driver DSN Setup.

Thanks for any help :)
02-15-2017 11:11 AM
Thank you @Ali Bajwa for the good tutorial. I am trying this example with one difference: my NiFi is local and I am trying to put the tweets into a remote Solr. Solr is in a VM that contains the Hortonworks sandbox. Unfortunately I am getting this error on the PutSolrContentStream processor:

PutSolrContentStream[id=f6327477-fb7d-4af0-ec32-afcdb184e545] Failed to send StandardFlowFileRecord[uuid=9bc39142-c02c-4fa2-a911-9a9572e885d0,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1487148463852-14, container=default, section=14], offset=696096, length=2589],offset=0,name=103056151325300.json,size=2589] to Solr due to org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://172.17.0.2:8983/solr/tweets_shard1_replica1; routing to connection_failure: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://172.17.0.2:8983/solr/tweets_shard1_replica1;

Could you help me?

Thanks,
Shanghoosh