Member since: 07-31-2019
Posts: 346
Kudos Received: 259
Solutions: 62
My Accepted Solutions
Title | Views | Posted
---|---|---
| | 2775 | 08-22-2018 06:02 PM |
| | 1629 | 03-26-2018 11:48 AM |
| | 3982 | 03-15-2018 01:25 PM |
| | 4974 | 03-01-2018 08:13 PM |
| | 1383 | 02-20-2018 01:05 PM |
03-27-2018
03:56 PM
8 Kudos
AD (After Druid)

In my opinion, Druid creates a new analytic service that I term the real-time EDW. In traditional EDWs, the process of getting from the EDW to an optimized OLAP cube could take hours depending on the size of the EDW. Having a cube build take six or more hours was not unusual for a lot of companies. The process also tended to be brittle and needed to be closely monitored.

In addition to real-time analytics, Druid also facilitates long-term analytics, which effectively provides a Lambda architecture for your data warehouse. Data streams into Druid and is held in memory for a configurable amount of time. While in memory, the data can be queried and visualized. After a period of time the data is passed to long-term (historical) storage as segments on HDFS. These segments can also be part of the same visualization as the real-time data.

As mentioned previously, all data in Druid contains a timestamp. The other data elements consist of the same properties as traditional EDWs: dimensions and measures. The timestamp simplifies the aggregation, and Druid is completely denormalized into a single table. Remember, dimensions are descriptions or attributes and measures are always additive numbers. Since this is always true, it is easy for Druid to infer from the data which elements are dimensional attributes and which are measures. For each timestamp interval Druid can, in real time, aggregate facts along all dimensional attributes. This makes Druid ideal for topN, timeseries, and group-by queries, with group-by being the least performant.

The challenges around Druid and other NoSQL-type technologies like MongoDB are in the visualization layer as well as the architectural and storage complexities. Druid stores JSON data, and JSON data can be difficult to manage and visualize in standard tools such as Tableau or Power BI. This is where the integration between Druid and Hive becomes most useful. There is a three-part series describing the integration:

Druid and Hive Part 1: https://hortonworks.com/blog/apache-hive-druid-part-1-3/
Druid and Hive Part 2: https://hortonworks.com/blog/sub-second-analytics-hive-druid/
Druid and Hive Part 3: https://hortonworks.com/blog/connect-tableau-druid-hive/

The integration provides a single pane of glass over real-time pre-aggregated cubes, standard Hive tables, and historical OLAP data. More importantly, the data can be accessed through standard ODBC and JDBC visualization tools as well as managed and secured through Ambari, Ranger, and Atlas. Druid provides an out-of-the-box Lambda architecture for time-series data and, coupled with Hive, we now provide the flexibility and ease of access associated with standard RDBMSs.
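To make the query side concrete, here is a minimal sketch of a topN query posted to a Druid broker over its native HTTP API from Python. The broker address, datasource, dimension, and metric names are hypothetical placeholders, not values from this post.

```python
# Minimal sketch: send a native topN query to a Druid broker.
# Broker host/port, datasource, columns, and interval are hypothetical.
import requests

BROKER_URL = "http://druid-broker.example.com:8082/druid/v2/"

top_stores_by_sales = {
    "queryType": "topN",
    "dataSource": "sales",                   # hypothetical datasource
    "intervals": ["2018-03-01/2018-04-01"],  # time range to aggregate over
    "granularity": "all",
    "dimension": "store",                    # dimensional attribute to rank
    "metric": "total_sales",                 # aggregate used for ranking
    "threshold": 10,
    "aggregations": [
        {"type": "doubleSum", "name": "total_sales", "fieldName": "sales"}
    ],
}

resp = requests.post(BROKER_URL, json=top_stores_by_sales, timeout=30)
resp.raise_for_status()
for row in resp.json():
    print(row["timestamp"], row["result"])
```

With the Hive integration described in the linked series, the same data can instead be queried with ordinary SQL over JDBC/ODBC, which is what makes it consumable from tools like Tableau or Power BI.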
03-27-2018
01:39 PM
8 Kudos
Druid is an OLAP solution for streaming event data as well as OLAP for long-term storage. All Druid data requires a timestamp. Druid's storage architecture is based on the timestamp, similar to how HBase stores data by key. Following are some key benefits of Druid:

- Real-time EDW on event data (time series)
- Long-term storage leveraging HDFS
- High availability
- Extremely performant querying over large data sets
- Aggregation and indexing
- High level of data compression
- Hive integration

Druid provides a specific solution for specific problems that could not be handled by any other technology. With that being said, there are instances where Druid may not be a good fit:

- Data without a timestamp
- No need for real-time streaming
- Normalized (transactional) data (there are no joins in Druid)
- Small data sets
- No need for aggregating measures
- Non-BI queries, such as Spark workloads or streaming lookups

Why Druid? BD (Before Druid)

In traditional EDWs data is broken into dimension tables and fact tables. Dimensions describe an object. For example, a product dimension will have colors, sizes, names, and other descriptors of the product. Dimensions are always descriptors of something, whether it is a product, a store, or something that is part of every EDW: date. In addition to dimensions, EDWs have facts, or measures. Measures are always numbers that can be added. For example, the number 10 can be a measure but an average cannot. The reason is that you can add 10 to another number, but adding two averages does not make numerical sense.

The reason for dimensions and facts is twofold. First, it is a means to denormalize the data and reduce joins; most EDWs are architected so that you will not need more than two joins to get any answer. Second, dimensions and facts map easily to business questions (see Agile Data Warehouse Design in the reference section). For example, take the following question: "How many of product x were purchased last month in store y?" We can dissect this sentence in the following way: product, month, and store are all dimensions, while "how many" is the fact or measure. From that single question you can begin building your star schema:

Figure 1: Star Schema

The fact table will have a single row for each unique product sold in a particular store for a particular time frame. The difference between an EDW and OLAP is that an OLAP system will pre-aggregate this answer. Prior to the query you run a process that anticipates this question and adds up all the sales totals for all the products for all time ranges. This is fundamentally why, in traditional EDW development, all possible questions needed to be fleshed out prior to building the schemas. The questions being asked define how the model is designed. This makes traditional EDW development extremely difficult, prone to errors, and expensive. Interviewing LOBs to find out what questions they may ask the system or, more likely, looking at existing reports and trying to reproduce the data in an EDW design was only the first step.

Once the EDW was built you still had to work on what is called the "semantic layer". This is the point where you instruct the OLAP tool how to aggregate the data. Tools like SQL Server Analysis Services (SSAS) are complicated and require a deep understanding of OLAP concepts. They are based on the Kimball methodology and therefore, to some extent, require the schema to look as much like a star schema as possible.

Figure 2: SSAS

In these tools the first thing you need to do is define hierarchies. The easiest hierarchy to define is date. Date always follows the pattern: year, month, day, hour, minute, second. Other hierarchies include geography: country, state, county, city, zip code. Hierarchies are important in OLAP because they describe how the user will drill through the data and how the data will be aggregated at each level of the hierarchy. The semantic layer is also where you define what the analyst will actually see in their visualization tools. For example, exposing an EDW surrogate key would only confuse the analyst. In the Hadoop space the semantic layer is handled by vendors and software like Jethrodata, AtScale, Kyvos, and Kylin (open source).
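To connect this back to Druid's model, below is a rough sketch of how the product/store/sales example might be described in the schema portion of a Druid ingestion spec, written as a Python dict. The datasource, column names, and granularities are hypothetical, and the exact spec fields vary between Druid versions.

```python
# Sketch of the schema section of a Druid ingestion spec for the example above:
# a required timestamp, dimensional attributes, and additive measures.
# Datasource, column names, and granularities are hypothetical placeholders.
import json

data_schema = {
    "dataSource": "sales",
    "parser": {
        "type": "string",
        "parseSpec": {
            "format": "json",
            "timestampSpec": {"column": "sold_at", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["product", "store"]},
        },
    },
    # Measures (metrics) must be additive so Druid can roll them up per interval.
    "metricsSpec": [
        {"type": "count", "name": "events"},
        {"type": "longSum", "name": "units_sold", "fieldName": "quantity"},
        {"type": "doubleSum", "name": "total_sales", "fieldName": "sale_amount"},
    ],
    "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",   # how segments are chunked in storage
        "queryGranularity": "HOUR",    # finest rollup available at query time
    },
}

print(json.dumps(data_schema, indent=2))
```

The contrast with a hand-built cube is that the "how many" questions do not have to be enumerated up front; any additive metric can be aggregated along any combination of the declared dimensions at query time.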
03-26-2018
11:48 AM
1 Kudo
Hi @Daniela Mohan. HDP 2.6 ships with both Hive 1.2.1 and 2.1.0. Due to packaging and testing cycles, HDP will be slightly behind the official Apache release: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_release-notes/content/comp_versions.html. This is in order to provide stability and interoperability with other Apache software. In 2.6 you will need to enable Hive Interactive (LLAP) to use 2.1.0.
03-15-2018
01:27 PM
1 Kudo
Additionally, the Sqoop/merge process is easily automated using Workflow Manager.
03-15-2018
01:25 PM
1 Kudo
Hi @Timothy Spann, the recommended approach is Attunity -> Kafka -> NiFi -> Hive -> Merge. If you want 100% open source, then sqoop the data to a staging area and run a merge to get the deltas.
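For the merge step, here is a rough sketch of a Hive MERGE that folds Sqoop-landed deltas from a staging table into the target table, submitted from Python through beeline. The table names, columns, and JDBC URL are hypothetical, and MERGE requires the target to be an ACID (transactional) table.

```python
# Sketch: apply deltas landed in a staging table to the target ACID table.
# Table/column names and the JDBC URL are hypothetical placeholders.
import subprocess

JDBC_URL = "jdbc:hive2://hiveserver2.example.com:10000/default"

merge_deltas = """
MERGE INTO customers AS t
USING customers_staging AS s
ON t.customer_id = s.customer_id
WHEN MATCHED THEN
  UPDATE SET name = s.name, email = s.email, updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT VALUES (s.customer_id, s.name, s.email, s.updated_at)
"""

# beeline -u <url> -e <sql> runs a single statement and exits with its status.
subprocess.run(["beeline", "-u", JDBC_URL, "-e", merge_deltas], check=True)
```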
03-01-2018
08:13 PM
Hi @Data Stocker, the short answer is no. The longer answer is that Hive currently has ACID (transactional) capabilities but no equivalent of BEGIN...END TRANSACTION blocks; each statement is its own transaction. There is work being done to provide this in a future release.
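As a small sketch of what that means in practice (assuming the pyhive client and a hypothetical ACID table named accounts), each statement below commits on its own; there is no way to group the two into one atomic unit:

```python
# Each Hive DML statement is its own transaction; there is no BEGIN/COMMIT
# block that could make these two updates atomic together.
# Host, database, table, and columns are hypothetical placeholders.
from pyhive import hive

conn = hive.Connection(host="hiveserver2.example.com", port=10000, database="default")
cursor = conn.cursor()

# Commits on its own as soon as it completes.
cursor.execute("UPDATE accounts SET balance = balance - 100 WHERE account_id = 1")

# If this second statement fails, the first update is already committed;
# there is no rollback that spans both statements.
cursor.execute("UPDATE accounts SET balance = balance + 100 WHERE account_id = 2")

cursor.close()
conn.close()
```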
02-20-2018
01:05 PM
1 Kudo
Hi @Waqar Sher. The sandbox is a fully functional representation of a pseudo-cluster environment, but it has some additional features built in that allow it to work specifically in a single-VM environment. There is no method or reason to deploy the sandbox VM to a cluster. You will want to begin with an Ambari install. The documentation can be found here: https://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.6.1.3/index.html Hope this helps.
02-14-2018
04:13 PM
@kishore sanchina LLAP is a long-running service, so it will preempt memory for the llap queue. The best practice is to dedicate nodes to LLAP workloads. You can utilize the LLAPContext in Spark, which will stream data from HDFS to the Spark executors; this is more of a Hive process than a Spark one. It can incorporate some masking and filtering security features, but you may see a 3x-4x performance degradation.
02-07-2018
12:59 PM
1 Kudo
@Jony Singh HDP on Windows is no longer supported or being developed. As of HDP 2.6.4 you can review the supported OS versions here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_support-matrices/content/ch01.html
02-06-2018
06:26 PM
Hi @PJ, the honest truth is that there is no good reason not to use the ORC format. You can use another format like Parquet, but it won't provide ACID, the LLAP cache, or the same level of performance. I would say the decision is similar to not using indexes in a relational system or not running statistics. ORC is simply best practice for high-performance data warehousing in Hive. Keep in mind that LLAP will allow you to cache raw text files. This may be an option if you have a strict SLA preventing you from incurring the delay of converting the text files to ORC.
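For the conversion itself, here is a minimal sketch of a one-time CTAS that rewrites a text-backed table as ORC, submitted from Python through beeline. The table names, compression setting, and JDBC URL are hypothetical.

```python
# Sketch: one-time conversion of a text-backed table into an ORC table via CTAS.
# Table names, the compression codec, and the JDBC URL are hypothetical.
import subprocess

JDBC_URL = "jdbc:hive2://hiveserver2.example.com:10000/default"

convert_to_orc = """
CREATE TABLE web_logs_orc
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'ZLIB')
AS SELECT * FROM web_logs_text
"""

subprocess.run(["beeline", "-u", JDBC_URL, "-e", convert_to_orc], check=True)
```

If the conversion delay itself is the blocker, the alternative mentioned above is to keep querying the raw text table and let LLAP cache it.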