Member since: 09-18-2015 · Posts: 3274 · Kudos Received: 1159 · Solutions: 426
07-04-2016
01:45 PM
2 Kudos
I found this article: "For mobile analytics, Yahoo is in the process of replacing HBase with Druid."
History: 24th Oct, 2012 (when Druid was open sourced).
To test out the setup, I have deployed Druid in two clusters: the first deployment is in my multi-node cluster, and the second deployment uses this repo.
Details are on this blog
Demo - PS: It's a 10-minute demo.
We are loading pageviews and then executing queries. See the links at the bottom to download the git repo and code.
Gif: Download
I use this to control gif playback.
Links:
- Page view queries and data
- Spin up the environment on your Mac or Windows (not sure about Windows)
- Git link. This will spin up Druid, ZK, Hadoop and Postgres
- Gist
Happy Hadooping!!!!
07-03-2016
12:23 AM
6 Kudos
"Druid is a fast column-oriented distributed data store." Druid is an open source data store designed for OLAP queries on event data.
Architecture
- Historical nodes are the workhorses that handle storage and querying of "historical" (non-realtime) data. Historical nodes download segments from deep storage, respond to queries from broker nodes about these segments, and return results to the broker nodes. They announce themselves and the segments they are serving in Zookeeper, and also use Zookeeper to watch for signals to load or drop new segments.
- Coordinator nodes monitor the grouping of historical nodes to ensure that data is available, replicated and in a generally "optimal" configuration. They do this by reading segment metadata from the metadata storage to determine which segments should be loaded in the cluster, using Zookeeper to determine which Historical nodes exist, and creating Zookeeper entries to tell Historical nodes to load or drop segments.
- Broker nodes receive queries from external clients and forward those queries to Realtime and Historical nodes. When Broker nodes receive results, they merge them and return them to the caller. To discover the topology, Broker nodes use Zookeeper to determine which Realtime and Historical nodes exist.
- Indexing Service nodes form a cluster of workers that load batch and real-time data into the system, and also allow alterations to the data stored in the system.
- Realtime nodes also load real-time data into the system. They are simpler to set up than the indexing service, at the cost of several limitations for production use.
Segments are stored in deep storage; you can use S3, HDFS or a local mount. Queries go from the client to a broker, and from there to Realtime or Historical nodes.
LAMBDA Architecture
Dependencies (ZK, Storage and Metadata):
- A running ZooKeeper cluster for cluster service discovery and maintenance of current data topology
- A metadata storage instance for maintenance of metadata about the data segments that should be served by the system
- A "deep storage" LOB store/file system to hold the stored segments
Source
Part 2 - Demo: Druid and HDFS as deep storage.
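To give a flavor of the query path described above, here is a minimal Druid timeseries query that a client could POST to the broker (datasource name, metric and interval are made-up examples, not from the demo):

```json
{
  "queryType": "timeseries",
  "dataSource": "pageviews",
  "granularity": "hour",
  "aggregations": [
    { "type": "longSum", "name": "views", "fieldName": "count" }
  ],
  "intervals": ["2016-07-01/2016-07-02"]
}
```

The broker fans this out to the Realtime and Historical nodes that serve the matching segments, merges their partial results, and returns a single response to the caller.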
05-20-2016
03:55 AM
2 Kudos
Hive: Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis.
HBase: Apache HBase™ is the Hadoop database, a distributed, scalable, big data store
Hawq: http://hawq.incubator.apache.org/
PXF: PXF is an extensible framework that allows HAWQ to query external system data
Let's learn about query federation.
This topic describes how to access Hive data using PXF. Link
Previously, in order to query Hive tables using HAWQ and PXF, you needed to create an external table in PXF that described the target table's Hive metadata. Since HAWQ is now integrated with HCatalog, HAWQ can use metadata stored in HCatalog instead of external tables created for PXF. HCatalog is built on top of the Hive metastore and incorporates Hive's DDL. This provides several advantages:
- You do not need to know the table schema of your Hive tables
- You do not need to manually enter information about Hive table location or format
- If Hive table metadata changes, HCatalog provides updated metadata. This is in contrast to the use of static external PXF tables to define Hive table metadata for HAWQ.
HAWQ retrieves table metadata from HCatalog using PXF. HAWQ creates in-memory catalog tables from the retrieved metadata. If a table is referenced multiple times in a transaction, HAWQ uses its in-memory metadata to reduce external calls to HCatalog. PXF queries Hive using table metadata that is stored in the HAWQ in-memory catalog tables. Table metadata is dropped at the end of the transaction.
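In practice this means you can query a Hive table from HAWQ with no CREATE EXTERNAL TABLE step at all, using the hcatalog schema prefix. A sketch (the database and table names here are hypothetical):

```sql
-- Query a Hive table through HAWQ's HCatalog integration.
-- Pattern: hcatalog.<hive-database>.<hive-table>
SELECT url, SUM(views) AS total_views
FROM hcatalog.default.pageviews
GROUP BY url;
```

Behind the scenes, HAWQ pulls the table's metadata from HCatalog via PXF into its in-memory catalog for the duration of the transaction, as described above.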
Demo
Tools used
Hive, Hawq, Zeppelin
HBase tables: Follow this to create the HBase tables
perl create_hbase_tables.pl
Create a table in HAWQ to access the HBase table. Note: the port is 51200, not 50070.
Links:
- Gist
- PXF docs - Must see this
- Zeppelin interpreter settings
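For reference, a HAWQ external table over an HBase table via PXF looks roughly like this (table and column names are illustrative, not the demo's actual ones; note the PXF port 51200):

```sql
-- Hypothetical HAWQ external table mapped onto an HBase table through PXF.
CREATE EXTERNAL TABLE hbase_sales (
    recordkey   TEXT,     -- the HBase row key
    "cf1:price" FLOAT8    -- an HBase column, as "column-family:qualifier"
)
LOCATION ('pxf://namenode:51200/sales?PROFILE=HBase')
FORMAT 'CUSTOM' (formatter='pxfwritable_import');
```

Once created, the table can be queried with ordinary SQL from HAWQ or through the Zeppelin JDBC interpreter.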
05-20-2016
03:52 AM
@Ali Bajwa Just created this https://www.linkedin.com/pulse/hawqhdb-hadoop-hive-hbase-neeraj-sabharwal
05-18-2016
12:07 PM
Chronos is a replacement for cron.
A fault tolerant job scheduler for Mesos which handles dependencies and ISO8601 based schedules
Marathon is a framework for Mesos that is designed to launch long-running applications and, in Mesosphere, serves as a replacement for a traditional init system.
In Mesosphere, Chronos complements Marathon, as it provides another way to run applications: according to a schedule, or on other conditions such as the completion of another job. It is also capable of scheduling jobs on multiple Mesos slave nodes, and provides statistics about job failures and successes. Source
Install: https://mesos.github.io/chronos/docs/ and gist
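To make the "ISO8601 based schedules" concrete, here is a minimal Chronos job definition (the job name, command and owner are made up) of the kind you would POST to the Chronos API:

```json
{
  "name": "hourly-cleanup",
  "command": "echo 'cleanup run'",
  "schedule": "R/2016-05-18T12:00:00Z/PT1H",
  "epsilon": "PT15M",
  "owner": "ops@example.com"
}
```

The schedule field is an ISO8601 repeating interval: R means repeat forever, followed by the start time and the period (PT1H = every hour); epsilon is how late the job may start and still run.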
05-17-2016
11:31 AM
1 Kudo
Original Post
DC/OS - a new kind of operating system that spans all of the servers in a physical or cloud-based datacenter, and runs on top of any Linux distribution.
Source
Projects
More details https://docs.mesosphere.com/overview/components/
Let's cover Mesos in this post
Frameworks (Application running on mesos) http://mesos.apache.org/documentation/latest/frameworks/
I used http://mesos.apache.org/gettingstarted/ to install Mesos on my local machine. I am launching the C++, Java and Python frameworks in this demo.
Slide Share http://www.slideshare.net/tomasbart/introduction-to-apache-mesos
04-15-2016
09:09 AM
4 Kudos
Original Post
Calcite is a highly customizable engine for parsing and planning queries on data in a wide variety of formats. It allows database-like access, and in particular a SQL interface and advanced query optimization, for data not residing in a traditional database.
Apache Calcite is a dynamic data management framework.
It contains many of the pieces that comprise a typical database management system, but omits some key functions: storage of data, algorithms to process data, and a repository for storing metadata.
Calcite intentionally stays out of the business of storing and processing data. As we shall see, this makes it an excellent choice for mediating between applications and one or more data storage locations and data processing engines. It is also a perfect foundation for building a database: just add data. Source
Tutorial https://calcite.apache.org/docs/tutorial.html
Demo:
Read the DEPT and EMPS tables.
Create a test table based on the existing CSV example. Read the tutorial link to understand model.json and the schema.
In the demo, you can see that I run explain plan on the queries, and then use smart.json to change the plan.
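Roughly, the model.json from the CSV tutorial points Calcite at a directory of CSV files and exposes it as a schema (this follows the tutorial's example; the directory name is the tutorial's default):

```json
{
  "version": "1.0",
  "defaultSchema": "SALES",
  "schemas": [
    {
      "name": "SALES",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.csv.CsvSchemaFactory",
      "operand": {
        "directory": "sales"
      }
    }
  ]
}
```

Each CSV file in the directory becomes a table in the SALES schema, and smart.json is the variant of this model that additionally registers planner rules for query optimization.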
Watch the demo and then read the following links
model.json https://calcite.apache.org/docs/tutorial.html#schema-discovery
Query tuning https://calcite.apache.org/docs/tutorial.html#optimizing-queries-using-planner-rules
Calcite https://calcite.apache.org/
This page describes the SQL dialect recognized by Calcite’s default SQL parser.
Adapters
JDBC driver
Calcite is embedded in Drill, Hive and Kylin.
02-15-2016
08:19 PM
1 Kudo
Use case: User wants to map the AD group hdpadmin using the Yarn Queue Manager view.
Environment: HDP 2.3.4 and Ambari 2.2.0
Original request was made by one of the HCC users. Thread link
Question: How to assign a capacity scheduler queue based on an AD group.
Solution/Demo:
[root@phdns02 scripts]# id neeraj
uid=29800018(neeraj) gid=29800018(neeraj) groups=29800018(neeraj),29800017(hdpadmin)
[root@phdns02 scripts]#
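For reference, the mapping configured through the Yarn Queue Manager view boils down to a capacity-scheduler property like the following (the queue name here is hypothetical; g: maps a group, u: a user):

```properties
# Route jobs from members of the AD group hdpadmin to a dedicated queue
yarn.scheduler.capacity.queue-mappings=g:hdpadmin:adminqueue
# Do not let user-specified queues override the mapping
yarn.scheduler.capacity.queue-mappings-override.enable=false
```

Since the id output above shows neeraj is a member of hdpadmin, his jobs land in the mapped queue.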
03-31-2017
06:28 AM
Hi, does it mean that the Ranger Kafka plugin cannot define policies for individual users, only for hosts?
02-15-2016
03:19 AM
1 Kudo
Bug: https://issues.apache.org/jira/browse/AMBARI-14466

[root@phdns01 ~]# ambari-server start
Using python /usr/bin/python2
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
WARNING: setpgid(31734, 0) failed - [Errno 13] Permission denied
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@phdns01 ~]# wget https://issues.apache.org/jira/secure/attachment/12779059/AMBARI-14466.patch
[root@phdns01 ~]# patch -p1 < AMBARI-14466.patch
File to patch: /usr/sbin/ambari_server_main.py

Issue resolved.