When to go with ETL on Hive using Tez vs. when to go with Spark ETL?
Labels:
- Apache Hive
- Apache Spark
- Apache Tez
Created 06-20-2016 07:54 AM
Created 06-20-2016 04:00 PM
@revan
Apache Hive Strengths:
Apache Hive facilitates querying and managing large datasets residing in distributed storage. Built on top of Apache Hadoop, it provides:
- Tools to enable easy data extract/transform/load (ETL); see the HiveQL sketch after this list
- A mechanism to impose structure on a variety of data formats
- Access to files stored either directly in Apache HDFS or in other data storage systems such as Apache HBase
- Query execution via MapReduce
- Hive defines a simple SQL-like query language, called QL, that enables users familiar with SQL to query the data. At the same time, the language allows programmers familiar with the MapReduce framework to plug in custom mappers and reducers for more sophisticated analysis than the built-in capabilities support.
- QL can also be extended with custom scalar functions (UDFs), aggregations (UDAFs), and table functions (UDTFs).
- Indexing for acceleration, with index types including compaction and bitmap indexes as of Hive 0.10.
- Different storage types such as plain text, RCFile, HBase, ORC, and others.
- Metadata storage in an RDBMS, significantly reducing the time needed to perform semantic checks during query execution.
- Operating on compressed data stored in the Hadoop ecosystem, using algorithms including DEFLATE, BWT, Snappy, and others.
- Built-in user-defined functions (UDFs) to manipulate dates, strings, and other data types, along with data-mining tools; Hive supports extending the UDF set to handle use cases not covered by the built-in functions.
- SQL-like queries (HiveQL), which are implicitly converted into MapReduce, Tez, or Spark jobs.
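As a concrete illustration of the ETL workflow described above, here is a minimal HiveQL sketch: it loads cleansed rows from a staging table into an ORC table, running on Tez. The table and column names (raw_events, events_orc, and so on) are illustrative assumptions, not from the thread.

```sql
-- Run this session on Tez rather than MapReduce.
SET hive.execution.engine=tez;
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Target table: ORC storage with Snappy compression.
CREATE TABLE IF NOT EXISTS events_orc (
  event_id   BIGINT,
  event_type STRING,
  event_ts   TIMESTAMP
)
PARTITIONED BY (event_date STRING)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'SNAPPY');

-- ETL step: cleanse and load from the raw staging table,
-- using built-in UDFs for string and date manipulation.
INSERT OVERWRITE TABLE events_orc PARTITION (event_date)
SELECT
  CAST(id AS BIGINT),
  LOWER(TRIM(event_type)),
  CAST(event_ts AS TIMESTAMP),
  TO_DATE(event_ts) AS event_date
FROM raw_events
WHERE id IS NOT NULL;
```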
Apache Spark Strengths:
Spark SQL has multiple interesting features (a short spark-sql sketch follows this list):
- It supports multiple file formats such as Parquet, Avro, Text, JSON, and ORC.
- It supports data stored in HDFS, Apache HBase, Cassandra, and Amazon S3.
- It supports classical Hadoop codecs such as Snappy, LZO, and gzip.
- It provides security through authentication via a shared secret (spark.authenticate=true on YARN, or spark.authenticate.secret set on all nodes when not on YARN).
- It provides encryption: Spark supports SSL for the Akka and HTTP protocols.
- It supports UDFs.
- It supports concurrent queries and manages the allocation of memory to the jobs (it is possible to specify the storage level of an RDD: in-memory only, disk only, or memory and disk).
- It supports caching data in memory in a SchemaRDD columnar format (cacheTable("tableName")) exposing ByteBuffers; it can also use memory-only caching exposing user objects.
- It supports nested structures.
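A short sketch of several of these features from the spark-sql shell, using the Spark 1.x data-source syntax that was current when this thread was written; the file path, table name, and nested user column are assumptions for illustration.

```sql
-- Expose a Parquet file as a table (Spark 1.x data-source syntax).
CREATE TEMPORARY TABLE clicks
USING parquet
OPTIONS (path 'hdfs:///data/clicks.parquet');

-- Pin the table in Spark's in-memory columnar cache; this is the
-- SQL equivalent of sqlContext.cacheTable("clicks").
CACHE TABLE clicks;

-- Subsequent (possibly concurrent) queries read the cached columnar
-- data; nested structures are addressed with dot notation.
SELECT user.id AS user_id, COUNT(*) AS n
FROM clicks
GROUP BY user.id
ORDER BY n DESC
LIMIT 10;
```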
When to use Spark or Hive:
- Hive is still a great choice when low latency/multiuser support is not a requirement, such as for batch processing/ETL. Hive-on-Spark will narrow the time windows needed for such processing, but not to an extent that makes Hive suitable for BI.
- Spark SQL lets Spark users selectively use SQL constructs when writing Spark pipelines. It is not intended to be a general-purpose SQL layer for interactive/exploratory analysis. However, Spark SQL reuses the Hive frontend and metastore, giving you full compatibility with existing Hive data, queries, and UDFs. Spark SQL includes a cost-based optimizer, columnar storage, and code generation to make queries fast. At the same time, it scales to thousands of nodes and multi-hour queries using the Spark engine, which provides full mid-query fault tolerance. Performance is the biggest advantage of Spark SQL.
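To make the metastore-compatibility point concrete: with hive-site.xml on its classpath, the spark-sql shell (backed by HiveContext in Spark 1.x) sees the same tables Hive does, so HiveQL like the earlier sketch runs unchanged on the Spark engine. The UDF jar and class below are hypothetical placeholders.

```sql
-- Same tables, now executed by Spark instead of Tez/MapReduce.
SELECT event_type, COUNT(*) AS events
FROM events_orc
WHERE event_date >= '2016-06-01'
GROUP BY event_type;

-- Existing Hive UDFs can be registered and called as usual
-- (jar path and class name are placeholders).
ADD JAR hdfs:///libs/my-hive-udfs.jar;
CREATE TEMPORARY FUNCTION normalize_type AS 'com.example.hive.NormalizeType';
SELECT normalize_type(event_type) FROM events_orc LIMIT 5;
```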
Created 06-20-2016 04:15 PM
I'd say whenever you need Spark-specific features like ML, GraphX, or Streaming, use Spark as the ETL engine, since it provides an all-in-one solution for most use cases.
If you have no such requirements, use Hive on Tez.
If you have no Tez, use Hive on MapReduce.
In either case, Hive acts just as the metastore.
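In practice that fallback order is a single session-level setting in Hive; a minimal sketch (the query reuses the illustrative events_orc table from the first answer):

```sql
-- Preferred: Hive on Tez.
SET hive.execution.engine=tez;

-- Fallback when Tez is not installed: classic MapReduce.
-- SET hive.execution.engine=mr;

-- The same HiveQL runs unchanged on either engine.
SELECT event_date, COUNT(*) AS events
FROM events_orc
GROUP BY event_date;
```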
