Created on 10-24-2018 11:47 AM - edited 09-16-2022 06:49 AM
How is HCatalog different from Hive?
Created 10-24-2018 11:53 AM
HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools (Pig, MapReduce, and so on) to read and write data more easily. HCatalog's table abstraction presents users with a relational view of data in the Hadoop Distributed File System (HDFS), so they need not worry about where or in what format their data is stored. HCatalog supports reading and writing files in any format for which a SerDe (serializer/deserializer) can be written. Out of the box it supports the RCFile, CSV, JSON, SequenceFile, and ORC file formats; to use a custom format, you must provide the InputFormat, OutputFormat, and SerDe.
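To make that table abstraction concrete, here is a minimal sketch of a map-only MapReduce job that counts the rows of an HCatalog-managed table. The database name `default`, the table name `sample_table`, and the output path are placeholders, and it assumes the HCatalog core jar is on the job's classpath:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hive.hcatalog.data.HCatRecord;
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;

public class HCatRowCount {

    // The mapper sees every row as an HCatRecord, no matter how the
    // table is stored on disk (RCFile, ORC, ...).
    public static class CountMapper
            extends Mapper<WritableComparable, HCatRecord, Text, IntWritable> {
        private static final Text KEY = new Text("rows");
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(WritableComparable key, HCatRecord value, Context context)
                throws IOException, InterruptedException {
            context.write(KEY, ONE); // one ("rows", 1) pair per record
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "hcat-row-count");
        job.setJarByClass(HCatRowCount.class);

        // HCatInputFormat resolves the table's HDFS location, file format,
        // and SerDe from the Hive Metastore; the job never hard-codes a path.
        HCatInputFormat.setInput(job, "default", "sample_table");
        job.setInputFormatClass(HCatInputFormat.class);

        job.setMapperClass(CountMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(0); // map-only job

        job.setOutputFormatClass(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Because HCatInputFormat looks up the storage details in the metastore, the same driver keeps working if the table is later migrated from, say, CSV to ORC.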
Created 12-10-2018 01:58 AM
Let's start with Hive and then HCatalog.
Hive
⇢ uses HiveQL (HQL), a SQL-like query language, for processing (see the JDBC sketch after this list)
⇢ uses SerDes for serialization and deserialization
⇢ works best with huge volumes of data
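To make the HiveQL point concrete, here is a minimal JDBC sketch that submits a query to HiveServer2. The host `hive-host`, the default port 10000, the user, and the table `sample_table` are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQlExample {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer2 JDBC driver (hive-jdbc must be on the classpath).
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Standard HiveServer2 URL; host, port, and credentials are placeholders.
        String url = "jdbc:hive2://hive-host:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             // Plain HiveQL: Hive applies the table's SerDe transparently.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM sample_table")) {
            while (rs.next()) {
                System.out.println("row count: " + rs.getLong(1));
            }
        }
    }
}
```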
HCatalog
⇢ basically, the table and storage management layer for Hadoop (it supports several file formats such as RCFile, CSV, JSON, SequenceFile, and ORC)
⇢ is a sub-component of Hive that makes Hive tables available to ETL tools
⇢ is a tool for accessing metadata that resides in the Hive Metastore
⇢ acts as an API, exposing the metastore to external tools such as Pig
⇢ uses WebHCat, a web server that exposes HCatalog operations as a REST interface (see the sketch after this list)
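And since WebHCat is plain HTTP, any client can read the metastore through it. Here is a minimal sketch that lists the tables of a database over WebHCat's REST interface, assuming the default Templeton port 50111 and placeholder host and user names:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebHCatListTables {
    public static void main(String[] args) throws Exception {
        // WebHCat (Templeton) listens on port 50111 by default;
        // "webhcat-host" and "user.name=hive" are placeholders.
        String url = "http://webhcat-host:50111/templeton/v1/ddl/database/default/table"
                   + "?user.name=hive";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is JSON, e.g. {"tables":["sample_table",...],"database":"default"}
        System.out.println(response.body());
    }
}
```

The response being plain JSON is exactly what lets tools outside the JVM consume Hive metadata so easily.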
I think the focus should be on how they complement each other rather than on their differences.
I hope this helps! 🙂