How is HCatalog different from Hive?
HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools (Pig, MapReduce) to read and write data more easily. HCatalog's table abstraction presents users with a relational view of data in the Hadoop Distributed File System (HDFS), so users need not worry about where or in what format their data is stored. HCatalog can read and write files in any format for which a SerDe (serializer/deserializer) can be written. Out of the box, it supports the RCFile, CSV, JSON, SequenceFile, and ORC file formats. To use a custom format, you must provide the InputFormat, OutputFormat, and SerDe.
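To make the format point concrete, here is a sketch in HiveQL (the table, column, and `com.example.*` class names are invented for illustration): a built-in format is picked with STORED AS, while a custom format requires naming the SerDe and I/O classes explicitly.

```sql
-- Built-in format: just name it.
CREATE TABLE page_views (user_id STRING, url STRING)
STORED AS ORC;

-- Custom format: supply the SerDe plus InputFormat/OutputFormat classes.
-- (The com.example.* classes are placeholders, not a real library.)
CREATE TABLE raw_events (line STRING)
ROW FORMAT SERDE 'com.example.MySerDe'
STORED AS INPUTFORMAT 'com.example.MyInputFormat'
OUTPUTFORMAT 'com.example.MyOutputFormat';
```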
Let's start with Hive and then HCatalog.
Hive:
⇢ uses HiveQL (HQL), a SQL-like query language that is compiled into MapReduce (or Tez/Spark) jobs
⇢ uses SerDes for serialization and deserialization
⇢ works best with huge volumes of data
⇢ is basically the enterprise data warehouse (EDW) system for Hadoop; it supports several file formats such as RCFile, CSV, JSON, SequenceFile, and ORC
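As a sketch of that EDW role, a typical Hive interaction is just SQL-like HQL (the table here is made up):

```sql
-- Hive compiles this into a distributed job behind the scenes.
SELECT url, COUNT(*) AS hits
FROM page_views
GROUP BY url
ORDER BY hits DESC
LIMIT 10;
```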
HCatalog:
⇢ is a sub-component of Hive that exposes Hive tables to other tools, which makes it useful in ETL pipelines
⇢ is a tool for accessing the metadata that resides in the Hive metastore
⇢ acts as an API that exposes the metastore to external tools such as Pig and MapReduce
⇢ ships with WebHCat, a REST server for interacting with the Hive metastore over HTTP
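For example, once a table is defined in the Hive metastore, Pig can read and write it by name through HCatalog's loader and storer classes (table names here are made up; in old releases the package was org.apache.hcatalog.pig instead):

```pig
-- Run with: pig -useHCatalog script.pig
raw = LOAD 'page_views' USING org.apache.hive.hcatalog.pig.HCatLoader();
top = FILTER raw BY url IS NOT NULL;
STORE top INTO 'cleaned_views' USING org.apache.hive.hcatalog.pig.HCatStorer();
```

Note that Pig never needs to know the table's location or file format; HCatalog resolves both from the metastore.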
I think the focus should be on how they complement each other rather than on their differences.
I hope this helps! :)