I have a CSV file with 2 attributes, and also a Hive ORC-based table with the same attributes and data types. Without using a temporary Hive table, can I load this CSV file directly into the Hive ORC table? If there is no way to load the CSV file directly, can anyone help me convert the CSV file into ORC format, so that I can load that ORC file into the Hive ORC table?
Why don't you want a Hive external table? It is just a temporary entry in the metastore, without any significant overhead.
You can also use OrcStorage in Pig to write ORC files directly.
Similar functions are available for Spark.
Or you might be able to write a custom MapReduce job using an ORC OutputFormat.
As @Benjamin Leonhardi said, there is very little overhead to using an external table to do this. The only thing stored in the Hive Metastore is the schema of the CSV and a pointer to where the data lives on HDFS; the data itself stays where you put it. Using an external table is a very common way of solving this problem.
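To make that concrete, here is a minimal sketch of the external-table route. The table names, column names, and HDFS path are placeholders, and I'm assuming the two-column schema from the question:

CREATE EXTERNAL TABLE csv_staging (
  col1 STRING,
  col2 INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/maria_dev/csv_data';  -- directory holding the CSV file(s)

-- copy into the ORC table, which is assumed to already exist (STORED AS ORC)
INSERT INTO TABLE orc_target SELECT * FROM csv_staging;

When you are done you can DROP the staging table; because it is external, the underlying CSV files on HDFS are left untouched.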
Having said that, you can use Pig to load CSV data directly from HDFS. You have to define the schema for the CSV within the Pig script, and you can then write the data to a Hive ORC table. Be aware that the Hive ORC table must be created before you can write to it with Pig.
Here is a tutorial that covers this: http://hortonworks.com/hadoop-tutorial/how-to-use-basic-pig-commands/
Here is an example of loading CSV data via Pig and then writing it into a Hive ORC table:

STOCK_A = LOAD '/user/maria_dev/NYSE_daily_prices_A.csv' USING PigStorage(',')
    AS (exchange:chararray, symbol:chararray, date:chararray, open:float,
        high:float, low:float, close:float, volume:int, adj_close:float);
DESCRIBE STOCK_A;

-- assumes a Hive ORC table named "stock_a" with a matching schema already exists;
-- run pig with -useHCatalog so HCatStorer is on the classpath
STORE STOCK_A INTO 'stock_a' USING org.apache.hive.hcatalog.pig.HCatStorer();
Assume you have an ORC table "test" in Hive whose schema matches the CSV file "test.csv". You can then load and insert in one step with Spark:
sqlContext.read.format("com.databricks.spark.csv")
    .option("header", "true")
    .option("delimiter", ",")
    .load("/tmp/test.csv")
    .insertInto("test")
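Note that this snippet relies on the external spark-csv package, which on Spark 1.x you would add with something like --packages com.databricks:spark-csv_2.10:1.5.0 (the version and Scala suffix are assumptions; match them to your cluster). On Spark 2.0 and later, CSV support is built in, so spark.read.option("header", "true").csv("/tmp/test.csv") works without any extra package.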
If you're looking for a standalone tool to convert CSV to ORC, have a look at https://github.com/cartershanklin/csv-to-orc
It's a standalone Java tool that can run anywhere, including off of your Hadoop cluster. It supports custom null strings, row skipping, and basic Hive types (no complex types at the time of writing).