Created 10-28-2016 08:20 PM
I want to read an HQL file in a Spark job. The HQL creates a table by joining 3-4 other tables.
I don't want to write the SQL statement in the Spark job; instead I want to pass the HQL file as an argument to the Spark job and then run it.
Is this possible in Spark?
Created 10-31-2016 06:44 PM
We can use `scala.io.Source` from the Scala standard library to read the file:
sqlContext.sql(scala.io.Source.fromFile("/vzhome/agaram8/HQLScripts/count.hql").getLines().mkString(" "))
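Note that `sqlContext.sql` (or `spark.sql` in Spark 2.x) executes a single statement per call, so a script containing several statements has to be split first. A minimal sketch, assuming the script separates statements with semicolons; the helper name `loadHqlStatements` and the naive splitting are illustrative, not part of any Spark API:

```scala
import scala.io.Source

// Read an HQL script and split it into individual statements.
// Splitting on ";" is a simplification: it breaks if a semicolon
// appears inside a string literal or comment in the script.
def loadHqlStatements(path: String): Seq[String] = {
  val source = Source.fromFile(path)
  try {
    source.mkString
      .split(";")
      .map(_.trim)
      .filter(_.nonEmpty)
      .toSeq
  } finally {
    source.close()
  }
}

// In the Spark job, each statement would then be run in order:
// loadHqlStatements(args(0)).foreach(stmt => sqlContext.sql(stmt))
```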
Created 10-31-2016 02:38 PM
If you are looking to do it from a program, try something like the following:
http://stackoverflow.com/questions/31313361/sparksql-hql-script-in-file-to-be-loaded-on-python-code
Created 10-31-2016 02:04 PM
That link explains how to execute Hive SQL using the spark-sql shell, but I want to call the file programmatically, not through the shell.
Created 10-31-2016 02:47 PM
I went through this Stack Overflow link, but I don't see any 'open' API in Spark, and I'm getting a compiler error.
Created 10-31-2016 03:18 PM
"open" is not a Spark API call; it is a Python built-in. What language are you using? Replace open("file.hql").read() with the equivalent code in that language.
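In Scala, the closest equivalent is `scala.io.Source` from the standard library. A minimal sketch; the helper name `readHql` is illustrative, not a Spark API:

```scala
import scala.io.Source

// Scala equivalent of Python's open("file.hql").read():
// fromFile opens the file and mkString concatenates its contents
// into a single String, which is closed afterwards.
def readHql(path: String): String = {
  val source = Source.fromFile(path)
  try source.mkString finally source.close()
}

// In a Spark job the whole file would then be passed on, e.g.:
// sqlContext.sql(readHql("file.hql"))
```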
Created 10-31-2016 06:29 PM
I'm using Scala; I couldn't find an equivalent of "open" in Scala.
Created 10-24-2017 11:24 AM
Hi Amit Kumar Agarwal,
I am looking to run Hive HQL from Spark SQL. Could you please provide guidance on the same?
Thanks,
Deepesh
deepeshnema@gmail.com