
While this article provides a mechanism for setting up Spark with HiveContext, there are some limitations when using Spark with HiveContext. For example, Hive supports writing query results to HDFS using "INSERT OVERWRITE DIRECTORY":

INSERT OVERWRITE DIRECTORY 'hdfs://cl1/tmp/query' SELECT * FROM REGION;


The above command writes the result of the query to HDFS. However, if the same query is passed to Spark with HiveContext, it fails, since "INSERT OVERWRITE DIRECTORY" is not a supported feature in Spark; this is tracked in an Apache Spark JIRA. The same result can be achieved in Spark by using the spark-csv library (required in the case of Spark 1.x).

Below is a code snippet showing how to achieve the same result:

DataFrame df = hiveContext.sql("SELECT * FROM REGION");
df.write()
    .format("com.databricks.spark.csv")
    .option("delimiter", "\u0001")
    .save("/tmp/query");

The above code saves the result in HDFS under the directory /tmp/query. Note the delimiter used ("\u0001", Ctrl-A): it is the same default field delimiter that Hive currently uses.

The following dependency also needs to be added to pom.xml:
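A sketch of the Maven dependency for spark-csv follows. The Scala suffix (_2.10) and version (1.5.0) are assumptions and should be matched to the Scala and Spark versions in use:

```xml
<dependency>
    <!-- spark-csv from Databricks; the _2.10 suffix and version 1.5.0
         are assumptions -- adjust to your Scala/Spark versions -->
    <groupId>com.databricks</groupId>
    <artifactId>spark-csv_2.10</artifactId>
    <version>1.5.0</version>
</dependency>
```

Note that from Spark 2.x onwards, CSV support is built in (format "csv"), so this external dependency is only needed for Spark 1.x.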

Version history: revision 1 of 1, last updated 06-30-2017 09:57 AM.