I am running Spark SQL on Spark 1.6 in Scala, invoked from a shell script.
When any step fails while creating a DataFrame or inserting data into a Hive table, the steps that follow it still execute.
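For illustration, the job is structured roughly like the sketch below (the table, column, and database names are placeholders taken from the errors further down, not the real ones). Because each statement is evaluated on its own when the script is fed to spark-shell, a failure in one statement does not stop the ones after it:

```scala
// Rough shape of the script; executed statement by statement,
// e.g. via spark-shell -i job.scala from the calling shell script.
val DF1 = sqlContext.sql("SELECT col1, col2, batchdate FROM source_db.source_table")

// This still runs even when the statement above failed and DF1 was never
// created, which is where "not found: value DF1" comes from.
DF1.registerTempTable("locationtable")

// And this still runs as well, producing "Table not found: locationtable".
sqlContext.sql(
  "INSERT OVERWRITE TABLE target_db.target_table PARTITION (batchdate) " +
  "SELECT * FROM locationtable")
```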
Below are the errors:
org.apache.spark.sql.AnalysisException: Partition column batchdate not found in existing columns
org.apache.spark.sql.AnalysisException: cannot resolve 'batchdate' given input columns:
error: not found: value DF1
org.apache.spark.sql.AnalysisException: Table not found: locationtable;
How can I make my Spark SQL job fail as soon as a query returns an error, skip the subsequent queries, and hand a non-zero status back to the calling shell script?
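This is a minimal sketch of the behaviour I am after, assuming the statements run in spark-shell with `sqlContext` already available; the helper name `runOrExit` and the queries are placeholders I made up for illustration. Each step is wrapped so the first failure terminates the job with a non-zero exit code:

```scala
import scala.util.{Failure, Success, Try}

// Hypothetical helper: run one step, abort the whole job on the first failure.
def runOrExit[T](stepName: String)(body: => T): T =
  Try(body) match {
    case Success(result) => result
    case Failure(e) =>
      System.err.println(s"Step '$stepName' failed: ${e.getMessage}")
      sys.exit(1) // ends the JVM with a non-zero status
  }

val df1 = runOrExit("create DF1") {
  sqlContext.sql("SELECT col1, col2, batchdate FROM source_db.source_table")
}

runOrExit("register and insert") {
  df1.registerTempTable("locationtable")
  sqlContext.sql(
    "INSERT OVERWRITE TABLE target_db.target_table PARTITION (batchdate) " +
    "SELECT * FROM locationtable")
}
```

My understanding is that the non-zero status would then show up in the shell script as `$?` right after the spark-shell call, so the script can stop there. Is this the right approach, or is there a cleaner way to abort on the first error?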
Thanks!!