Hm, I am seeing this on both 5.6 and 5.7.
What are you using as spark.master? I know it depends on the version of Spark, but I'm using "yarn".
Interesting tip: if you run "set;" in beeline, it will print out all the session settings (including the Hadoop ones, if configured properly).
Yes, I was wondering the same thing! Can anyone else confirm that this is working?
To pinpoint the problem, you could try testing your Spark setup by running:
MASTER=yarn-cluster /usr/lib/spark/bin/run-example SparkPi 10
MASTER=yarn-client /usr/lib/spark/bin/run-example SparkPi 10
run-example is a wrapper script that runs the jobs via spark-submit.
For me, yarn-cluster mode does not print any results locally, but I can confirm that the job works and that it stores the result in the cluster. So if it doesn't fail, and you can see that the Spark job was submitted and executed, the problem could still be on the Hive side.
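In yarn-cluster mode the driver runs inside a YARN container, so SparkPi's "Pi is roughly ..." line ends up in the container logs rather than on your console. A rough sketch of how to pull it out (assuming YARN log aggregation is enabled; the application ID below is a placeholder — take the real one from the spark-submit output or from `yarn application -list`):

```shell
# Run the example in yarn-cluster mode; the driver (and its stdout)
# runs in a YARN container, not in your local shell.
MASTER=yarn-cluster /usr/lib/spark/bin/run-example SparkPi 10

# List finished applications to find the application ID of the job.
yarn application -list -appStates FINISHED

# Fetch the aggregated container logs and look for the driver output.
# (application_1234567890123_0001 is a placeholder ID.)
yarn logs -applicationId application_1234567890123_0001 | grep "Pi is roughly"
```

If the grep finds the result line, Spark on YARN itself is healthy and the issue is more likely in the Hive-on-Spark configuration.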