Created 08-04-2016 07:48 AM
I added the config property spark.local.dir, and that resolved the issue below; I can now select from tables when connecting through port 10015 in beeline.
I set it to /tmp, since the directory needs to be writable by the spark process. I also tried creating a sub-directory /tmp/spark-tmp and changing its ownership to spark:hadoop, but Spark still rejected it, possibly because the directory was not executable (traversable) by the spark user.
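For reference, a minimal sketch of the change that worked for me, assuming the Thrift Server runs as the spark user and spark.local.dir is set in spark-defaults.conf (the group and exact paths are only illustrative):

# spark-defaults.conf: point Spark's local scratch space at a writable directory
spark.local.dir /tmp

# If you prefer a dedicated sub-directory instead, the spark user needs both
# the write and the execute (traverse) bits on it:
mkdir -p /tmp/spark-tmp
chown spark:hadoop /tmp/spark-tmp
chmod 770 /tmp/spark-tmp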
From /var/log/spark/spark-hive-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-mdzusvpclhdp001.mdz.local.out:
16/08/03 14:58:14 ERROR DiskBlockManager: Failed to create local dir in /tmp/spark-tmp. Ignoring this directory.
java.io.IOException: Failed to create a temp directory (under /tmp/spark-tmp) after 10 attempts!
Created 12-26-2016 10:25 PM
Your spark user must be able to create folders under /tmp/spark-tmp. Based on your comments, the ownership change did not take effect. Grant recursive ownership of /tmp so that it covers all sub-folders, whether they exist already or are created at runtime:
chown spark -R /tmp
This assumes your user is spark.
However, I really don't like the idea of using /tmp for this (sysadmin taste). Consider using a directory created under SPARK_HOME instead.
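As a sketch (the directory name is my own choice, and this assumes SPARK_HOME is set in your shell; adjust the absolute path to your install):

# create a dedicated scratch directory owned by the spark user
mkdir -p "$SPARK_HOME/local-scratch"
chown -R spark:hadoop "$SPARK_HOME/local-scratch"
chmod 750 "$SPARK_HOME/local-scratch"

# then point Spark at its absolute path in spark-defaults.conf, for example:
# spark.local.dir /path/to/spark/local-scratch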