I downloaded the provided zip file, extracted drivers.csv, and uploaded it to /user/maria_dev per the instructions. Creating the Hive database works without issue, but loading the CSV data fails with this error:

java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.hadoop.hive.ql.metadata.HiveException: Access denied: Unable to move source hdfs://sandbox-hdp.hortonworks.com:8020/user/maria_dev/drivers.csv to destination hdfs://sandbox-hdp.hortonworks.com:8020/warehouse/tablespace/managed/hive/temp_drivers/base_0000001: Permission denied: user=hive, access=WRITE, inode="/user/maria_dev":maria_dev:hdfs:drwxr-xr-x

The load does create /warehouse/tablespace/managed/hive/temp_drivers/base_0000001/drivers.csv, so it works to some extent, yet Data Analytics Studio still shows the table as empty.

Tutorial URL: https://hortonworks.com/tutorial/how-to-process-data-with-apache-hive/

This isn't my first issue with the HDP VMs and the tutorials, and I'm rather disappointed that things don't work as they should.

Update: I edited hdfs-site.xml to disable permission checking on HDFS. That fixed the permissions issue above.
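For context on why this fails: LOAD DATA on a managed table moves (rather than copies) the source file, so the hive service user needs WRITE access on the source directory /user/maria_dev, which its drwxr-xr-x mode denies. The update above (disabling permission checking) corresponds to a change along these lines in hdfs-site.xml, followed by an HDFS restart from Ambari. This is a sketch of that workaround, reasonable only on a throwaway sandbox:

```
<!-- hdfs-site.xml: turn off HDFS permission checking entirely.
     Acceptable on a local sandbox VM; not advisable on a shared cluster. -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
```

A narrower alternative, if you want to keep permissions on, would be to open up the source directory (e.g. running hdfs dfs -chmod 777 /user/maria_dev as the hdfs superuser) or to upload the CSV to a directory the hive user can already write to.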
I had this error too. The only solution I found was to shut down and reboot. Apparently, even after you shut down the Spark context in Zeppelin, it sometimes persists and hangs. The workaround for me was to stop all services in Ambari, reboot the system, and start everything again; after that the session is flushed. (Found here: https://stackoverflow.com/questions/35515120/why-does-sparkcontext-randomly-close-and-how-do-you-restart-it-from-zeppelin )