I am trying to load a PySpark DataFrame into an Impala table using the JDBC connector. However, the df.write statement fails because the generated CREATE TABLE statement wraps the column names in quotation marks, which Impala rejects.
Do you have any idea how to get rid of these quotation marks? If not, what would be an alternative approach to loading a DataFrame into an Impala table?
I also tried
spark.sql('select identifier_id as identifier from tempView').write.jdbc(...), but that fails with the error "File /tmp/hive does not exist".
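One workaround I am considering (a sketch, not from any official Impala documentation) is to create the target table myself with an unquoted DDL statement, and then have Spark append to the existing table with mode="append", so the JDBC writer never generates its own quoted CREATE TABLE. The table and column names below are made-up examples:

```python
# Sketch of a workaround: pre-create the Impala table with plain
# (unquoted) column names, then append from Spark so no CREATE TABLE
# statement is generated by the JDBC writer at all.

def impala_create_table_sql(table, columns):
    """Build a CREATE TABLE statement without quoting column names.

    columns is a list of (name, impala_type) pairs.
    """
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    return f"CREATE TABLE IF NOT EXISTS {table} ({cols})"

ddl = impala_create_table_sql(
    "my_db.my_table",                       # hypothetical table name
    [("identifier", "BIGINT"), ("value", "STRING")],
)
print(ddl)

# Run `ddl` against Impala first (e.g. via impala-shell or a client
# library), then append from Spark into the now-existing table:
#
#   df.write.jdbc(url=jdbc_url, table="my_db.my_table",
#                 mode="append", properties=connection_properties)
```

With mode="append", Spark only issues INSERTs against the existing table, so the quoted-identifier problem in the generated DDL should not arise.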