I want to use Spark's JDBC connection to write a DataFrame to Oracle. My DataFrame has a string column that is very long, at least longer than the default 255 characters Spark allocates when creating the schema. How can I still write to the Oracle table? Does it work if I manually create the schema first with CLOB datatypes? If so, how can I get Spark to only `TRUNCATE` the table instead of overwriting it?
Hi @Georg Heiler,
this is possible by specifying the datatypes of the JDBC columns, so that when the table gets created it is created with the appropriate datatypes.
This was addressed in Spark 2.2, where you can specify target datatypes for JDBC columns via the `createTableColumnTypes` option.
In earlier versions it is still not possible, as stated in the JIRA (https://issues.apache.org/jira/browse/SPARK-10849).
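A minimal PySpark sketch of how this could look (assumes Spark 2.2+; the URL, table, credentials, and column name `long_text` are placeholders, and `build_jdbc_options` is a hypothetical helper, not a Spark API):

```python
# Hypothetical helper that collects the JDBC write options; the actual
# write still needs a live SparkSession and the Oracle JDBC driver on
# the classpath.
def build_jdbc_options(url, table, user, password, column_types, truncate=True):
    """Collect options for df.write.format('jdbc')."""
    return {
        "url": url,
        "dbtable": table,
        "user": user,
        "password": password,
        # Override Spark's default VARCHAR(255) mapping for string columns
        "createTableColumnTypes": column_types,
        # With overwrite mode, truncate=true makes Spark issue TRUNCATE TABLE
        # instead of dropping and recreating the table, so the manually
        # created schema (e.g. with CLOB columns) is preserved
        "truncate": str(truncate).lower(),
    }

opts = build_jdbc_options(
    url="jdbc:oracle:thin:@//dbhost:1521/ORCL",   # placeholder connection
    table="MY_SCHEMA.MY_TABLE",                   # placeholder table
    user="app_user",
    password="secret",
    column_types="long_text CLOB",                # map the long column to CLOB
)

# Applying the options (requires a SparkSession `spark` and a DataFrame `df`):
# writer = df.write.format("jdbc").mode("overwrite")
# for k, v in opts.items():
#     writer = writer.option(k, v)
# writer.save()
```

Note that `truncate` only takes effect together with overwrite mode; with it set, Spark keeps the existing table definition rather than recreating it with its default type mappings.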