Created 06-07-2017 09:31 AM
Hello,
I'm writing a program that copies data from Hive to an Oracle table.
The copy goes well except for one thing: it changes my Oracle columns' datatypes.
In Oracle, I have a schema like this one:
COL1 VARCHAR2(10 CHAR) NOT NULL,
COL2 VARCHAR2(50 CHAR) NOT NULL,
COL3 VARCHAR2(15 CHAR) NOT NULL,
COL4 NUMBER(10) NOT NULL,
COL5 TIMESTAMP NOT NULL,
COL6 TIMESTAMP,
COL7 NUMBER(10),
COL8 VARCHAR2(1 CHAR),
COL9 DATE,
COL10 DATE,
COL11 DATE
But after Spark has finished the copy, the Oracle schema has changed to this:
COL1 VARCHAR2(255 CHAR),
COL2 VARCHAR2(255 CHAR),
COL3 VARCHAR2(255 CHAR),
COL4 NUMBER(10,0),
COL5 TIMESTAMP(6),
COL6 TIMESTAMP(6),
COL7 NUMBER(10,0),
COL8 NUMBER(10,0),
COL9 TIMESTAMP(6),
COL10 TIMESTAMP(6),
COL11 TIMESTAMP(6)
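For context, the copy is roughly the following (a simplified sketch in Scala: the JDBC URL, credentials, and table names are placeholders for my real ones):

import java.util.Properties
import org.apache.spark.sql.SaveMode

// Connection details (placeholders).
val props = new Properties()
props.setProperty("user", "oracle_user")
props.setProperty("password", "oracle_password")
props.setProperty("driver", "oracle.jdbc.OracleDriver")

// Read the source table from Hive.
val df = spark.table("my_db.my_table")

// Overwrite mode drops the target table and recreates it
// from the DataFrame schema before inserting the rows.
df.write
  .mode(SaveMode.Overwrite)
  .jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "MYSCHEMA.MYTABLE", props)

As far as I understand, that recreate step uses Spark's default Oracle type mappings, which would explain why every string column comes back as VARCHAR2(255) no matter what I do.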
Even if I change the columns' datatypes in Hive, the result is the same.
Is it possible to write from Hive to Oracle without Spark modifying the Oracle schema?
Thanks!
Created 06-13-2017 07:50 AM
Hello @adrien555, thanks for your post.
I currently have the same problem and haven't found any solution to avoid it.
Any ideas?
Created 06-13-2017 11:46 PM