I realize that Kudu and pyodbc aren't an officially supported combination, but I'm wondering if someone has run into this before. I'm using pyodbc to insert rows into a Kudu table; the DDL, the pyodbc call, and the error are below.
The issue: when the primary key column ("id") is type INT, pyodbc binds the Python int parameter as BIGINT, and Impala rejects the insert because casting that BIGINT expression back down to INT could lose precision.
If I define the "id" column as BIGINT, all is fine. The inserted "id" values will never approach BIGINT range, though, so I'd prefer not to use that type.
Is there something wrong below, or is there a way to control the conversion that is attempted?
CREATE TABLE genomics.pipeline_status (
  id INT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION,
  experiment_id INT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION,
  PRIMARY KEY (id)
)
PARTITION BY HASH (id) PARTITIONS 3
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='<server>.choa.org');
# Kerberos ticket already acquired
conn = pyodbc.connect('DSN=IMPALA_DEV', autocommit=True)
#print str(connection)
with conn.cursor() as cur:
    cur.execute(
        "insert into genomics.pipeline_status (id, experiment_id) values (?,?)",
        (int(id), int(experiment_id)))
getting sync rundb results FAILED:
('HY000', "[HY000] [Cloudera][ImpalaODBC] (110) Error while executing a query in Impala:
[HY000] : AnalysisException: Possible loss of precision for target table 'genomics.pipeline_status'.
Expression 'cast(5 as bigint)' (type: BIGINT) would need to be cast to INT for column 'id'
(110) (SQLExecDirectW)")
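One workaround I'd sketch (untested against a live Impala/Kudu cluster, so treat it as an assumption): wrap each parameter marker in an explicit CAST so the expression Impala analyzes is already INT, rather than the BIGINT that the ODBC driver binds Python ints as. The `insert_sql` helper below is mine, not from any library:

```python
def insert_sql(table, columns, types):
    # Build a parameterized INSERT whose markers are cast to the target
    # column types, e.g. cast(? as int), so Impala never sees a BIGINT
    # expression that would need a narrowing cast to fit the INT column.
    markers = ", ".join("cast(? as %s)" % t for t in types)
    return "insert into %s (%s) values (%s)" % (
        table, ", ".join(columns), markers)

sql = insert_sql("genomics.pipeline_status",
                 ["id", "experiment_id"],
                 ["int", "int"])
# Then, on the same cursor as before:
# cur.execute(sql, (int(id), int(experiment_id)))
```

pyodbc also has `cursor.setinputsizes()` for hinting bound parameter types (e.g. `pyodbc.SQL_INTEGER`), which might avoid the string surgery entirely, but whether the Cloudera Impala ODBC driver honors it is another assumption worth testing.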