Error: PutDatabaseRecord + Hive connection pool

Contributor

Hello,

I'm using the PutDatabaseRecord processor with a Hive connection pool to insert data into my table, but I get the following error:

Error and log file attached as screenshots: 77415-error.png, 77416-fichierlog.png

I used the same connection pool with PutHiveQL and everything works fine.

Can someone help me, please?

Thank you


Master Guru

(Accepted solution; the full text of this reply is only visible to logged-in community members.)

Contributor

@Matt Burgess, thank you for your response.

I'm using hive-jdbc-2.1.0.jar.

So the PutDatabaseRecord processor does not offer insert functionality with a Hive connection pool? If that is the case, why is this connection pool offered in the processor? Can we do update and delete queries with PutDatabaseRecord + Hive?

Master Guru

The HiveConnectionPool is a special type of DBCPConnectionPool, so instances of it get listed alongside all the other connection pools. It is not the connection pool that doesn't support these operations; it is the driver itself.
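
As a minimal sketch of that driver limitation (assuming hive-jdbc on the classpath and a HiveServer2 at the placeholder URL jdbc:hive2://localhost:10000/default), the transaction calls PutDatabaseRecord makes on a plain JDBC connection are the ones the Hive driver rejects:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveCommitCheck {
    public static void main(String[] args) {
        // Placeholder HiveServer2 URL; adjust host/port/database for your cluster.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url)) {
            // PutDatabaseRecord manages transactions on its JDBC connection.
            // The Hive JDBC driver does not implement these calls, so they fail
            // (typically with a "Method not supported" SQLException), which is
            // why the same pool still works fine for PutHiveQL.
            conn.setAutoCommit(false);
            conn.commit();
        } catch (SQLException e) {
            System.out.println("Rejected by the Hive driver: " + e.getMessage());
        }
    }
}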

What format is the input data in? You should be able to use ConvertRecord with a FreeFormTextRecordSetWriter to generate SQL statements from your input data (don't forget the semicolon at the end of each line), then send those to PutHiveQL.
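
For illustration, assuming the incoming CSV has columns id, name, and amount and the target table is my_table (all placeholder names, adjust to your schema), the writer's Text property could be something like:

INSERT INTO my_table VALUES (${id}, '${name}', ${amount});

ConvertRecord would then emit one such statement per record for PutHiveQL to execute.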

Contributor

@Matt Burgess

I have a CSV file. I will try this approach.

Thank you! I accept this answer 🙂

New Contributor

Hi @mburgess,


I was looking for a processor that would let me pull a query out of a field of the incoming flow file, instead of turning the entire flow file into a query. PutDatabaseRecord allowed me to do that (which is how I discovered that the Hive connection does not support an explicit call of conn.commit()).


I want to keep as much of the flow file intact as possible. Is there a way to do that?


Thank you.