
Error: Writing Parquet data to PostgreSQL using PutDatabaseRecord processor


I am trying to write data read from AWS S3 to a PostgreSQL table using the PutDatabaseRecord processor, but I am getting the error below:

 
putdatabaserecord.error: INT96 not yet implemented.
 
A screenshot of the flow is here:
 
1.JPG
 
A screenshot of the PutDatabaseRecord settings is as follows:
2.JPG
 
Note: I have checked S3 connectivity; both listing and fetching work correctly. I also checked the DB connection, which works fine with a test load of a small CSV file into a table.

How can I get this Parquet load to succeed?

Re: Error: Writing Parquet data to PostgreSQL using PutDatabaseRecord processor

@Raj2cool16  Can you share your schema?  

 


 


If this answer resolves your issue or allows you to move forward, please ACCEPT the solution and close this topic. If you have further questions on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post.


 


Thanks,



Steven


Re: Error: Writing Parquet data to PostgreSQL using PutDatabaseRecord processor


This is the schema at target:

 

CREATE TABLE dev.ctrmovement_test_ab2
(
    "T_CODE"             text,
    "C_NO"               text,
    "L_NO"               text,
    "INT_C_NO"           bigint,
    "C_CATEGORY"         text,
    "C_WEIGHT"           float8,
    "V_SERVICE"          text,
    "V_NAME"             text,
    "V_NO"               text,
    "V_LOCATION"         text,
    "INT_VIA_NO"         bigint,
    "B_NO"               text,
    "FPD"                text,
    "C_SIZE"             bigint,
    "C_HEIGHT"           bigint,
    "C_TYPE"             text,
    "F_EMPTY"            text,
    "I_CD"               text,
    "C_ENTRY_MODE"       text,
    "C_EXIT_MODE"        text,
    "IV"                 text,
    "FROM_LOCATION"      text,
    "TO_LOCATION"        text,
    "E_NAME"             text,
    "Q_NAME"             text,
    "Q_COMPLETION_TIME"  timestamp,
    "Y_COMPLETION_TIME"  timestamp,
    "CAL_C_TIME"         timestamp,
    "O_NAME"             text,
    "O_FULL_NAME"        text,
    "M_TYPE"             text,
    "POL"                text,
    "POD"                text,
    "COA"                text,
    "S_LINE"             text,
    "PLANNED_LOCATION"   text,
    "T_FLG"              text,
    "ENTRY_DATE"         timestamp,
    "EXIT_DATE"          timestamp
);

 

The Parquet file's schema is exactly the same, since the data was fetched from the same database and table using AWS DMS, which produced the Parquet file.

 

Any idea why this is happening?
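Note that the table schema above has several `timestamp` columns, and some Parquet writers (including, in some configurations, AWS DMS and older Hive/Impala) may emit timestamps with the deprecated INT96 physical type, which triggers this exact error. One possible workaround, sketched below under the assumption that pyarrow is available (file names are illustrative, and this is not confirmed as the thread's actual resolution), is to rewrite the file so timestamps use the standard INT64 microsecond encoding:

```python
# Sketch of a possible workaround: round-trip the Parquet file through
# Arrow, coercing timestamps from legacy INT96 to standard INT64 micros.
# Assumes pyarrow; file names "dms_output.parquet"/"fixed.parquet" are
# illustrative stand-ins.
import datetime
import pyarrow as pa
import pyarrow.parquet as pq

# Stand-in for a DMS-produced file with legacy INT96 timestamps.
pq.write_table(
    pa.table({"ENTRY_DATE": [datetime.datetime(2020, 1, 1, 12, 0)]}),
    "dms_output.parquet",
    use_deprecated_int96_timestamps=True,
)

# Read it back and rewrite with timestamps coerced to microsecond
# precision, which Parquet stores as standard INT64.
table = pq.read_table("dms_output.parquet")
pq.write_table(table, "fixed.parquet", coerce_timestamps="us")

# Verify the rewritten file no longer uses INT96.
col = pq.ParquetFile("fixed.parquet").schema.column(0)
print(col.name, col.physical_type)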


Re: Error: Writing Parquet data to PostgreSQL using PutDatabaseRecord processor

@Raj2cool16  I was looking for the Avro schema in the Record Reader. I think you have another post on this same issue; I asked there as well for screenshots of the full config and schemas. Once we can see that info in the post, the community can better determine what adjustment is needed to resolve the original issue.

 


 


If this answer resolves your issue or allows you to move forward, please ACCEPT the solution and close this topic. If you have further questions on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post.


 


Thanks,



Steven

Re: Error: Writing Parquet data to PostgreSQL using PutDatabaseRecord processor


Hi @stevenmatison, it's not the same issue. This error showed up when trying to load a Parquet file into a PostgreSQL table, whereas the other post concerns an error loading a CSV file into a PostgreSQL table.

 

I'll share screenshots of the full PutDatabaseRecord and CSVReader configuration in the other post.

 

Here are screenshots of the full PutDatabaseRecord and ParquetReader configuration.

 

PutDatabaseRecord SS Part 1:

3.JPG

 

PutDatabaseRecord SS Part 2:

4.JPG

 

ParquetReader part 1:

ParquetReader Part 1

 

ParquetReader Part 2:

ParquetReader Part 2

 

Note: If you see a discrepancy between the table name in my earlier queries and in the screenshots, please ignore it; I changed the name simply to mask it.
