Support Questions
Find answers, ask questions, and share your expertise

PostgreSQL count higher than Spark dataframe

Solved

New Contributor

When I write a DataFrame to PostgreSQL using Spark Scala, I have noticed that the count in PostgreSQL is always higher than what I get in Spark. The count in the Spark DataFrame itself is correct and matches what I expect.

I have even tried loading the data in monthly parts, but the count in PostgreSQL is still higher than in the Spark DataFrame.

val df = sqlContext.read.option("compression", "snappy").parquet("/user-data/xyz/input/TABLE/")
val connection = "jdbc:postgresql://localhost:5449/adb?user=aschema&password=abc"
val prop = new java.util.Properties
prop.setProperty("driver", "org.postgresql.Driver")
df.write.mode("overwrite").jdbc(url = connection, table = "adb.aschema.TABLE", connectionProperties = prop)
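For reference, a first check is to compute per-month counts in Spark before writing and compare them against a matching GROUP BY run directly in PostgreSQL. A minimal local sketch with made-up data (the column name evnt_month, the values, and the use of SparkSession instead of the older sqlContext are all assumptions, not the real table):

```scala
import org.apache.spark.sql.SparkSession

// Minimal local sketch; "evnt_month" and the rows are placeholders.
val spark = SparkSession.builder().master("local[*]").appName("count-check").getOrCreate()
import spark.implicits._

val df = Seq((1, 1), (2, 1), (3, 2)).toDF("id", "evnt_month")

// Per-month counts in Spark, to compare against
//   SELECT evnt_month, count(*) FROM aschema."TABLE" GROUP BY evnt_month;
// run in PostgreSQL after the write.
val perMonth = df.groupBy("evnt_month").count().orderBy("evnt_month")
val counts = perMonth.collect().map(r => (r.getInt(0), r.getLong(1)))
println(counts.mkString(", "))  // (1,2), (2,1)

spark.stop()
```

Whichever month disagrees first is the place to drill into.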
1 ACCEPTED SOLUTION

Re: PostgreSQL count higher than Spark dataframe

New Contributor

Solved it.

I noticed that the write to PostgreSQL was accurate when I read the parquet data with the second option below.

parquet("/user-data/xyz/input/TABLE/*")             // WRONG counts in PostgreSQL

parquet("/user-data/xyz/input/TABLE/evnt_month=*")  // correct counts in PostgreSQL

If someone is aware of such a problem, please comment.
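One plausible explanation (an assumption, not confirmed from this thread): a bare glob such as TABLE/* matches every entry under TABLE, including any stale or non-partition directories, and globbed reads also bypass partition discovery, so the evnt_month partition column is dropped from the DataFrame. A small local sketch showing the column difference between the two read styles:

```scala
import org.apache.spark.sql.SparkSession

// Local sketch: a glob over the partition directories drops the partition
// column, while reading the base path keeps it via partition discovery.
val spark = SparkSession.builder().master("local[*]").appName("glob-check").getOrCreate()
import spark.implicits._

// Write a tiny table partitioned by evnt_month into a temp directory.
val tmp = java.nio.file.Files.createTempDirectory("spark").toString + "/tbl"
Seq((1, "a", 1), (2, "b", 1), (3, "c", 2))
  .toDF("id", "v", "evnt_month")
  .write.partitionBy("evnt_month").parquet(tmp)

val basePath = spark.read.parquet(tmp)                    // evnt_month inferred
val globbed  = spark.read.parquet(tmp + "/evnt_month=*")  // leaf dirs only

println(basePath.columns.sorted.mkString(","))  // evnt_month,id,v
println(globbed.columns.sorted.mkString(","))   // id,v

spark.stop()
```

The row counts are the same here; the point is that the schemas differ, so any downstream logic keyed on evnt_month behaves differently between the two reads.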

3 REPLIES

Re: PostgreSQL count higher than Spark dataframe

@Team Spark

I recommend you try to find a small subset of the data where the counts do not match: for example, compare monthly, then daily, then hourly counts to narrow down which rows are missing or duplicated in PostgreSQL. Once you have a small enough window, you can inspect the actual rows and hopefully spot the cause.

HTH
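Once a small window is isolated, the row-level differences can be computed with exceptAll (Spark 2.4+), which keeps duplicates and so surfaces extra copies of a row. A minimal sketch with made-up stand-ins (source plays the Spark DataFrame; target plays the rows read back from PostgreSQL over JDBC, here with one spurious duplicate):

```scala
import org.apache.spark.sql.SparkSession

// Made-up stand-ins for the real source/target tables.
val spark = SparkSession.builder().master("local[*]").appName("diff-check").getOrCreate()
import spark.implicits._

val source = Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "v")
val target = Seq((1, "a"), (2, "b"), (3, "c"), (3, "c")).toDF("id", "v")

// exceptAll keeps duplicates, so an extra copy of a row shows up in the diff.
val extraInTarget     = target.exceptAll(source)
val missingFromTarget = source.exceptAll(target)

println(extraInTarget.count())     // 1  (the duplicated (3, "c"))
println(missingFromTarget.count()) // 0

spark.stop()
```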

Re: PostgreSQL count higher than Spark dataframe

New Contributor

@Felix Albani The table has millions of records, so it is very difficult to identify the missing or extra rows in PostgreSQL.

Is there any known issue in Spark that causes counts not to match when writing to PostgreSQL?
