PostgreSQL count higher than Spark dataframe

Explorer

When I write a DataFrame to PostgreSQL using Spark (Scala), the row count in PostgreSQL is always higher than the count I get in Spark. The count of the Spark DataFrame itself is correct and expected.

I have even tried loading the data in monthly batches, but the count in PostgreSQL is still higher than in the Spark DataFrame:

val df = sqlContext.read.parquet("/user-data/xyz/input/TABLE/")
val connection = "jdbc:postgresql://localhost:5449/adb?user=aschema&password=abc"
val prop = new java.util.Properties
prop.setProperty("driver", "org.postgresql.Driver")
df.write.mode("overwrite").jdbc(url = connection, table = "adb.aschema.TABLE", connectionProperties = prop)
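A quick way to see where the extra rows come from is to read the table back over the same JDBC connection and compare it against the source DataFrame. A minimal sketch, reusing the vals above and assuming a hypothetical unique key column "id":

```scala
import org.apache.spark.sql.functions.col

// Read the table back from PostgreSQL using the same connection settings.
val written = sqlContext.read.jdbc(connection, "adb.aschema.TABLE", prop)

// Compare overall counts first.
println(s"spark: ${df.count()}, postgres: ${written.count()}")

// If PostgreSQL is higher, look for keys inserted more than once
// ("id" is a hypothetical unique key column -- substitute your own).
written.groupBy("id")
  .count()
  .filter(col("count") > 1)
  .show()
```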
3 REPLIES


@Team Spark

I recommend you try to find a small subset of the data where the counts do not match: for example, load the data monthly, then daily, then hourly, to narrow down the window and, hopefully, pinpoint which rows are extra or missing on the PostgreSQL side. With a small enough window you can review the actual rows and spot the difference.
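The narrowing above can also be done in one pass: compute per-period counts on both sides and keep only the periods that disagree. A sketch, reusing the connection settings from the question and assuming a hypothetical date column "event_date":

```scala
import org.apache.spark.sql.functions.col

// Per-day counts on both sides ("event_date" is a hypothetical date column).
val srcCounts = df.groupBy("event_date").count().withColumnRenamed("count", "src_count")
val pgCounts = sqlContext.read.jdbc(connection, "adb.aschema.TABLE", prop)
  .groupBy("event_date").count().withColumnRenamed("count", "pg_count")

// Keep only the days where the two counts disagree -- drill into these first.
// The full outer join keeps days present on one side only; fill those with 0.
srcCounts.join(pgCounts, Seq("event_date"), "full_outer")
  .na.fill(0, Seq("src_count", "pg_count"))
  .filter(col("src_count") =!= col("pg_count"))
  .orderBy("event_date")
  .show()
```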

HTH

Explorer

@Felix Albani The table has millions of records, so it's very difficult to identify the missing or extra rows in PostgreSQL.

Is there any known issue in Spark that would cause the PostgreSQL count not to match?
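For reference, whole-row set operations can surface the extra rows without inspecting millions of records by hand. A sketch, assuming both sides share the same schema (note that except uses set semantics, so exact duplicate rows collapse):

```scala
// Rows present in PostgreSQL but absent from the Spark source.
val written = sqlContext.read.jdbc(connection, "adb.aschema.TABLE", prop)
written.except(df).show()

// If the extras are exact duplicates of source rows, except won't surface
// them; compare counts after dropDuplicates instead.
println(s"distinct spark: ${df.dropDuplicates().count()}, " +
        s"distinct postgres: ${written.dropDuplicates().count()}")
```

If the distinct counts match while the raw counts differ, the write produced duplicate copies of source rows rather than extra data.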
