PostgreSQL count higher than Spark dataframe
Labels: Apache Spark
Created 07-10-2018 06:56 AM
When I write a DataFrame to PostgreSQL using Spark Scala, the row count in PostgreSQL always ends up higher than the count I get in Spark. The count of the Spark DataFrame itself is correct and matches what I expect.
I have even tried loading the data in monthly chunks, but the count in PostgreSQL is still higher than the Spark DataFrame count.
```scala
val df = sqlContext.read.option("compression", "snappy").parquet("/user-data/xyz/input/TABLE/")
val connection = "jdbc:postgresql://localhost:5449/adb?user=aschema&password=abc"
val prop = new java.util.Properties
prop.setProperty("driver", "org.postgresql.Driver")
df.write.mode("Overwrite").jdbc(url = connection, table = "adb.aschema.TABLE", connectionProperties = prop)
```
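As a first sanity check, a minimal sketch for comparing the two counts directly after the write, reusing the `df`, `connection`, and `prop` values above; the PostgreSQL count is computed server-side by passing a subquery as the JDBC "table":

```scala
// Row count as Spark sees it.
val sparkCount = df.count()

// Row count as PostgreSQL sees it, computed server-side via a subquery.
val pgCount = sqlContext.read
  .jdbc(connection, "(SELECT count(*) AS c FROM adb.aschema.TABLE) AS t", prop)
  .first()
  .getLong(0)

println(s"Spark count: $sparkCount, PostgreSQL count: $pgCount")
```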
Created 07-10-2018 12:40 PM
I recommend you try to isolate a small subset of the data where the counts do not match: compare first by month, then by day, then by hour, to narrow down which rows are missing (or duplicated) on the PostgreSQL side. Once the mismatch is confined to a small slice, you can review the actual rows and hopefully spot the cause; a sketch of the per-month comparison follows.
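For illustration, a minimal sketch of that per-month comparison, assuming the `df`, `connection`, and `prop` values from the question and a month column named `evnt_month` (an assumption here; that name only shows up later in the thread):

```scala
// Per-month counts on the Spark side.
val sparkByMonth = df.groupBy("evnt_month").count()
  .withColumnRenamed("count", "spark_cnt")

// Per-month counts computed inside PostgreSQL via a subquery-as-table.
val pgByMonth = sqlContext.read.jdbc(
  connection,
  "(SELECT evnt_month, count(*) AS pg_cnt FROM adb.aschema.TABLE GROUP BY evnt_month) AS t",
  prop)

// Months where the two sides disagree are the ones worth drilling into;
// repeat the same idea at day and hour granularity once a month is isolated.
sparkByMonth.join(pgByMonth, "evnt_month")
  .filter("spark_cnt <> pg_cnt")
  .show()
```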
HTH
Created 07-11-2018 10:11 AM
@Felix Albani The table has millions of records, so it's very difficult to identify the missing or extra rows in PostgreSQL.
Is there any known issue in Spark that would cause the count not to match on PostgreSQL?
Created 08-05-2018 06:56 AM
Solved it.
I noticed that the write to PostgreSQL was accurate when I read the parquet data with the second option below:

```scala
sqlContext.read.parquet("/user-data/xyz/input/TABLE/*")            // WRONG numbers in PostgreSQL
sqlContext.read.parquet("/user-data/xyz/input/TABLE/evnt_month=*") // correct numbers in PostgreSQL
```

If someone is aware of such a problem, please comment.
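As a follow-up sketch (assuming the same paths and a `sqlContext` in scope), one way to see where the extra rows come from is to list which files each read pattern actually picks up; files matched by the bare glob but not by the partition-scoped pattern would be the likely source of the inflated count:

```scala
import org.apache.spark.sql.functions.input_file_name

// Distinct source files seen by each read pattern.
val filesGlob = sqlContext.read.parquet("/user-data/xyz/input/TABLE/*")
  .select(input_file_name()).distinct()
val filesPart = sqlContext.read.parquet("/user-data/xyz/input/TABLE/evnt_month=*")
  .select(input_file_name()).distinct()

// Files read only by the bare glob -- candidates for the extra rows.
filesGlob.except(filesPart).show(100, false)
```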
