Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant

calculating median on grouped data

Visitor

Hello! I've been trying to use Spark to calculate the median of grouped values in a DataFrame, without much success. I tried agg(), but median() is not available; I tried applying rank() over a window function, but the ranking was not per-group; I also tried pivoting the table to avoid the grouping step, but the DataFrame is huge (8 million rows) and the job fails repeatedly. Calculating a median should be straightforward, since data analysts use it all the time. Maybe I'm missing something obvious?

Thanks!!

1 ACCEPTED SOLUTION

Master Collaborator

Calculating a median or other quantiles is in general much harder than computing a moment like the mean. Rather than looking for a median function, look for Spark functions that compute quantiles -- the median is just the 0.5 quantile. Spark has an efficient approximate implementation for DataFrames.
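Depending on your Spark version, this shows up as `DataFrame.approxQuantile` (whole-DataFrame quantiles) and the SQL aggregate `percentile_approx`, which can be used inside a grouped `agg`. As a minimal pure-Python sketch of what a grouped 0.5-quantile aggregate computes (toy rows; the keys and values are made up):

```python
from statistics import median
from collections import defaultdict

# Toy (group_key, value) pairs standing in for a grouped DataFrame.
rows = [("a", 1.0), ("a", 3.0), ("a", 2.0), ("b", 10.0), ("b", 20.0)]

# Group values by key.
groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

# Exact per-group median: the 0.5 quantile of each group's values.
medians = {key: median(values) for key, values in groups.items()}
print(medians)  # {'a': 2.0, 'b': 15.0}
```

Spark's implementation returns an approximation within a configurable relative error rather than sorting every group exactly, which is what makes this feasible on 8 million rows.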


2 REPLIES


Visitor

Thanks! Yes, percent_rank() together with a window function did the trick. Another way is to sort the column and pick the middle value; the results are close.
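To make the comparison concrete, here is a small pure-Python sketch (made-up numbers) of the two approaches: picking the value whose percent rank is nearest 0.5, versus sorting and taking the middle element:

```python
values = [7.0, 1.0, 5.0, 3.0, 9.0, 11.0]

# Approach 1: percent_rank-style. SQL's percent_rank() assigns
# (rank - 1) / (n - 1) to each ordered value; take the value whose
# percent rank is closest to 0.5.
ordered = sorted(values)
n = len(ordered)
percent_ranks = [i / (n - 1) for i in range(n)]
near_median = min(zip(percent_ranks, ordered),
                  key=lambda pr: abs(pr[0] - 0.5))[1]

# Approach 2: sort and pick the middle element (lower middle for even n).
middle = ordered[(n - 1) // 2]

print(near_median, middle)
```

For even-length data the two can differ slightly (neither interpolates between the two middle values), which matches the observation that the results are close but not identical.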