
DataFrames with Kryo serialization

Contributor

When using DataFrames (Dataset&lt;Row&gt;), there's no option to supply an Encoder. Does that mean DataFrames (since they build on top of RDDs) use Java serialization? Does using Kryo make sense as an optimization here? If not, what's the difference between Java/Kryo serialization, Tungsten, and Encoders? Thank you!

1 ACCEPTED SOLUTION


When you use RDDs in your Java or Scala Spark code, Spark distributes the data to the nodes in the cluster using the default Java serialization. For Java and Scala objects, Spark has to send both the data and its structure between nodes. Java serialization produces relatively large byte arrays, whereas Kryo produces noticeably smaller ones, so you can fit more data into the same amount of memory with Kryo. On top of that, you can add compression such as Snappy.
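
For example, switching to Kryo and turning on Snappy compression for cached RDD partitions is just configuration. A minimal sketch (the app name and local master are placeholders; the config keys are standard Spark settings):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
  .setAppName("kryo-example")  // placeholder app name
  .setMaster("local[*]")       // placeholder master
  // Switch from the default Java serializer to Kryo.
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Compress serialized, cached RDD partitions, using the Snappy codec.
  .set("spark.rdd.compress", "true")
  .set("spark.io.compression.codec", "snappy")

val spark = SparkSession.builder().config(conf).getOrCreate()
```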

With RDDs and Java serialization there is also the additional overhead of garbage collection.

If you're working with RDDs, use Kryo serialization.
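
One detail worth knowing: registering your classes with Kryo keeps the serialized output small, because Kryo otherwise writes the full class name with every record. A sketch, using a made-up record class:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Hypothetical record type, used only for illustration.
case class SensorReading(id: Long, value: Double)

val conf = new SparkConf()
  .setAppName("kryo-rdd-example")
  .setMaster("local[*]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

// Register classes before the SparkContext is created, so Kryo can write
// a small numeric ID instead of the full class name per record.
conf.registerKryoClasses(Array(classOf[SensorReading]))

val sc = new SparkContext(conf)

// MEMORY_ONLY_SER stores each partition as a serialized byte array,
// which is where Kryo's smaller output pays off.
val readings = sc.parallelize(Seq(SensorReading(1L, 0.5), SensorReading(2L, 1.5)))
readings.persist(StorageLevel.MEMORY_ONLY_SER)
```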

With DataFrames, a schema describes the data, so Spark passes only the data between nodes, not the structure. As a result, for certain types of computation on specific file formats you can expect faster performance.
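
A small sketch of that idea, again with a made-up record class. The schema is known up front, and for typed Datasets an Encoder (not Java or Kryo serialization) converts JVM objects to and from Spark SQL's internal Tungsten binary format:

```scala
import org.apache.spark.sql.{Encoders, SparkSession}

val spark = SparkSession.builder()
  .appName("schema-example")  // placeholder app name
  .master("local[*]")         // placeholder master
  .getOrCreate()
import spark.implicits._

// Hypothetical record type, used only for illustration.
case class SensorReading(id: Long, value: Double)

// The schema travels with the query plan, so only column values move
// between nodes, laid out in Tungsten's compact binary row format.
val df = Seq(SensorReading(1L, 0.5), SensorReading(2L, 1.5)).toDF()
df.printSchema()

// For a typed Dataset, the Encoder handles the mapping between the JVM
// object and that binary format.
val ds = spark.createDataset(Seq(SensorReading(3L, 2.5)))(Encoders.product[SensorReading])
ds.show()
```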

It's not 100% true that DataFrames always outperform RDDs. Please see my post here:

https://community.hortonworks.com/content/kbentry/42027/rdd-vs-dataframe-vs-sparksql.html


3 REPLIES


Contributor

Hi Binu, thanks for the answer. Since Spark still passes data between nodes for DataFrames, does Kryo still make sense as an optimization there?


Use Kryo when working with RDDs. It probably won't help with DataFrames; I've never used Kryo with DataFrames. Maybe you can test and post your results.
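
If you do run that test, one rough way to compare is to cache the same data under each serializer and read back the in-memory size. A sketch, with a made-up record shape (note that getRDDStorageInfo is a developer API, so treat the numbers as indicative only):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Rough comparison, not a rigorous benchmark: cache the same RDD under a
// given serializer and report its in-memory size.
def cachedSizeBytes(useKryo: Boolean): Long = {
  val conf = new SparkConf().setAppName("serializer-size-test").setMaster("local[*]")
  if (useKryo) {
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  }
  val sc = new SparkContext(conf)
  try {
    // Made-up record shape, for illustration only.
    val rdd = sc.parallelize(1 to 1000000).map(i => (i, s"record-$i"))
    rdd.persist(StorageLevel.MEMORY_ONLY_SER)
    rdd.count() // force materialization so the cache is populated
    sc.getRDDStorageInfo.map(_.memSize).sum // developer API
  } finally {
    sc.stop()
  }
}

println(s"Java serialization: ${cachedSizeBytes(useKryo = false)} bytes")
println(s"Kryo serialization: ${cachedSizeBytes(useKryo = true)} bytes")
```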