
DataFrames with Kryo serialization

When using DataFrames (Dataset&lt;Row&gt;), there's no option to supply an Encoder. Does that mean DataFrames (since they build on top of RDDs) use Java serialization? Does using Kryo make sense as an optimization here? If not, what's the difference between Java/Kryo serialization, Tungsten, and Encoders? Thank you!

1 ACCEPTED SOLUTION


Re: DataFrames with Kryo serialization

When using RDDs in your Java or Scala Spark code, Spark distributes the data to nodes within the cluster using default Java serialization. For Java and Scala objects, Spark has to send both the data and its structure between nodes. Java serialization does not produce small byte arrays, whereas Kryo serialization does. Thus, you can store more in the same amount of memory when using Kryo. Furthermore, you can also add compression such as Snappy.

With RDDs and Java serialization there is also the additional overhead of garbage collection.

If you're working with RDDs, use Kryo serialization.
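Enabling Kryo is a configuration change on the SparkConf. A minimal sketch (the class names `MyRecord` and `MyOtherRecord` are placeholders for your own types):

```scala
import org.apache.spark.SparkConf

// Switch the serializer used for shuffled and cached RDD data to Kryo.
val conf = new SparkConf()
  .setAppName("kryo-example")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Registering classes up front lets Kryo write a small numeric ID
  // instead of the full class name, shrinking the byte arrays further.
  .registerKryoClasses(Array(classOf[MyRecord], classOf[MyOtherRecord]))
```

Unregistered classes still work with Kryo, but each serialized record then carries the full class name, so registration is worth doing for your hot data types.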

With DataFrames, a schema describes the data, so Spark only passes the data between nodes, not the structure. Thus, for certain types of computation on specific file formats you can expect faster performance.
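As a rough sketch of that difference (the column names here are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("df-example").getOrCreate()
import spark.implicits._

// The schema (word: String, count: Int) is known up front, so Spark
// stores rows in its compact Tungsten binary format and ships only the
// column values between nodes -- no Java or Kryo object serialization.
val df = Seq(("spark", 3), ("kryo", 1)).toDF("word", "count")
df.printSchema()
```

For typed Datasets, the Encoder plays the role that Kryo plays for RDDs: it serializes your objects, but directly into the Tungsten binary format rather than into generic byte arrays.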

It's not 100% true that DataFrames always outperform RDDs. Please see my post here:

https://community.hortonworks.com/content/kbentry/42027/rdd-vs-dataframe-vs-sparksql.html


3 REPLIES


Re: DataFrames with Kryo serialization


Hi Binu, thanks for the answer, but since Spark still passes data between nodes for DataFrames, does Kryo still make sense as an optimization?


Re: DataFrames with Kryo serialization

Use Kryo when working with RDDs. It probably won't help with DataFrames. I never used Kryo with DataFrames; maybe you can test and post your results.
