Spark RDD/Dataframe caching

Contributor

Suppose I have the following piece of code:

val a = sc.textFile("path/to/file")
val b = a.filter(<something..>).groupBy(<something..>)
val c = b.filter(<something..>).groupBy(<something..>)
val d = c.<some transform>
val e = d.<some transform>
val sum1 = e.reduce(<reduce func>)
val sum2 = b.reduce(<reduce func>)

Note that I have not used any cache/persist command.

Since the RDD b is used again in the last action, will Spark automatically cache it, or will it be recomputed from the source data?

Will the behaviour be the same if I use DataFrames for the above steps?

Lastly, will the RDDs c and d exist at any point in time? Or will Spark look ahead, see that they are not used directly in any action, and simply chain the transformations for c and d onto b to calculate e directly?

I am new to Spark and am trying to understand the basics.

Regards,

Anirban

1 ACCEPTED SOLUTION

New Contributor

Hi Anirban,

All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program. This design enables Spark to run more efficiently. For example, we can realize that a dataset created through map will be used in a reduce and return only the result of the reduce to the driver, rather than the larger mapped dataset.
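To make that concrete, here is a minimal sketch (the file path, the predicate and the length-based reduce are illustrative stand-ins, not taken from your code):

// Nothing is computed here: Spark only records the lineage of transformations.
val a = sc.textFile("path/to/file")
val b = a.filter(_.nonEmpty)
val lengths = b.map(_.length)

// Only this action triggers reading the file and running the whole pipeline,
// and only the final Int result is sent back to the driver.
val total = lengths.reduce(_ + _)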

By default, each transformed RDD may be recomputed each time you run an action on it. However, you may also persist an RDD in memory using the persist (or cache) method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it. There is also support for persisting RDDs on disk, or replicated across multiple nodes.
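Applied to your snippet, it would look roughly like this (a sketch: the nonEmpty filter, the take(3) grouping key and the mapValues step are made up just to keep it runnable, and stand in for your placeholder functions):

val a = sc.textFile("path/to/file")
// cache() marks b for in-memory storage; nothing is computed yet.
val b = a.filter(_.nonEmpty).groupBy(_.take(3)).cache()
val e = b.mapValues(_.size).values    // stand-in for the c/d/e transformations
val sum1 = e.reduce(_ + _)            // first action: b is computed here and cached
val sum2 = b.count()                  // second action: reuses the cached b

Without the cache() call, Spark may recompute b from its lineage for the second action, re-reading the file and re-running the filter and groupBy, which is why persisting an RDD that feeds several actions usually pays off.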

More: http://spark.apache.org/docs/2.1.1/programming-guide.html

Regards,

Jan


2 REPLIES


Contributor

Hmm, understood.

Thanks @Jan Rock