Spark cache() and persist() are optimization techniques for iterative and interactive Spark applications that improve the performance of your jobs. In this article, you will learn what Spark caching and persistence are, the difference between the cache() and persist() methods, and how to use them with RDD, DataFrame, and Dataset, with Scala examples.
Though Spark provides computation 100x faster than traditional MapReduce jobs, if you have not designed your jobs to reuse repeated computations, you will see performance degrade when dealing with billions or trillions of records. Hence, we may need to look at the stages and use optimization techniques such as caching as one way to improve performance.
Spark Cache vs Persist
Using the cache() and persist() methods, Spark provides an optimization mechanism to store the intermediate computation of an RDD, DataFrame, or Dataset so it can be reused in subsequent actions.
Both caching and persisting are used to save a Spark RDD, DataFrame, or Dataset. The difference is that the RDD cache() method saves it to the default storage level (MEMORY_ONLY), whereas persist() lets you store it at a user-defined storage level.
When you persist a dataset, each node stores its partitioned data in memory and reuses it in other actions on that dataset. Spark's persisted data on nodes is fault-tolerant, meaning if any partition of a Dataset is lost, it will automatically be recomputed using the original transformations that created it.
Advantages of Caching and Persistence
Below are the advantages of using Spark Cache and Persist methods.
Cost efficient – Spark computations are very expensive, hence reusing the computations saves cost.
Time efficient – Reusing repeated computations saves lots of time.
Execution time – It reduces the execution time of a job, letting us run more jobs on the same cluster (see the timing sketch below).
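As a rough sketch of that saving, the snippet below times the same action before and after caching on a small generated DataFrame. The timeCount helper and the generated sample data are illustrative assumptions, not part of the original example, and the actual numbers depend entirely on your data and cluster.

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark: SparkSession = SparkSession.builder()
  .master("local")
  .appName("SparkByExamples.com")
  .getOrCreate()

// Hypothetical helper that times a count() action in milliseconds.
def timeCount(df: DataFrame): Long = {
  val start = System.nanoTime()
  df.count()
  (System.nanoTime() - start) / 1000000
}

// Illustrative data; any DataFrame with a repeated, expensive lineage works.
val sampleDF = spark.range(0, 5000000).toDF("id").selectExpr("id", "id * 2 as doubled")

val firstRun = timeCount(sampleDF)   // computes the full lineage
sampleDF.cache().count()             // an action materializes the cache
val cachedRun = timeCount(sampleDF)  // served from the cached partitions
println(s"first run: $firstRun ms, cached run: $cachedRun ms")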
Below I will explain how to use Spark Cache and Persist with DataFrame or Dataset.
Spark Cache Syntax and Example
Spark DataFrame or Dataset caching by default saves it to storage level `MEMORY_AND_DISK` because recomputing the in-memory columnar representation of the underlying table is expensive. Note that this is different from the default storage level of `RDD.cache()`, which is `MEMORY_ONLY`. The Dataset cache() signature is:
cache() : Dataset.this.type
The cache() method in the Dataset class internally calls the persist() method, which in turn uses sparkSession.sharedState.cacheManager.cacheQuery to cache the result set of the DataFrame or Dataset. Let's look at an example.
import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession.builder()
  .master("local")
  .appName("SparkByExamples.com")
  .getOrCreate()

import spark.implicits._

val columns = Seq("Seqno", "Quote")
val data = Seq(("1", "Be the change that you wish to see in the world"),
  ("2", "Everyone thinks of changing the world, but no one thinks of changing himself."),
  ("3", "The purpose of our lives is to be happy."))
val df = data.toDF(columns: _*)

val dfCache = df.cache()
dfCache.show(false)
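To verify the storage level the DataFrame above was cached at, you can inspect Dataset.storageLevel (available since Spark 2.1). A minimal check, reusing dfCache from the example; the printed format varies slightly between Spark versions.

// show(false) above is an action, so it also materializes the cache.
println(dfCache.storageLevel)            // e.g. StorageLevel(disk, memory, deserialized, 1 replicas)
println(dfCache.storageLevel.useMemory)  // true once cache() has been applied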
Spark Persist Syntax and Example
Spark persist() has two signatures: the first signature doesn't take any argument and by default saves the DataFrame/Dataset to the MEMORY_AND_DISK storage level, and the second signature takes a StorageLevel argument to store it at a different storage level.
1) persist() : Dataset.this.type
2) persist(newLevel : org.apache.spark.storage.StorageLevel) : Dataset.this.type
val dfPersist = df.persist()
dfPersist.show(false)
Using the second signature, you can save the DataFrame or Dataset to any of the storage levels described below.
import org.apache.spark.storage.StorageLevel

val dfPersist = df.persist(StorageLevel.MEMORY_ONLY)
dfPersist.show(false)
This stores the DataFrame/Dataset in memory only.
Unpersist Syntax and Example
We can also unpersist a persisted DataFrame or Dataset to remove it from memory or disk.
unpersist() : Dataset.this.type
unpersist(blocking : scala.Boolean) : Dataset.this.type
val dfUnpersist = dfPersist.unpersist()
dfUnpersist.show(false)
unpersist(true), with a Boolean argument of true, blocks until all blocks of the DataFrame are deleted.
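A minimal sketch of the blocking variant, reusing the dfPersist DataFrame from the persist example above:

// unpersist(true) blocks the caller until all cached blocks of this
// DataFrame have been removed from memory and disk on every executor.
dfPersist.unpersist(blocking = true)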
Spark Persistence Storage Levels
All the different storage levels Spark supports are available in the org.apache.spark.storage.StorageLevel class. The storage level specifies how and where to persist or cache a Spark DataFrame or Dataset.
MEMORY_ONLY – This is the default behavior of the RDD cache() method; it stores the RDD or DataFrame as deserialized objects in JVM memory. When there is not enough memory available, it will not save some partitions of the DataFrame, and these will be re-computed as and when required. This level uses more memory, and unlike with an RDD, it can be slower than the MEMORY_AND_DISK level for a DataFrame because it recomputes the unsaved partitions, and recomputing the in-memory columnar representation of the underlying table is expensive.
MEMORY_ONLY_SER – This is the same as MEMORY_ONLY, the difference being that it stores the RDD as serialized objects in JVM memory. It takes less memory (it is space-efficient) than MEMORY_ONLY because it saves objects in serialized form, but it takes a few more CPU cycles to deserialize them.
MEMORY_ONLY_2 – Same as the MEMORY_ONLY storage level but replicates each partition to two cluster nodes.
MEMORY_ONLY_SER_2 – Same as the MEMORY_ONLY_SER storage level but replicates each partition to two cluster nodes.
MEMORY_AND_DISK – This is the default behavior of the DataFrame or Dataset. At this storage level, the DataFrame is stored in JVM memory as deserialized objects. When the required storage is greater than the available memory, it stores some of the excess partitions on disk and reads the data from disk when it is required. It is slower because disk I/O is involved.
MEMORY_AND_DISK_SER – This is the same as the MEMORY_AND_DISK storage level, the difference being that it serializes the DataFrame objects in memory and on disk when space is not available.
MEMORY_AND_DISK_2 – Same as the MEMORY_AND_DISK storage level but replicates each partition to two cluster nodes.
MEMORY_AND_DISK_SER_2 – Same as the MEMORY_AND_DISK_SER storage level but replicates each partition to two cluster nodes.
DISK_ONLY – At this storage level, the DataFrame is stored only on disk, and CPU computation time is high because disk I/O is involved.
DISK_ONLY_2 – Same as the DISK_ONLY storage level but replicates each partition to two cluster nodes.
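As a quick sketch of how these levels are chosen in code (reusing the df from the earlier examples; the variable names are illustrative), you pass the corresponding StorageLevel constant to persist():

import org.apache.spark.storage.StorageLevel

// Keep the data on local disk only; partitions are read back from disk when needed.
val dfDisk = df.persist(StorageLevel.DISK_ONLY)
dfDisk.count()                      // action that materializes the persisted data

// A DataFrame holds one storage level at a time, so unpersist before switching levels.
dfDisk.unpersist()
val dfReplicated = df.persist(StorageLevel.MEMORY_AND_DISK_2)
println(dfReplicated.storageLevel)  // e.g. StorageLevel(disk, memory, deserialized, 2 replicas)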
Below is a table representation of the storage levels. Go through the impact on space used, CPU time, and recomputation, and choose the one that best fits your needs.
Storage Level         Space used   CPU time   In memory   On-disk   Serialized   Recompute some partitions
-----------------------------------------------------------------------------------------------------------
MEMORY_ONLY           High         Low        Y           N         N            Y
MEMORY_ONLY_SER       Low          High       Y           N         Y            Y
MEMORY_AND_DISK       High         Medium     Some        Some      Some         N
MEMORY_AND_DISK_SER   Low          High       Some        Some      Y            N
DISK_ONLY             Low          High       N           Y         Y            N
Some Points to note on Persistence
- Spark automatically monitors every persist() and cache() call you make, checks usage on each node, and drops persisted data that is not used, following a least-recently-used (LRU) algorithm. As discussed in one of the sections above, you can also manually remove persisted data using unpersist().
- Spark caching and persistence is just one of the optimization techniques to improve the performance of Spark jobs.
- For RDD cache(), the default storage level is ‘MEMORY_ONLY‘, but for DataFrame and Dataset, the default is ‘MEMORY_AND_DISK‘.
- On Spark UI, the Storage tab shows where partitions exist in memory or disk across the cluster.
- Dataset cache() is an alias for persist() with the default MEMORY_AND_DISK storage level.
- Caching of a Spark DataFrame or Dataset is a lazy operation, meaning a DataFrame will not be cached until you trigger an action, as the sketch below shows.
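A minimal sketch of that last point, reusing the df from the earlier examples (assuming it is not currently persisted):

val dfLazy = df.cache()  // nothing is materialized yet; the plan is only marked for caching
dfLazy.count()           // the first action computes the DataFrame and fills the cache
dfLazy.show(false)       // subsequent actions read the cached data instead of recomputing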
In this article, you have learned that the Spark cache() and persist() methods are optimization techniques used to save interim computation results and reuse them subsequently, learned the difference between Spark cache and persist, and finally saw their syntax and usage with Scala examples.
Happy Learning !!