
Spark SQL in-memory space management

Expert Contributor

Hi. Consider a Spark SQL DataFrame or Dataset with 400 columns and 1 million rows. Not all rows have all 400 columns populated, so those columns cannot be declared NOT NULL. I need to understand whether a null value consumes space in memory and, if so, how much. Is there a fact sheet or article listing the size of each data type in bytes or bits?

1 ACCEPTED SOLUTION

Super Guru

@Mothilal marimuthu

You did not specify whether you are talking about RDDs, Datasets, or DataFrames.

Anyhow, let's assume an RDD. It is not like a columnar database, where you account only for the key-value pairs that are actually present; this is a row-based format, so there is a cost associated with empty values. I cannot tell you the exact cost because it depends on your data types, but there is a cost.

Why don't you run a test yourself? Persist a small test RDD with all values populated, then one with partial values, some of them null, and compare the persisted sizes. Again, the data type matters: experiment with null values on columns of one type, then build another RDD for a different type, and so on. A sketch of this experiment follows the snippet below.

import org.apache.spark.storage.StorageLevel
rdd.persist(StorageLevel.MEMORY_AND_DISK)
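
For illustration, here is a minimal sketch of that experiment, assuming a local SparkSession and Spark 2.x; the object name, column names, and row counts are made up for the example. It uses small DataFrames for brevity, but the same persist-and-compare approach applies to a plain RDD. Compare the persisted sizes in the "Storage" tab of the Spark UI or programmatically via getRDDStorageInfo.

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object NullSizeExperiment {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("null-size-experiment")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Same schema and row count; one dataset fully populated, one half null.
    val full    = (1 to 100000).map(i => (i, s"value_$i")).toDF("id", "col1")
    val partial = (1 to 100000).map(i => (i, if (i % 2 == 0) s"value_$i" else null)).toDF("id", "col1")

    full.persist(StorageLevel.MEMORY_AND_DISK)
    partial.persist(StorageLevel.MEMORY_AND_DISK)

    // Force materialization so the persisted sizes are actually computed.
    full.count()
    partial.count()

    // Compare cached sizes: Spark UI "Storage" tab, or programmatically.
    spark.sparkContext.getRDDStorageInfo.foreach { info =>
      println(s"${info.name}: memory=${info.memSize} bytes, disk=${info.diskSize} bytes")
    }

    spark.stop()
  }
}

Note that cached DataFrames are stored in Spark SQL's own compressed in-memory format, so for a pure RDD comparison you would build RDDs of case classes and persist those instead.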


2 REPLIES


Expert Contributor