
Spark SQL in-memory space management

Solved


Rising Star

Hi. Consider a Spark SQL DataFrame or Dataset with 400 columns and 1 million rows. Not all rows have all 400 columns populated, and these columns cannot be declared NOT NULL. I need to understand whether a null value consumes space in memory and, if so, how much it takes. Is there a fact sheet or article listing the size of each data type in bytes or bits?

1 ACCEPTED SOLUTION


Re: Spark SQL in-memory space management

@Mothilal marimuthu

You did not specify whether you are talking about RDDs, Datasets, or DataFrames.

Anyhow, let's assume an RDD. It is not like a columnar database, where you only pay for the key-value pairs that are actually present; this is a row-based format, so there is a cost associated with empty values. I cannot tell you the exact cost because it depends on your data types, but there is a cost.
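To see this on a small scale, here is a rough sketch (not from the original post) using Spark's SizeEstimator utility; the case class, field names, and values are made up for illustration:

import org.apache.spark.util.SizeEstimator

// Hypothetical row type; nullable fields use boxed Java types so they can hold null.
case class SampleRow(name: String, count: java.lang.Integer, score: java.lang.Double)

val populated = SampleRow("some value", 42, 3.14)
val sparse    = SampleRow(null, null, null)

// The sparse row is smaller, but each field still costs at least a reference slot,
// so nulls are not free in a row-based layout.
println(s"populated: ${SizeEstimator.estimate(populated)} bytes")
println(s"sparse:    ${SizeEstimator.estimate(sparse)} bytes")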

Why don't you run a test yourself? Persist a small test RDD with all values populated, then another with partial values, some of them null. Again, the data type matters: you can experiment with null values on columns of one type, then build another RDD with a different type, and so on.

import org.apache.spark.storage.StorageLevel
rdd.persist(StorageLevel.MEMORY_AND_DISK)
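A minimal sketch of that experiment, assuming a Spark shell where sc is the SparkContext; the schema, values, and row count are invented for illustration, and the cached sizes can also be read off the Storage tab of the Spark UI:

import org.apache.spark.storage.StorageLevel

// Same schema twice: one RDD fully populated, one with a null String column.
val full   = sc.parallelize(1 to 100000).map(i => (s"key$i", "value", i, i.toDouble))
val sparse = sc.parallelize(1 to 100000).map(i => (s"key$i", null: String, i, i.toDouble))

full.setName("full").persist(StorageLevel.MEMORY_AND_DISK)
sparse.setName("sparse").persist(StorageLevel.MEMORY_AND_DISK)

// Force evaluation so the blocks are actually cached.
full.count()
sparse.count()

// Compare the in-memory size of each cached RDD (developer API).
sc.getRDDStorageInfo.foreach(info => println(s"${info.name}: ${info.memSize} bytes in memory"))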


