Created 01-19-2017 04:38 PM
Hi. Consider a Spark SQL DataFrame or Dataset with 400 columns and 1 million rows. Not all rows have all 400 columns populated, and those columns cannot be declared NOT NULL. I need to understand whether a null value consumes space in memory and, if so, how much. Is there a fact sheet or article listing the size of each data type in bytes or bits?
Created 02-21-2017 10:21 PM
You did not specify whether you are talking about RDDs, Datasets, or DataFrames.
Anyhow, let's assume an RDD. It is not like a columnar database where you account only for the key-value pairs; this is a row-based format, so there is a cost associated with empty values. I cannot tell you the exact cost because it depends on your data types, but there is a cost.
Why don't you run a test yourself? Persist a small test RDD with all values populated, then one with partial values, some of them null. Again, the data type matters. You can experiment by using null values on columns of one type, then another RDD with a different type, and so on.
rdd.persist(StorageLevel.MEMORY_AND_DISK)
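A minimal sketch of that experiment in Scala/Spark. The object name, the sample column values, and the use of SizeEstimator are illustrative assumptions, not part of the original posts:

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel
import org.apache.spark.util.SizeEstimator

object NullSizeTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("null-size-test")
      .master("local[*]")
      .getOrCreate()

    // Fully populated rows vs. rows with nulls in the same String column.
    val full    = Seq(("a", "x"), ("b", "y"), ("c", "z"))
    val partial = Seq(("a", null), ("b", null), ("c", "z"))

    val rddFull    = spark.sparkContext.parallelize(full)
    val rddPartial = spark.sparkContext.parallelize(partial)

    // Persist both RDDs and force materialization with count().
    rddFull.persist(StorageLevel.MEMORY_AND_DISK).count()
    rddPartial.persist(StorageLevel.MEMORY_AND_DISK).count()

    // Rough in-driver estimate of the deserialized object sizes.
    println(s"full:    ${SizeEstimator.estimate(rddFull.collect())} bytes")
    println(s"partial: ${SizeEstimator.estimate(rddPartial.collect())} bytes")

    spark.stop()
  }
}

The Storage tab of the Spark UI also reports the cached size of each persisted RDD, which gives the same comparison without collecting data to the driver.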
Created 04-21-2017 06:12 PM
Thanks @Constantin Stanca