
Setting Compression

New Contributor

Hello,

I'm just starting out and was wondering if there's a definitive way of setting the compression codec when writing files in Spark?

I've been using the "compression" option when writing files:

exampleDF.write.option("compression", "snappy").avro("output path")

but when I go to check where the Avro files are saved, I can't tell from the file names whether they've been compressed or not. Just to note, this is after importing "com.databricks.spark.avro._", so I'm not having any trouble using Avro files.

Another way I've seen is to use "sqlContext.setConf"; these are the commands I'd use in that case:

import org.apache.spark.sql.hive.HiveContext
val sqlContext = new HiveContext(sc)
sqlContext.setConf("spark.sql.avro.compression.codec", "snappy")
exampleDF.write.avro("output path")

Neither approach raises any errors, so I was wondering which is the better way, and whether there are any other, more reliable ways of setting the compression when writing files?

I'm using Spark 2.2.0.

Thanks in advance.

1 ACCEPTED SOLUTION

Master Guru

@RandomT 

You can check the compression on .avro files using avro-tools:

bash$ avro-tools getmeta <file_path>
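
For example, on a Snappy-compressed file the metadata printed by getmeta should include the codec (illustrative output; exact formatting varies by avro-tools version):

avro.codec	snappy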

For more details, refer to this link.

-

sqlContext.setConf sets a global config, so every subsequent write will be Snappy-compressed. If you are writing all of your data Snappy-compressed, then this is the method to use.
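
A minimal sketch of the global approach, assuming `spark` is an existing SparkSession (in Spark 2.x, spark.conf.set updates the same runtime config that sqlContext.setConf does); the DataFrame names and output paths are placeholders:

import com.databricks.spark.avro._

// Set the codec once; every subsequent Avro write in this session picks it up.
spark.conf.set("spark.sql.avro.compression.codec", "snappy")
exampleDF.write.avro("/tmp/avro/events")
otherDF.write.avro("/tmp/avro/users")  // also Snappy-compressed, no per-write option needed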

-

If you are compressing only selected data, then use

exampleDF.write.option("compression", "snappy").avro("output path")

for finer, per-write control over compression.
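
A short sketch of this per-write approach (the DataFrame names and output paths are placeholders):

// Each write chooses its own codec via the "compression" option,
// without touching the session-wide default.
hotDF.write.option("compression", "snappy").avro("/tmp/avro/hot")
coldDF.write.option("compression", "deflate").avro("/tmp/avro/cold")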


2 REPLIES


New Contributor

Thanks for the info, it was very helpful!