
How do you write a RDD as a tab delimited file in pyspark?


I have a RDD I'd like to write as tab delimited. I also want to write a data frame as tab delimited. How do I do this?

1 ACCEPTED SOLUTION

Re: How do you write a RDD as a tab delimited file in pyspark?

Contributor

Try this; note that it requires Spark 1.5 or later:

data.write.format('com.databricks.spark.csv').options(delimiter="\t", codec="org.apache.hadoop.io.compress.GzipCodec").save('s3a://myBucket/myPath')


6 REPLIES

Re: How do you write a RDD as a tab delimited file in pyspark?

Super Collaborator

Is your RDD an RDD of strings?

On the second part of the question: if you are using spark-csv, the package supports saving a simple (non-nested) DataFrame. There is an option to specify the delimiter, which is a comma by default but can be changed.

e.g. .save('filename.csv', 'com.databricks.spark.csv', delimiter="DELIM")


Re: How do you write a RDD as a tab delimited file in pyspark?

Super Guru

@Binu Mathew do you have any thoughts?


Re: How do you write a RDD as a tab delimited file in pyspark?

Super Collaborator

Could you provide more details on the RDD that you would like to save tab delimited? On the question about storing DataFrames as a tab-delimited file, below is what I have in Scala using the spark-csv package:

df.write.format("com.databricks.spark.csv").option("delimiter", "\t").save("output path")

EDIT: With the RDD of tuples, as you mentioned, you could either join the tuple's fields with "\t" or use mkString if you prefer not to use an additional library. On your RDD of tuples you could do something like:

.map { x => x.productIterator.mkString("\t") }.saveAsTextFile("path-to-store")

@Don Jernigan
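Since the question asks about pyspark, the mkString approach above can be sketched in Python as well. The row-formatting logic below is plain Python; the Spark call is shown as a comment and assumes a live SparkContext named `sc` (an assumption, not part of the original answer):

```python
# Sketch of a tab-delimited save in pyspark. The Spark call is commented
# out so the formatting logic stands alone.

def to_tsv_line(row):
    # Join the tuple's fields with tabs; assumes no field contains a tab.
    return "\t".join(row)

rows = [("a", "b", "c"), ("x", "y", "z")]
lines = [to_tsv_line(r) for r in rows]  # what the map step would produce

# With a SparkContext `sc`, the equivalent of the Scala snippet would be:
# sc.parallelize(rows).map(to_tsv_line).saveAsTextFile("path-to-store")
```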


Re: How do you write a RDD as a tab delimited file in pyspark?

Essentially I have a Python tuple ('a','b','c','x','y','z') in which all elements are strings. I could just map each tuple into a single tab-separated string ('a\tb\tc\tx\ty\tz') and then saveAsTextFile(path). But I was wondering if there was a better way than using an external package, which could just be encapsulating that .map(lambda x: "\t".join(x)).


Re: How do you write a RDD as a tab delimited file in pyspark?

Super Collaborator

I guess if the data set does not contain a '\t' character, then '\t'.join and saveAsTextFile should work for you. Otherwise, you would need to wrap such strings in double quotes, as with normal CSVs.
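To illustrate the quoting point, here is a minimal sketch using Python's standard csv module (not part of the original answer): with QUOTE_MINIMAL, only fields that contain the delimiter get wrapped in double quotes.

```python
import csv
import io

# Minimal sketch: tab-delimited output where a field containing a tab
# is wrapped in double quotes, as with normal CSVs.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", quoting=csv.QUOTE_MINIMAL,
                    lineterminator="\n")
writer.writerow(["a", "has\ttab", "c"])
line = buf.getvalue().rstrip("\n")
# Only the middle field is quoted, because it contains the delimiter.
```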

