I am using Spark version 2.4.0. I know that backslash is the default escape character in Spark, but I am still facing the issue below.
I am reading a csv file into a spark data frame (using pyspark language) and writing back the data frame into csv.
I have some "\\" in my source csv file (as shown below), where the first backslash is the escape character and the second backslash is the actual value.
Test.csv (Source Data)
--------
Col1,Col2,Col3,Col4
1,"abc\\",xyz,Val2
2,"\\",abc,Val2
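To make the escape convention explicit, here is a minimal sketch using Python's stdlib csv module (not Spark — it is only to show how the escaped backslash should parse; Spark's CSV reader follows the same convention for these rows):

```python
import csv
import io

# The two data rows of Test.csv, with backslash as the escape character.
source = '1,"abc\\\\",xyz,Val2\n2,"\\\\",abc,Val2\n'
reader = csv.reader(io.StringIO(source), escapechar='\\', doublequote=False)
rows = list(reader)

# Each escaped pair "\\" collapses to a single literal backslash:
print(rows[0][1])  # abc\
print(rows[1][1])  # \
```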
I am reading the Test.csv file and creating a dataframe using the below piece of code:
df = sqlContext.read.format('com.databricks.spark.csv').schema(schema).option("header", "true").option("escape", "\\").load("Test.csv")
And I am writing the df dataframe back to the Output.csv file using the below code:
df.repartition(1).write.format('csv').option("emptyValue", empty).option("header", "true").option("escape", "\\").option("path", r'D:\TestCode\Output.csv').save()
Output.csv
----------
Col1,Col2,Col3,Col4
1,"abc\\",xyz,Val2
2,\,abc,Val2
In the second row of Output.csv, the escape character is lost, along with the quotes ("").
My requirement is to retain the escape character in Output.csv as well. Any kind of help would be much appreciated.
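For clarity, the row-2 value I expect in Output.csv is "\\" (quotes kept, escape kept). A tiny helper sketch in plain Python (hypothetical — just to spell out the quoting convention I want, not part of my Spark job):

```python
def quote_field(value: str, escapechar: str = '\\', quotechar: str = '"') -> str:
    """Quote one CSV field, escaping the escape character and the quote.

    Hypothetical helper, only to illustrate the convention I want
    Output.csv to follow; it is not how Spark is invoked.
    """
    escaped = value.replace(escapechar, escapechar + escapechar)
    escaped = escaped.replace(quotechar, escapechar + quotechar)
    return quotechar + escaped + quotechar

# A single backslash should round-trip as "\\", not as a bare \ :
print(quote_field('\\'))     # "\\"
print(quote_field('abc\\'))  # "abc\\"
```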
Thanks in advance.