
Parsing XML in Spark RDD

Expert Contributor

Hi guys, we have a use case to parse XML files using Spark RDD. We found some examples that use the spark-xml package, as per this link: https://github.com/databricks/spark-xml

There are some examples there. However, can you also provide some sample code for this? Also, can you please explain how an external package can be added from spark-shell and pyspark? We are looking for your guidance. Thanks, Rajdip

1 ACCEPTED SOLUTION

Master Guru

See: https://github.com/databricks/spark-xml

The GitHub page has examples like:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read
    .format("com.databricks.spark.xml")
    .option("rowTag", "book")
    .load("books.xml")

val selectedData = df.select("author", "_id")
selectedData.write
    .format("com.databricks.spark.xml")
    .option("rootTag", "books")
    .option("rowTag", "book")
    .save("newbooks.xml")

For Spark compiled with Scala 2.10, include the package when starting the spark-shell:

$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.10:0.4.1
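
The same --packages option works for pyspark. As a minimal sketch, assuming the same books.xml sample as above, reading the file from the pyspark shell would look like this:

$SPARK_HOME/bin/pyspark --packages com.databricks:spark-xml_2.10:0.4.1

# inside the pyspark shell, where sqlContext is already defined
df = sqlContext.read \
    .format("com.databricks.spark.xml") \
    .option("rowTag", "book") \
    .load("books.xml")

df.select("author", "_id").show()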


4 REPLIES

Expert Contributor

Do I need to add the Databricks package to the Spark classpath? I am new to Spark, so I am struggling to understand how to use the package. Also, is there any other way to parse an XML and generate a CSV without using the Databricks package?

Super Collaborator

As mentioned in the answer, the command line to add the package to your job is:

$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.10:0.4.1

Of course, to compile your own project code you will also need to add this package as a dependency in your project's Maven POM. If you build an uber JAR for your project that includes this package, then you don't need to change your command line for submission.
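
For example, the POM entry would look roughly like this (a sketch, assuming the same version as in the spark-shell command above):

<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-xml_2.10</artifactId>
    <version>0.4.1</version>
</dependency>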

There are many packages for Spark that you can browse at spark-packages.org.

New Contributor

@Timothy Spann...

Do we not have a solution to parse/read XML without the Databricks package? I work on HDP 2.0+ with Spark 2.1.

I am trying to parse the XML manually in pyspark code, but I am having difficulty converting the resulting list to a DataFrame.

Any advice? Let me know; I can post the script here.

Thanks.
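
Not a full solution, but here is a minimal sketch of the manual approach without the Databricks package, assuming Spark 2.x (so sc and the spark session already exist in the pyspark shell) and a books.xml layout like the one in the accepted answer; the file name, tags, and fields below are only illustrative:

import xml.etree.ElementTree as ET
from pyspark.sql import Row

def parse_books(xml_content):
    # parse one whole XML document and return a list of Rows
    root = ET.fromstring(xml_content)
    return [Row(id=book.get("id"), author=book.findtext("author"))
            for book in root.findall("book")]

# wholeTextFiles keeps each file intact as a (path, content) pair,
# so an XML record is never split across lines or partitions
rows = sc.wholeTextFiles("books.xml").flatMap(lambda kv: parse_books(kv[1]))

df = spark.createDataFrame(rows)
df.show()
# df.write.csv("books_out") would then produce CSV output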
