Parsing XML in Spark RDD
Labels: Apache Spark
Created ‎12-14-2016 11:42 AM
Hi guys, we have a use case to parse XML files using Spark RDD. We found some examples of the spark-xml utility at this link: https://github.com/databricks/spark-xml
There are some examples there, but could you also provide some sample code for this? Could you also explain how an external package can be added from spark-shell and pyspark? We are looking for your guidance. Thanks, Rajdip
Created ‎12-14-2016 08:48 PM
See: https://github.com/databricks/spark-xml
The GitHub page has examples like:
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "book")
  .load("books.xml")

val selectedData = df.select("author", "_id")
selectedData.write
  .format("com.databricks.spark.xml")
  .option("rootTag", "books")
  .option("rowTag", "book")
  .save("newbooks.xml")
For Spark compiled with Scala 2.10, launch the shell with the package:
$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.10:0.4.1
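The --packages flag works the same way for pyspark ($SPARK_HOME/bin/pyspark --packages com.databricks:spark-xml_2.10:0.4.1). A minimal sketch of the equivalent read/write from Python, assuming the same books.xml layout as in the Scala example:

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)  # sc is the SparkContext created by the pyspark shell

# Read every <book> element into a row of a DataFrame
df = sqlContext.read \
    .format("com.databricks.spark.xml") \
    .option("rowTag", "book") \
    .load("books.xml")

# Select a couple of columns and write them back out as XML
selectedData = df.select("author", "_id")
selectedData.write \
    .format("com.databricks.spark.xml") \
    .option("rootTag", "books") \
    .option("rowTag", "book") \
    .save("newbooks.xml")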
Created ‎12-15-2016 05:58 AM
Do I need to add the Databricks package to the Spark classpath? I am new to Spark, so I am struggling to understand how to use the package. Also, is there any other way to parse an XML file and generate a CSV without using the Databricks package?
Created ‎12-15-2016 08:02 PM
As mentioned in the answer above, the command line to add the package to your job is:
$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.10:0.4.1
Of course, to compile your project code you will also need to add this package as a dependency in your project's Maven POM (see the snippet below). If you build an uber jar for your project that includes this package, then you don't need to change your command line for submission.
There are many packages for Spark that you can browse at spark-packages.org.
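For reference, a Maven dependency matching the coordinate used above would look roughly like this (groupId, artifactId, and version are taken from the --packages coordinate; adjust the Scala suffix and version to match your build):

<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-xml_2.10</artifactId>
    <version>0.4.1</version>
</dependency>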
Created ‎02-07-2019 08:47 PM
@Timothy Spann,
Is there a way to parse/read XML without the Databricks package? I work on HDP 2.0+ with Spark 2.1.
I am trying to parse XML manually with pyspark code, but I am having difficulty converting the resulting list to a DataFrame.
Any advice? Let me know; I can post the script here.
Thanks.
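For the manual approach described above, a minimal pyspark sketch, assuming each record is already a complete <book> element and using only the Python standard library (the element and field names are illustrative, not taken from the original post):

import xml.etree.ElementTree as ET
from pyspark.sql import Row

def parse_book(xml_string):
    # Parse one <book> element into a Row; adapt the tag names to your XML.
    root = ET.fromstring(xml_string)
    return Row(id=root.get("id"),
               author=root.findtext("author"),
               title=root.findtext("title"))

# Each record in the RDD must be a complete <book>...</book> string,
# e.g. produced by splitting the input file beforehand.
books_rdd = sc.parallelize([
    '<book id="1"><author>A</author><title>T1</title></book>',
    '<book id="2"><author>B</author><title>T2</title></book>',
])

rows = books_rdd.map(parse_book)
df = spark.createDataFrame(rows)   # Spark 2.x; use sqlContext.createDataFrame on 1.x
df.select("id", "author", "title").show()
df.write.csv("books_csv")          # one way to produce the CSV output mentioned earlier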
