<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Spark Streaming save output to mysql DB in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25607#M5324</link>
    <description>&lt;P&gt;I am working on the Spark Streaming "word count" example. Is it possible to store the output RDD into a MySQL database using bulk insertion via JDBC? If so, is there any example of it?&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 09:24:19 GMT</pubDate>
    <dc:creator>tarekabouzeid91</dc:creator>
    <dc:date>2022-09-16T09:24:19Z</dc:date>
    <item>
      <title>Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25607#M5324</link>
      <description>&lt;P&gt;I am working on the Spark Streaming "word count" example. Is it possible to store the output RDD into a MySQL database using bulk insertion via JDBC? If so, is there any example of it?&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:24:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25607#M5324</guid>
      <dc:creator>tarekabouzeid91</dc:creator>
      <dc:date>2022-09-16T09:24:19Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25608#M5325</link>
      <description>&lt;P&gt;Yes, perfectly possible. It's not specific to Spark Streaming or even Spark; you'd just use foreachPartition to create and execute a SQL statement via JDBC over a batch of records. The code is just normal JDBC code.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2015 11:51:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25608#M5325</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-03-16T11:51:03Z</dc:date>
    </item>
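    <!--
      A minimal sketch of the approach described in the reply above: from Spark Streaming, write each
      partition of every batch RDD to MySQL over plain JDBC inside foreachPartition. The table name,
      column names, and connection settings below are placeholder assumptions, not taken from the thread.

      import java.sql.DriverManager
      import org.apache.spark.streaming.dstream.DStream

      def saveWordCounts(counts: DStream[(String, Long)], url: String, user: String, pass: String): Unit = {
        counts.foreachRDD { rdd =>
          rdd.foreachPartition { part =>
            // One connection and one prepared statement per partition, reused for every record in it.
            val conn = DriverManager.getConnection(url, user, pass)
            val stmt = conn.prepareStatement("INSERT INTO words (word, cnt) VALUES (?, ?)")
            try {
              part.foreach { case (word, count) =>
                stmt.setString(1, word)
                stmt.setLong(2, count)
                stmt.executeUpdate()
              }
            } finally {
              stmt.close()
              conn.close()
            }
          }
        }
      }
    -->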
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25609#M5326</link>
      <description>&lt;P&gt;Okay, great! Thanks so much. Could you provide an example of it, please?&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2015 11:53:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25609#M5326</guid>
      <dc:creator>tarekabouzeid91</dc:creator>
      <dc:date>2015-03-16T11:53:55Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25676#M5327</link>
      <description>&lt;P&gt;I managed to insert an RDD into the MySQL database! Thanks so much. Here is a sample of the code in case anyone needs it:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;import java.sql.DriverManager&lt;/P&gt;&lt;P&gt;val r = sc.makeRDD(1 to 4)&lt;/P&gt;&lt;P&gt;r.foreachPartition {&lt;/P&gt;&lt;P&gt;it =&amp;gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; // one connection and one prepared statement per partition&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; val conn = DriverManager.getConnection(url, username, password)&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; val del = conn.prepareStatement("INSERT INTO tweets (ID,Text) VALUES (?,?)")&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; for (bookTitle &amp;lt;- it)&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; {&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; del.setString(1, bookTitle.toString)&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; del.setString(2, "my input")&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; del.executeUpdate&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; }&lt;/P&gt;&lt;P&gt;}&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2015 10:25:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25676#M5327</guid>
      <dc:creator>tarekabouzeid91</dc:creator>
      <dc:date>2015-03-18T10:25:12Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25678#M5328</link>
      <description>&lt;P&gt;I am using Apache Spark to collect tweets with twitter4j and then want to save the data into a MySQL database.&lt;BR /&gt;I created a table in the MySQL DB with the columns ID, createdat, source, text, location,&lt;BR /&gt;and here is the code I used (I modified a twitter4j example):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;import org.apache.spark.streaming.{Seconds, StreamingContext}&lt;BR /&gt;import org.apache.spark.SparkContext._&lt;BR /&gt;import org.apache.spark.streaming.twitter._&lt;BR /&gt;import org.apache.spark.SparkConf&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;import org.apache.spark.SparkContext._&lt;BR /&gt;import org.apache.spark._&lt;BR /&gt;import org.apache.spark.streaming._&lt;BR /&gt;import org.apache.spark.streaming.StreamingContext._&lt;BR /&gt;import org.apache.spark.storage.StorageLevel&lt;BR /&gt;import org.apache.spark.util.IntParam&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;import org.apache.spark.rdd.JdbcRDD&lt;BR /&gt;import java.sql.{Connection, DriverManager, ResultSet}&lt;/P&gt;&lt;P&gt;object TwitterPopularTags {&lt;BR /&gt;def main(args: Array[String]) {&lt;BR /&gt;val filters = args.takeRight(args.length)&lt;BR /&gt;System.setProperty("twitter4j.oauth.consumerKey", "H2XXXXXX" )&lt;BR /&gt;System.setProperty("twitter4j.oauth.consumerSecret", "WjOXXXXXXXXX" )&lt;BR /&gt;System.setProperty("twitter4j.oauth.accessToken", "22XXX")&lt;BR /&gt;System.setProperty("twitter4j.oauth.accessTokenSecret", "vRXXXXXX")&lt;BR /&gt;val url = "jdbc:mysql://192.168.4.45:3306/twitter"&lt;BR /&gt;val username = "root"&lt;BR /&gt;val password = "123456"&lt;BR /&gt;Class.forName("com.mysql.jdbc.Driver").newInstance&lt;BR /&gt;val sparkConf = new SparkConf().setAppName("TwitterPopularTags")&lt;BR /&gt;val ssc = new StreamingContext(sparkConf, Seconds(2))&lt;BR /&gt;val stream = TwitterUtils.createStream(ssc, None, filters)&lt;BR /&gt;println("CURRENT CONFIGURATION:"+System.getProperties().get("twitter4j.oauth.accessTokenSecret"))&lt;BR /&gt;val tweets_toprint = stream.map(tuple =&amp;gt; "%s,%s,%s,%s,%s".format(tuple.getId, tuple.getCreatedAt,tuple.getSource, tuple.getText.toLowerCase.replaceAll(",", " "),tuple.getGeoLocation)).print&lt;BR /&gt;val tweets = stream.foreachRDD&lt;BR /&gt;{&lt;BR /&gt;rdd =&amp;gt; rdd.foreachPartition {&lt;BR /&gt;&lt;BR /&gt;it =&amp;gt;&lt;BR /&gt;val conn = DriverManager.getConnection(url,username,password)&lt;BR /&gt;val del = conn.prepareStatement("INSERT INTO tweets (ID,CreatedAt,Source,Text,GeoLocation) VALUES (?,?,?,?,?)")&lt;BR /&gt;for (tuple &amp;lt;- it) {&lt;BR /&gt;del.setLong (1, tuple.getId)&lt;BR /&gt;del.setString(2, tuple.getCreatedAt.toString)&lt;BR /&gt;del.setString(3, tuple.getSource)&lt;BR /&gt;del.setString(4, tuple.getText)&lt;BR /&gt;del.setString(5, tuple.getGeoLocation.toString)&lt;BR /&gt;&lt;BR /&gt;del.executeUpdate&lt;BR /&gt;}&lt;BR /&gt;conn.close()&lt;BR /&gt;}&lt;BR /&gt;}&lt;BR /&gt;ssc.start()&lt;BR /&gt;ssc.awaitTermination()&lt;BR /&gt;}&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I can submit the job on Spark normally and it prints out some tweets according to my filter, but I can't write data into the DB and I get a NullPointerException when a tweet is received. Here it is:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;15/03/18 13:18:17 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID 82, node2.com): java.lang.NullPointerException&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1$$anonfun$apply$2.apply(PopularHashTags.scala:55)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1$$anonfun$apply$2.apply(PopularHashTags.scala:50)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.collection.Iterator$class.foreach(Iterator.scala:727)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.util.NextIterator.foreach(NextIterator.scala:21)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1.apply(PopularHashTags.scala:50)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1.apply(PopularHashTags.scala:47)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.Task.run(Task.scala:56)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.lang.Thread.run(Thread.java:744)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;and I get this error when no tweets are received:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;15/03/18 13:18:18 ERROR JobScheduler: Error running job streaming job 1426673896000 ms.1&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 85, node2.com): java.lang.NullPointerException&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1$$anonfun$apply$2.apply(PopularHashTags.scala:55)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1$$anonfun$apply$2.apply(PopularHashTags.scala:50)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.collection.Iterator$class.foreach(Iterator.scala:727)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.util.NextIterator.foreach(NextIterator.scala:21)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1.apply(PopularHashTags.scala:50)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at TwitterPopularTags$$anonfun$2$$anonfun$apply$1.apply(PopularHashTags.scala:47)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.Task.run(Task.scala:56)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.lang.Thread.run(Thread.java:744)&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Driver stacktrace:&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.Option.foreach(Option.scala:236)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at akka.actor.ActorCell.invoke(ActorCell.scala:456)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at akka.dispatch.Mailbox.run(Mailbox.scala:219)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I guess the second error is normal, since the job aborts when no tweets are received, but I don't know what I am doing wrong to get the first error when a tweet is received. I managed to insert an RDD into the MySQL DB normally from the spark-shell.&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2015 10:44:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25678#M5328</guid>
      <dc:creator>tarekabouzeid91</dc:creator>
      <dc:date>2015-03-18T10:44:23Z</dc:date>
    </item>
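    <!--
      A hedged note on the post above: the NullPointerException points at the line that calls
      tuple.getGeoLocation.toString, and twitter4j's Status.getGeoLocation returns null for tweets that
      carry no location, so that is one likely cause (an assumption, not a confirmed answer from this
      thread). A hypothetical null-safe variant of the insert loop:

      for (tuple <- it) {
        del.setLong(1, tuple.getId)
        del.setString(2, tuple.getCreatedAt.toString)
        del.setString(3, tuple.getSource)
        del.setString(4, tuple.getText)
        // Option(...) wraps a possibly-null reference, avoiding .toString on null.
        del.setString(5, Option(tuple.getGeoLocation).map(_.toString).getOrElse(""))
        del.executeUpdate()
      }
    -->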
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25681#M5329</link>
      <description>&lt;P&gt;Looks good, although I would recommend closing the statement and connection too.&lt;/P&gt;&lt;P&gt;Also, you're executing an update for every datum. JDBC has an addBatch / executeBatch interface too, I think? It might be faster.&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2015 10:51:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/25681#M5329</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-03-18T10:51:41Z</dc:date>
    </item>
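    <!--
      A sketch of the addBatch / executeBatch suggestion above, applied to the earlier sample code; the
      table, columns, and the url/username/password values are assumed to be defined as in that post.

      r.foreachPartition { it =>
        val conn = DriverManager.getConnection(url, username, password)
        val stmt = conn.prepareStatement("INSERT INTO tweets (ID,Text) VALUES (?,?)")
        try {
          for (record <- it) {
            stmt.setString(1, record.toString)
            stmt.setString(2, "my input")
            stmt.addBatch()   // queue the row instead of executing it immediately
          }
          stmt.executeBatch() // send all queued rows to MySQL in one round trip
        } finally {
          stmt.close()
          conn.close()
        }
      }
    -->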
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/78373#M5330</link>
      <description>How do I implement the same in PySpark?</description>
      <pubDate>Mon, 13 Aug 2018 13:07:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/78373#M5330</guid>
      <dc:creator>Mrjack</dc:creator>
      <dc:date>2018-08-13T13:07:43Z</dc:date>
    </item>
    <item>
      <title>Re: Spark Streaming save output to mysql DB</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/311277#M5331</link>
      <description>&lt;P&gt;Same question: how do I use writeStream with JDBC in PySpark?&lt;/P&gt;</description>
      <pubDate>Wed, 10 Feb 2021 07:19:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-Streaming-save-output-to-mysql-DB/m-p/311277#M5331</guid>
      <dc:creator>oy</dc:creator>
      <dc:date>2021-02-10T07:19:19Z</dc:date>
    </item>
  </channel>
</rss>

