Yes, you have to use `foreachRDD`: a `DStream` can only be written out through an output operation, and `foreachRDD` lets you convert each micro-batch RDD to a DataFrame and push it through the JDBC writer. The pattern below is adapted from https://stackoverflow.com/questions/44088090/spark-streaming-saving-data-to-mysql-with-foreachrdd-in...

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

// JDBC writer configuration
val connectionProperties = new Properties()
connectionProperties.put("user", "root")
connectionProperties.put("password", "*****")

structuredData.foreachRDD { rdd =>
  // get a SparkSession and bring the rdd.toDF() implicits into scope
  val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
  import spark.implicits._

  val df = rdd.toDF() // create a DataFrame from this batch's RDD
  // append the micro-batch to the MySQL table
  df.write.mode("append")
    .jdbc("jdbc:mysql://192.168.100.8:3306/hadoopguide", "topics", connectionProperties)
}
```
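For context, here is a minimal end-to-end sketch. The socket source on localhost:9999, the comma-separated input format, and the `Topic` case class are assumptions standing in for however `structuredData` is actually built in your job; only the `foreachRDD`-plus-JDBC pattern itself comes from the answer above.

```scala
import java.util.Properties
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical record type matching the "topics" table schema
case class Topic(id: Int, name: String)

object StreamToMySql {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("StreamToMySql"), Seconds(10))

    // Hypothetical source: lines of "id,name" arriving on a socket
    val structuredData = ssc.socketTextStream("localhost", 9999)
      .map(_.split(","))
      .map(f => Topic(f(0).trim.toInt, f(1).trim))

    val connectionProperties = new Properties()
    connectionProperties.put("user", "root")
    connectionProperties.put("password", "*****")

    structuredData.foreachRDD { rdd =>
      val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
      import spark.implicits._
      rdd.toDF().write.mode("append")
        .jdbc("jdbc:mysql://192.168.100.8:3306/hadoopguide", "topics", connectionProperties)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

One practical note: the MySQL Connector/J jar has to be on the classpath of both the driver and the executors (for example via `--jars` with `spark-submit`), otherwise the JDBC write fails with a "No suitable driver" error.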