Created on 06-23-2016 07:55 AM - edited 09-16-2022 03:27 AM
Hello,
I tried to build a simple Spark Streaming application which reads new data from HDFS every 5 seconds and simply inserts it into a Hive table. On the official Spark web site I found an example of how to perform SQL operations on DStream data via the foreachRDD function, but the catch is that the example uses a SQLContext and converts the data from the RDD to a DataFrame. The problem is that with this DataFrame the data cannot be saved (appended) to an existing permanent Hive table; a HiveContext has to be created for that.
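The example from the guide looks roughly like this (word-count sketch with illustrative names; SQLContext.getOrCreate returns a singleton SQLContext and only a temporary table is used):

wordsDStream.foreachRDD( rdd => {
  // get the singleton SQLContext and convert the RDD[String] to a DataFrame
  val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
  import sqlContext.implicits._
  val wordsDF = rdd.toDF("word")
  wordsDF.registerTempTable("words")
  sqlContext.sql("select word, count(*) as total from words group by word").show()
} )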
So I tried the program below. It works, but fails after a while because it runs out of memory, since it creates a new HiveContext object for every batch.
I tried to create the HiveContext BEFORE the map and broadcast it, but that failed.
I tried to call getOrCreate, which works fine with SQLContext but not with HiveContext.
Any ideas?
Thanks
Tomas
val sparkConf = new SparkConf().setAppName("StreamHDFSdata")
sparkConf.set("spark.dynamicAllocation.enabled","false")
val ssc = new StreamingContext(sparkConf, Seconds(5))
ssc.checkpoint("/user/hdpuser/checkpoint")
val sc = ssc.sparkContext
val smDStream = ssc.textFileStream("/user/hdpuser/data")
val smSplitted = smDStream.map( x => x.split(";") ).map( x => Row.fromSeq( x ) )
val smStruct = StructType( (0 to 10).toList.map( x => "col"+x.toString).map( y => StructField( y , StringType, true ) ) )
//val hiveCx = new org.apache.spark.sql.hive.HiveContext(sc)
//val sqlBc = sc.broadcast( hiveCx )
smSplitted.foreachRDD( rdd => {
  //val sqlContext = SQLContext.getOrCreate(rdd.sparkContext) --> sqlContext cannot be used to create a permanent table
  val sqlContext = new org.apache.spark.sql.hive.HiveContext(rdd.sparkContext)
  //val sqlContext = sqlBc.value --> THIS DOES NOT WORK: fails at runtime
  //val sqlContext = new HiveContext.getOrCreate(rdd.sparkContext) --> THIS DOES NOT WORK EITHER: fails at runtime
  //import hiveCx.implicits._
  val smDF = sqlContext.createDataFrame( rdd, smStruct )
  //val smDF = rdd.toDF
  smDF.registerTempTable("sm")
  val smTrgPart = sqlContext.sql("insert into table onlinetblsm select * from sm")
  smTrgPart.write.mode(SaveMode.Append).saveAsTable("onlinetblsm")
} )
Created 01-03-2017 07:30 AM
Can you please share your code?
Thanks.
Created 01-14-2017 09:33 PM
Did this work for you?
If you found a solution, please post the code that worked for you.
Created 01-16-2017 01:35 AM
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql._
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, TimestampType, LongType, DoubleType, DataType}
import org.apache.spark.sql.Row
import java.io.File
import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.sql.hive.HiveContext

// lazily instantiated singleton HiveContext, shared by all micro-batches
object SQLHiveContextSingleton {
  @transient private var instance: HiveContext = _
  def getInstance(sparkContext: SparkContext): HiveContext = {
    synchronized {
      if (instance == null) {
        instance = new HiveContext(sparkContext)
      }
      instance
    }
  }
}

......

val mydataSplitted = mydataDStream.map( .... )

// saving the content of the mydataSplitted DStream of RDDs into a Hive table
mydataSplitted.foreachRDD( rdd => {
  println("Processing mydata RDD")
  val sqlContext = SQLHiveContextSingleton.getInstance( rdd.sparkContext )
  val mydataDF = sqlContext.createDataFrame( rdd, mydataStruct )
  mydataDF.registerTempTable("mydata")
  val mydataTrgPart = sqlContext.sql(mydataSQL)
  sqlContext.sql("SET hive.exec.dynamic.partition = true;")
  sqlContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict;")
  mydataTrgPart.write.mode(SaveMode.Append).partitionBy(partCol).saveAsTable(mydataTable)
} )
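Applied to the example from the original question, the foreachRDD loop would then look roughly like this (a sketch reusing smSplitted, smStruct and the onlinetblsm table from the first post):

smSplitted.foreachRDD( rdd => {
  // reuse the one shared HiveContext instead of creating a new one per batch
  val sqlContext = SQLHiveContextSingleton.getInstance( rdd.sparkContext )
  val smDF = sqlContext.createDataFrame( rdd, smStruct )
  smDF.registerTempTable("sm")
  sqlContext.sql("insert into table onlinetblsm select * from sm")
} )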
Created 01-16-2017 09:02 PM
Can I know which versions of Hive and Spark you are using?
Created 05-18-2017 02:07 AM
Thanks for sharing the code of your solution.
I've also found that simply making the HiveContext variable lazy works:
val sparkConf = new SparkConf().setAppName("StreamHDFSdata")
sparkConf.set("spark.dynamicAllocation.enabled","false")
val ssc = new StreamingContext(sparkConf, Seconds(5))
ssc.checkpoint("/user/hdpuser/checkpoint")
val sc = ssc.sparkContext
val smDStream = ssc.textFileStream("/user/hdpuser/data")
val smSplitted = smDStream.map( x => x.split(";") ).map( x => Row.fromSeq( x ) )
...
lazy val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
smSplitted.foreachRDD( rdd => {
  // use sqlContext here
} )
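Presumably this works because the lazy val is only initialized on the driver the first time a batch inside foreachRDD touches it, so a single HiveContext is created once and then reused for every micro-batch instead of being rebuilt per batch.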