
How to write data from a DStream into a permanent Hive table


Hello,

I tried to build a simple Spark Streaming application that reads new data from HDFS every 5 seconds and simply inserts it into a Hive table. On the official Spark website I found an example of how to perform SQL operations on DStream data via the foreachRDD function, but the catch is that the example uses a SQLContext to transform the RDD into a DataFrame. The problem is that such a DataFrame cannot be saved (appended) to an existing permanent Hive table; a HiveContext has to be created for that.
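For reference, the pattern from the Spark Streaming programming guide that I started from looks roughly like this (a minimal sketch with illustrative names, not my actual code; assume words is a DStream[String]):

words.foreachRDD { rdd =>
  // getOrCreate returns a lazily instantiated singleton SQLContext
  val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
  import sqlContext.implicits._
  val wordsDF = rdd.toDF("word")   // RDD[String] -> DataFrame with one column
  wordsDF.registerTempTable("words")
  sqlContext.sql("select word, count(*) as total from words group by word").show()
}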

 

So I tried the program below. It works, but fails after a while with out-of-memory errors, because it creates a new HiveContext object for every batch.

 

I tried to create the HiveContext BEFORE the map and broadcast it, but that failed.

I also tried calling getOrCreate, which works fine with SQLContext but not with HiveContext.

 

Any ideas?

Thanks

 

Tomas

 

(attached screenshot: Snímka.PNG)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.sql.{Row, SaveMode, SQLContext}
import org.apache.spark.sql.types.{StructType, StructField, StringType}

val sparkConf = new SparkConf().setAppName("StreamHDFSdata")
sparkConf.set("spark.dynamicAllocation.enabled","false")
val ssc = new StreamingContext(sparkConf, Seconds(5))
ssc.checkpoint("/user/hdpuser/checkpoint")
val sc = ssc.sparkContext


val smDStream = ssc.textFileStream("/user/hdpuser/data")
val smSplitted = smDStream.map( x => x.split(";") ).map( x => Row.fromSeq( x ) )
val smStruct = StructType( (0 to 10).toList.map( x => "col"+x.toString).map( y => StructField( y, StringType, true ) ) )

//val hiveCx = new org.apache.spark.sql.hive.HiveContext(sc)
//val sqlBc = sc.broadcast( hiveCx )

smSplitted.foreachRDD( rdd => {
  //val sqlContext = SQLContext.getOrCreate(rdd.sparkContext) --> sqlContext cannot be used for permanent table create
  // a new HiveContext is created for every micro-batch -- this is what eventually runs out of memory
  val sqlContext = new org.apache.spark.sql.hive.HiveContext(rdd.sparkContext)
  //val sqlContext = sqlBc.value --> THIS DOES NOT WORK: fails during runtime
  //val sqlContext = new HiveContext.getOrCreate(rdd.sparkContext) --> THIS DOES NOT WORK EITHER: fails during runtime

  //import hiveCx.implicits._
  val smDF = sqlContext.createDataFrame( rdd, smStruct )
  //val smDF = rdd.toDF
  smDF.registerTempTable("sm")
  val smTrgPart = sqlContext.sql("insert into table onlinetblsm select * from sm")
  smTrgPart.write.mode(SaveMode.Append).saveAsTable("onlinetblsm")
} )

 

1 ACCEPTED SOLUTION

In the meantime I figured out one possible solution, which seems to be stable and does not run out of memory: the HiveContext has to be created in a singleton object, outside the foreachRDD (see the code posted further down in this thread).


6 REPLIES


New Contributor

Can you please share your code?
Thanks.

Rising Star

Did this work for you?

If not, please post the code which worked for you

import org.apache.spark.{SparkConf,SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql._
import org.apache.spark.sql.types.{StructType,StructField,StringType,IntegerType,TimestampType,LongType,DoubleType,DataType}
import org.apache.spark.sql.Row
import java.io.File
import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.sql.hive.HiveContext

// Driver-side singleton holder: only one HiveContext is ever created and it is reused across batches
object SQLHiveContextSingleton {
    @transient private var instance: HiveContext = _
    def getInstance(sparkContext: SparkContext): HiveContext = {
        synchronized {
            if (instance == null) {
                instance = new HiveContext(sparkContext)
            }
            instance
        }
    }
}


......


val mydataSplitted = mydataDStream.map( .... )

// save the content of the mydataSplitted DStream of RDDs into the Hive table

mydataSplitted.foreachRDD( rdd => {
        println("Processing mydata RDD")
        // reuse the single driver-side HiveContext instead of creating a new one per batch
        val sqlContext = SQLHiveContextSingleton.getInstance( rdd.sparkContext )
        val mydataDF = sqlContext.createDataFrame( rdd, mydataStruct )
        mydataDF.registerTempTable("mydata")
        val mydataTrgPart = sqlContext.sql(mydataSQL)
        // enable dynamic partitioning for the partitioned target table
        sqlContext.sql("SET hive.exec.dynamic.partition = true;")
        sqlContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict;")
        mydataTrgPart.write.mode(SaveMode.Append).partitionBy(partCol).saveAsTable(mydataTable)
    } )

Rising Star

Can I know which versions of Hive and Spark you are using?

New Contributor

Thanks for sharing the code of your solution.
I've also found that simply making the HiveContext variable lazy works:

val sparkConf = new SparkConf().setAppName("StreamHDFSdata")
sparkConf.set("spark.dynamicAllocation.enabled","false")
val ssc = new StreamingContext(sparkConf, Seconds(5))
ssc.checkpoint("/user/hdpuser/checkpoint")
val sc = ssc.sparkContext

val smDStream = ssc.textFileStream("/user/hdpuser/data")
val smSplitted = smDStream.map( x => x.split(";") ).map( x => Row.fromSeq( x ) )
...

// lazy: the HiveContext is instantiated only once, on its first use inside foreachRDD (which runs on the driver)
lazy val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

smSplitted.foreachRDD( rdd => {
// use sqlContext here
} )
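
For example, the body of that foreachRDD could mirror the code from the original post (just a sketch, not necessarily the exact code I ran; it assumes the smStruct schema and the onlinetblsm table defined above):

smSplitted.foreachRDD( rdd => {
  // the lazy HiveContext is created on the first batch and then reused
  val smDF = sqlContext.createDataFrame( rdd, smStruct )
  smDF.registerTempTable("sm")
  smDF.write.mode(SaveMode.Append).saveAsTable("onlinetblsm")
} )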