ClassCastException while creating a DataFrame in Spark: org.apache.hadoop.hive.serde2.io.TimestampWritable cannot be cast to org.apache.hadoop.io.LongWritable

scala> val MYL_TMX_RegStatusHistoryDF = spark.sql("select * from database.table")
MYL_TMX_RegStatusHistoryDF: org.apache.spark.sql.DataFrame = [regstatushistory_pk: int, regstatushistory_reg_fk: int ... 21 more fields]

scala> MYL_TMX_RegStatusHistoryDF
val MYL_TMX_RegStatusHistoryDF: org.apache.spark.sql.DataFrame

scala> MYL_TMX_RegStatusHistoryDF.show
18/12/14 01:22:06 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 8)
java.lang.ClassCastException: org.apache.hadoop.hive.serde2.io.TimestampWritable cannot be cast to org.apache.hadoop.io.LongWritable
	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableLongObjectInspector.get(WritableLongObjectInspector.java:36)
	at org.apache.spark.sql.hive.HadoopTableReader$anonfun$14$anonfun$apply$6.apply(TableReader.scala:401)
	at org.apache.spark.sql.hive.HadoopTableReader$anonfun$14$anonfun$apply$6.apply(TableReader.scala:401)
	at org.apache.spark.sql.hive.HadoopTableReader$anonfun$fillObject$2.apply(TableReader.scala:442)
	at org.apache.spark.sql.hive.HadoopTableReader$anonfun$fillObject$2.apply(TableReader.scala:433)
	at scala.collection.Iterator$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$anon$11.next(Iterator.scala:409)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$anonfun$10$anon$1.hasNext(WholeStageCodegenExec.scala:614)
	at org.apache.spark.sql.execution.SparkPlan$anonfun$2.apply(SparkPlan.scala:253)
	at org.apache.spark.sql.execution.SparkPlan$anonfun$2.apply(SparkPlan.scala:247)
	at org.apache.spark.rdd.RDD$anonfun$mapPartitionsInternal$1$anonfun$apply$25.apply(RDD.scala:830)
	at org.apache.spark.rdd.RDD$anonfun$mapPartitionsInternal$1$anonfun$apply$25.apply(RDD.scala:830)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
18/12/14 01:22:06 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 8, localhost, executor driver): java.lang.ClassCastException: org.apache.hadoop.hive.serde2.io.TimestampWritable cannot be cast to org.apache.hadoop.io.LongWritable
	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableLongObjectInspector.get(WritableLongObjectInspector.java:36)
	at org.apache.spark.sql.hive.HadoopTableReader$anonfun$14$anonfun$apply$6.apply(TableReader.scala:401)
	at org.apache.spark.sql.hive.HadoopTableReader$anonfun$14$anonfun$apply$6.apply(TableReader.scala:401)
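Judging by the trace, Spark is reading one column through `WritableLongObjectInspector` (i.e. the metastore declares it as `bigint`) while the SerDe is actually handing back a `TimestampWritable`, so the Hive table definition and the stored data likely disagree on that column's type. A sketch of ways to confirm and work around this, assuming a mismatched timestamp column; the column name `regstatushistory_ts` and the data path are hypothetical placeholders, not taken from the original post:

```scala
// Sketch only -- assumes the root cause is a metastore/data type mismatch.
// Column name and path below are hypothetical.

// 1) Check which columns Hive declares as bigint; compare with the actual data.
spark.sql("describe database.table").show(false)

// 2) If a column is declared bigint but the underlying data is a timestamp,
//    correct the declaration in the metastore (Hive CHANGE COLUMN syntax):
spark.sql(
  "alter table database.table " +
  "change column regstatushistory_ts regstatushistory_ts timestamp")

// 3) As a workaround, select only the columns that deserialize cleanly,
//    so the broken column is never read:
val partialDF = spark.sql(
  "select regstatushistory_pk, regstatushistory_reg_fk from database.table")
partialDF.show()

// 4) Or, for a file-backed table (e.g. Parquet), bypass the metastore schema
//    entirely and let Spark infer types from the files themselves:
val fileDF = spark.read.parquet("/path/to/table/data")
fileDF.printSchema()
```

Note that casting in the query (e.g. `cast(col as timestamp)`) generally does not help here, because the failure happens while deserializing the row, before any cast is applied; the declared type has to be fixed at the table level.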
