Created 01-23-2018 02:24 PM
I am running a Spark job on a Hortonworks cluster. The source is an Oracle database; we fetch records from Oracle and analyse them.
I am getting the error message below. I have gone through many suggested solutions and, following them, I am already using ojdbc7.jar. Please guide me on this.
18/01/22 07:04:09 ERROR Executor: Exception in task 0.0 in stage 15.0 (TID 1069)
java.sql.SQLException: Protocol violation: [ 6, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, ]
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:536)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
    at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1066)
    at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)
    at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1015)
    at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:979)
    at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:579)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.getNext(JDBCRDD.scala:369)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.hasNext(JDBCRDD.scala:498)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoin$$anonfun$2.apply(BroadcastNestedLoopJoin.scala:105)
    at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoin$$anonfun$2.apply(BroadcastNestedLoopJoin.scala:96)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:717)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:717)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
End of LogType:stderr. This log file belongs to a running container (container_e13_1503300866021_880451_01_000002) and so may not be complete.
LogType:launch_container.sh Log Upload Time:Tue Jan 23 02:09:02 -0500 2018
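For context, the read is set up roughly like the sketch below (a minimal illustration only: the JDBC URL, schema/table name, credentials and fetchsize are placeholders, not our real values, and ojdbc7.jar is passed to spark-submit via --jars / --driver-class-path). The sketch uses the Spark 2.x SparkSession API; on Spark 1.x the same options go through sqlContext.read.

// Minimal sketch of the kind of JDBC read involved; all connection details are placeholders.
import org.apache.spark.sql.SparkSession

object OracleReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("oracle-read-sketch")
      .getOrCreate()

    // ojdbc7.jar is expected on both the driver and executor classpaths, e.g.
    //   spark-submit --jars /path/to/ojdbc7.jar --driver-class-path /path/to/ojdbc7.jar ...
    val oracleDf = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/SERVICE")   // placeholder URL
      .option("dbtable", "SCHEMA.SOURCE_TABLE")                   // placeholder table
      .option("user", "db_user")                                  // placeholder credentials
      .option("password", "db_password")
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .option("fetchsize", "1000")                                 // placeholder fetch size
      .load()

    // Downstream analysis joins this DataFrame with other data; the stack trace
    // above shows the failure while iterating the JDBC result set during a join.
    oracleDf.show(10)

    spark.stop()
  }
}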
Created 09-04-2019 11:20 PM
Did you find a solution?