I am trying to run a Spark job with Hive support enabled. It can run the command "show databases" successfully, but when it tries to read a Hive table (whose data is stored as text files on HDFS) it throws org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block:....
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure:
Lost task 0.3 in stage 3.0 (TID 6, 192.168.8.134, executor 0): org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: <block ID> = <path>
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:984)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
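For reference, the relevant part of the job is essentially the sketch below (the database and table names are placeholders, not the real ones):

import org.apache.spark.sql.SparkSession

// Session with Hive support; metastore/warehouse settings come from the
// hive-site.xml on the classpath.
val spark = SparkSession.builder()
  .appName("HiveReadTest")
  .enableHiveSupport()
  .getOrCreate()

// This succeeds and lists the databases from the metastore.
spark.sql("show databases").show()

// This is the call that fails: the executors throw BlockMissingException
// while reading the table's text files from HDFS.
spark.sql("select * from some_db.some_table").show()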
Here are the details of my dev environment:
Dev box (A): CentOS running in VMware, with Eclipse; the project uses the jars from Spark 2.2.1 with Hadoop 2.7 support.
Dev box (A) runs the Spark master and a slave, configured against the Thrift server on Dev box (B). Dev box (B) runs Hortonworks HDP 2.5.
So why is the app running on dev box (A) throwing the missing-block exception when it queries the Hive table, even though the file is present in HDFS?
Please note that I have already executed the following commands to check for missing/corrupt blocks:
sudo -u hdfs hdfs dfsadmin -report
sudo -u hdfs hdfs fsck / -list-corruptfileblocks
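I can also point fsck at the table's HDFS directory to list its blocks and their locations; the path below is only a placeholder for the real warehouse path:

sudo -u hdfs hdfs fsck /apps/hive/warehouse/some_db.db/some_table -files -blocks -locations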
Thanks for any help!