Hi @drgenious
Are you getting an error similar to the one reported in KUDU-2633? That appears to be an open JIRA in the community:
ERROR core.JobRunShell: Job DEFAULT.EventKpisConsumer threw an unhandled Exception:
org.apache.spark.SparkException: Job aborted due to stage failure: Aborting TaskSet 109.0 because task 3 (partition 3) cannot run anywhere due to node and executor blacklist. Blacklisting behavior can be configured via spark.blacklist.*.
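As a temporary workaround while that JIRA is open, you can relax or disable blacklisting through the spark.blacklist.* properties the error message points to. Note this only masks the symptom; the underlying task failures remain. These are standard Spark 2.x settings (the values shown are just examples), passed at submit time:

--conf spark.blacklist.enabled=false

or, to tolerate more failures before a node/executor is blacklisted:

--conf spark.blacklist.task.maxTaskAttemptsPerExecutor=2 --conf spark.blacklist.task.maxTaskAttemptsPerNode=2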
If you have the data in HDFS in CSV/Avro/Parquet format, you can use the command below to import the files into a Kudu table.
Prerequisite: a Kudu Spark tools jar of a compatible version (1.6 or higher).
spark2-submit --master yarn/local \
  --class org.apache.kudu.spark.tools.ImportExportFiles \
  <path of kudu jar>/kudu-spark2-tools_2.11-1.6.0.jar \
  --operation=import \
  --format=<parquet/avro/csv> \
  --master-addrs=<kudu master host>:<port number> \
  --path=<hdfs path for data> \
  --table-name=impala::<table name>
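For example, a hypothetical run might look like this (the jar path, Kudu master host, HDFS path, and table name are placeholders to replace with your own; 7051 is the default Kudu master RPC port, and the impala:: prefix is needed for tables created through Impala):

spark2-submit --master yarn \
  --class org.apache.kudu.spark.tools.ImportExportFiles \
  /opt/kudu/kudu-spark2-tools_2.11-1.6.0.jar \
  --operation=import \
  --format=csv \
  --master-addrs=kudu-master-1:7051 \
  --path=/user/etl/events_csv \
  --table-name=impala::default.event_kpis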
Hope this helps. Please accept the answer and vote up if it did.