05-10-2022 11:40 AM
Hello Cloudera Team,

I have a requirement to read an HDFS file from a Kerberized cluster with Spark in local mode. When I try this, I get the error: Client cannot authenticate via:[TOKEN, KERBEROS]. This is the code I implemented:

public static void main(String[] args) {
    // Spark configuration for local mode against the Kerberized cluster
    SparkConf conf = new SparkConf().setAppName("spark-ml").setMaster("local[1]")
            .set("hadoop.security.authentication", "KERBEROS")
            .set("spark.hadoop.fs.defaultFS", "hdfs://server35:8020")
            .set("hadoop.rpc.protection", "privacy")
            .set("hadoop.security.authorization", "true")
            .set("spark.history.kerberos.enabled", "true")
            .set("spark.kerberos.keytab", "D:\\resources\\ruleuser.keytab")
            .set("spark.kerberos.principal", "ruleuser/server35@HADOOP.COM")
            .set("principal", "ruleuser/server35@HADOOP.COM")
            .set("keytab", "D:\\resources\\ruleuser.keytab")
            .set("spark.files", "D:\\resources\\core-site.xml,D:\\resources\\hdfs-site.xml")
            .set("class", "Main2");

    SparkContext context = new SparkContext(conf);
    context.addFile("D:\\resources\\", true);
    SparkSession session = SparkSession.builder().sparkContext(context).getOrCreate();

    // Read the CSV file from the Kerberized HDFS cluster
    Dataset<Row> df = session.read().format("csv").option("header", true)
            .load("hdfs://server35:8020/user/ruleuser/seahoarse/file123.csv");
    df.show(5);
}

Is it possible to do this in local mode without running the kinit command before spark-submit?

Thanks,
Ajay Babu Maguluri
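For reference, one commonly used way to avoid kinit in local mode is to log in from the keytab programmatically with Hadoop's UserGroupInformation before the SparkSession is created. The following is only a minimal sketch of that approach, not a verified fix for this exact cluster: the principal, keytab path, host name, and CSV path are taken from the snippet above and are assumptions, and the configuration keys may need adjusting for a specific CDH/CDP version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class Main2 {
    public static void main(String[] args) throws Exception {
        // Hadoop configuration pointing at the Kerberized cluster
        // (values copied from the question; adjust as needed).
        Configuration hadoopConf = new Configuration();
        hadoopConf.set("fs.defaultFS", "hdfs://server35:8020");
        hadoopConf.set("hadoop.security.authentication", "kerberos");
        hadoopConf.set("hadoop.rpc.protection", "privacy");

        // Log in from the keytab before any HDFS access; this replaces a manual kinit.
        UserGroupInformation.setConfiguration(hadoopConf);
        UserGroupInformation.loginUserFromKeytab(
                "ruleuser/server35@HADOOP.COM", "D:\\resources\\ruleuser.keytab");

        // Create the session after the login so the driver already holds Kerberos credentials.
        SparkSession session = SparkSession.builder()
                .appName("spark-ml")
                .master("local[1]")
                .config("spark.hadoop.fs.defaultFS", "hdfs://server35:8020")
                .config("spark.hadoop.hadoop.security.authentication", "kerberos")
                .config("spark.hadoop.hadoop.rpc.protection", "privacy")
                .getOrCreate();

        Dataset<Row> df = session.read().format("csv").option("header", true)
                .load("hdfs://server35:8020/user/ruleuser/seahoarse/file123.csv");
        df.show(5);
    }
}

In local mode, spark.kerberos.principal and spark.kerberos.keytab are normally handled by spark-submit against a cluster manager, which is why an explicit UserGroupInformation login in the driver is often used instead.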
Labels: Apache Spark