
Access remote FS in spark2 shell


Hi,

I am not able to launch the spark2 shell and access a remote file system in a setup where cluster A and cluster B trust each other. This is what I run:
kinit etluser@DEV.REALM
spark2-shell --conf spark.yarn.access.hadoopFileSystems=hdfs://ip-10-85-54-144.eu-west-1.compute.internal:8020 --conf spark.authenticate=true --files /etc/hadoop/conf/hdfs-site.xml,/etc/hadoop/conf/core-site.xml

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
...
19/02/20 17:09:03 ERROR repl.Main: Failed to initialize Spark session.
org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1550671370377_0105 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: 10.85.54.144:8020, Ident: (token for etluser: HDFS_DELEGATION_TOKEN owner=etluser@DEV.REALM

10.85.54.144:8020 is the NameNode of the remote cluster (cluster B).
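To rule out the Kerberos client setup itself, a plain HDFS listing against the remote NameNode (reusing the ticket from the kinit above) is the sanity check I have in mind:

hdfs dfs -ls hdfs://ip-10-85-54-144.eu-west-1.compute.internal:8020/

If that listing works, authentication against the remote NameNode is fine, and the problem is probably limited to YARN renewing the delegation token at submission time.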


Any ideas on how to change the parameters of the spark2-shell command?
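My current guess (please correct me if I am off): the renewal is done by cluster A's YARN ResourceManager against cluster B's NameNode, so cluster B would need an auth_to_local rule that maps the yarn principal from my realm. Something like this in cluster B's core-site.xml, where the realm and principal names are guesses based on my kinit:

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](yarn@DEV.REALM)s/.*/yarn/
    DEFAULT
  </value>
</property>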


Thanks
