Support Questions


PySpark + YARN + Kerberos = Chaos?

Explorer

Hi folks,

 

We have Cloudera Enterprise configured on our servers (YARN, Spark History Server, and the usual suspects). I'm able to run Spark jobs and connect to Hive using Kerberos credentials on the edge node simply by typing `pyspark`.

 

Now here is the catch: there seems to be no tutorial or code snippet out there that shows how to run a standalone Python script from a client Windows box, especially with Kerberos and YARN thrown into the mix. Pretty much all code snippets show:

 

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext  # HiveContext lives in pyspark.sql, not pyspark

# Local-mode configuration: no YARN and no Kerberos involved
conf = (SparkConf()
        .setMaster("local")
        .setAppName("My app")
        .set("spark.executor.memory", "1g"))
sc = SparkContext(conf=conf)
hc = HiveContext(sc)
# Do stuff

It's worth noting that none of these snippets show the Kerberos authentication code or how the Hive parameters are configured. Could someone please provide a snippet that submits Hive queries to a Spark cluster on YARN with Kerberos authentication enabled?

1 ACCEPTED SOLUTION

Expert Contributor

You will need to have Spark authenticate via Kerberos. This can be done by specifying the correct properties on the command line: https://www.cloudera.com/documentation/enterprise/5-7-x/topics/sg_spark_auth.html
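
For example, a minimal sketch (the principal, keytab path, and script name below are placeholders for your environment):

# Sketch only: substitute your own principal, keytab, and script
spark-submit \
  --master yarn \
  --deploy-mode client \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  my_hive_job.py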


6 REPLIES


Explorer

Thanks for the reply; your solution works too.

 

In my case, it was solved simply by having an active Kerberos session and running the Spark job with spark-submit; no additional properties were required.
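
In other words, something like this (the principal and script name are placeholders):

# Obtain a Kerberos ticket, then submit against YARN as usual
kinit user@EXAMPLE.COM
spark-submit --master yarn my_hive_job.py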

Expert Contributor

Hello Experts,

 

I am looking for sample Python code that can initiate a Kerberos ticket and impersonate a user within the code to access WebHDFS or WebHCat. I found some Java examples, such as http://dewoods.com/blog/hadoop-kerberos-guide, but I'm looking for similar Python code.

 

The Python code below handles Kerberos but doesn't do impersonation:

 

import requests
from requests_kerberos import HTTPKerberosAuth, REQUIRED

# Authenticate the HTTP request with the current Kerberos ticket (from kinit)
kerberos_auth = HTTPKerberosAuth(mutual_authentication=REQUIRED,
                                 sanitize_mutual_error_response=False)

webhdfs_url = "http://namenode:50070/webhdfs/v1/tmp?op=LISTSTATUS"
headers = {"X-Requested-By": "someuser"}

response = requests.get(webhdfs_url, headers=headers, auth=kerberos_auth, verify=False)

print("webhdfs response statuscode=", response.status_code)
print("webhdfs response responsetext=", response.text)
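
A sketch of what the impersonated call might look like, assuming the authenticated principal is configured as a proxy user on the cluster (hadoop.proxyuser.* settings); doas is WebHDFS's impersonation parameter:

# Sketch: requires the calling principal to be an allowed proxy user
webhdfs_url = "http://namenode:50070/webhdfs/v1/tmp?op=LISTSTATUS&doas=someuser"
response = requests.get(webhdfs_url, headers=headers, auth=kerberos_auth, verify=False)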

 

 

Thanks!

Expert Contributor

As this question has already been marked resolved and you are looking for Python examples rather than PySpark, you may want to ask this in a new question.

 

But you may also want to look at the various Python libraries that already implement access to HDFS data.
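
For example, a sketch using the hdfs package, one such library (install with pip install hdfs[kerberos]); the NameNode URL and path are placeholders:

from hdfs.ext.kerberos import KerberosClient  # pip install hdfs[kerberos]

# NameNode web address and HDFS path are placeholders for your environment
client = KerberosClient("http://namenode:50070")
print(client.list("/tmp"))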

Explorer

@hubbarja

Hello,

 

I decided not to open a new topic, but I'm currently facing issues when trying to connect PySpark to HBase with Kerberos.

 

The following code works if I shut down Kerberos in HBase:

 

%pyspark

host = 'hostname'
tablename = 'Test:Test2'

conf = {"hbase.zookeeper.quorum": host, "hbase.mapreduce.inputtable": tablename}

keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

hbase_rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=conf)

hbase_rdd.collect()

 

The following error is thrown with Kerberos on:

 

An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=32, exceptions:
Mon Aug 06 11:36:55 UTC 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68623: row 'Test:Test2,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hostname,60020,1533550276857, seqNum=0

Best regards,

Gil Pinheiro

 

Explorer

Any suggestions on the above request?