
Getting error "User session not found 403" when using Zeppelin with Livy

Rising Star

We have a kerberized cluster and have integrated LDAPS with Zeppelin. But when I try to use Livy, I get the following error

ERROR [2016-12-01 10:00:02,442] ({pool-2-thread-4} LivyHelper.java[createSession]:128) - Error getting session for user
java.lang.Exception: Cannot start spark.
    at org.apache.zeppelin.livy.LivyHelper.createSession(LivyHelper.java:117)
    at org.apache.zeppelin.livy.LivySparkSQLInterpreter.interpret(LivySparkSQLInterpreter.java:62)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
ERROR [2016-12-01 10:00:02,442] ({pool-2-thread-4} LivySparkSQLInterpreter.java[interpret]:70) - Exception in LivySparkSQLInterpreter while interpret
java.lang.Exception: Cannot start spark.

in the Zeppelin interpreter logs.

Also, when I don't enable LDAP for Zeppelin (that is, I log in as anonymous), I can see that the job is submitted to YARN, but it fails because the user is `zeppelin-clusterName`. When I use LDAP with Zeppelin, the job is not even submitted to YARN.

1 ACCEPTED SOLUTION

Rising Star

So, @kbadani and @jzhang, the issue is finally solved, and thank you for your support. It came down to a trivial but very important property: livy.superusers was set to `zeppelin-<cluster name in uppercase>`, but the principal was in lowercase 😛. Changing that solved the issue. @kbadani, you pointed out that property, but I didn't know it is case sensitive.
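
For reference, a minimal sketch of the fix in livy.conf, assuming a hypothetical cluster named `mycluster` (so the Zeppelin principal is `zeppelin-mycluster@REALM`):

# livy.conf -- the value is case sensitive and must match the principal exactly
livy.superusers = zeppelin-mycluster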


15 REPLIES

Super Collaborator

Did you kerberize your cluster using Ambari? There are several configurations you need for this to work.

Rising Star

Hey @jzhang, thanks for the interest. Yes, I used Ambari for kerberization, and I have also set the proxy user properties for Livy in HDFS.
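
For reference, the Livy proxy user entries in core-site.xml usually look like the following (a sketch; the wildcards can be narrowed to specific hosts and groups):

hadoop.proxyuser.livy.hosts = *
hadoop.proxyuser.livy.groups = *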

Super Collaborator

The session might have expired. Can you restart the Livy interpreter? If you still get the error, check the RM UI to find the YARN application log.
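
If an application did get submitted, one way to pull its log from the command line is `yarn logs` (a sketch; the application id is a placeholder taken from the RM UI):

yarn logs -applicationId application_1480000000000_0001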

Rising Star

@jzhang, so I did restart the server and checked the YARN logs, but the job is not getting submitted; I cannot see any submitted or failed jobs.

Super Collaborator

Do you run it in yarn-cluster mode? Set livy.spark.master to yarn-cluster on the interpreter settings page.
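
For reference, a sketch of the relevant properties on the Livy interpreter settings page (the Livy server URL is a placeholder for your own host):

livy.spark.master = yarn-cluster
zeppelin.livy.url = http://livy-host:8998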

Rising Star

Yes, I have already set that to yarn-cluster mode.


@Bhavin Tandel Have you set the livy.superusers property to your Zeppelin principal in livy.conf?

Rising Star

Yes, I have set it to the Zeppelin principal. But just as a heads up, I have not configured LDAP for MIT Kerberos; I am only using LDAP with Zeppelin.


@Bhavin Tandel please ensure you have all of the following set up:

1) When you log in using LDAP, does the user you are logged in as have a corresponding home directory on HDFS (/user/xyz)? Also, the Unix user should be present on the cluster (see the commands after this list).

2) livy.superusers = zeppelin-clustername in livy.conf (which you already have done)

3) Along with the 'livy' user, please also set up proxy permissions for both 'zeppelin' and 'zeppelin-clustername' in HDFS core-site.xml:

hadoop.proxyuser.zeppelin.hosts = *
hadoop.proxyuser.zeppelin.groups = *
hadoop.proxyuser.zeppelin-clustername.hosts = *
hadoop.proxyuser.zeppelin-clustername.groups = *
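
For point 1, a quick sketch of how to check and create the HDFS home directory ('xyz' is a placeholder for the LDAP user):

hdfs dfs -ls /user/xyz
# if it does not exist, create it as the HDFS superuser
hdfs dfs -mkdir -p /user/xyz
hdfs dfs -chown xyz:xyz /user/xyz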