Zeppelin Livy not working: connection refused error

Contributor

We have a Kerberized cluster integrated with AD, and I am getting this error:

java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
    at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:189)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:173)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:338)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:105)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:262)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:328)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
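From the trace, the failure is the Zeppelin server trying to open its Thrift socket to the remote Livy interpreter process (`TSocket.open` inside `ClientFactory.create`), so it looks like that interpreter process never started or is not reachable. In case it helps, this is roughly how I have been checking the interpreter logs (the paths assume a default HDP layout and may differ on your install):

```bash
# Log directory is an assumption; HDP typically writes Zeppelin logs to /var/log/zeppelin
ls -l /var/log/zeppelin/

# Look for startup failures in the Livy interpreter's own log file
grep -iE "exception|connection refused" /var/log/zeppelin/zeppelin-interpreter-livy*.log | tail -n 20
```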

I have also tried all the other solutions posted on this community.

8 REPLIES

Guru

@Bhavin Tandel, are you using an HDP cluster? Can you please check whether you have followed the steps in the article below?

https://community.hortonworks.com/articles/80059/how-to-configure-zeppelin-livy-interpreter-for-sec....
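In short, the critical pieces are the Kerberos settings on Zeppelin's Livy interpreter and making the Zeppelin principal a Livy superuser. A minimal sketch of what that looks like (the host name, realm, principal, and keytab path below are placeholders, not your actual values):

```
# Zeppelin Livy interpreter settings (example values)
zeppelin.livy.url=http://livy-host.example.com:8998
zeppelin.livy.principal=zeppelin-mycluster@EXAMPLE.COM
zeppelin.livy.keytab=/etc/security/keytabs/zeppelin.server.kerberos.keytab

# livy.conf on the Livy server
livy.server.auth.type=kerberos
livy.impersonation.enabled=true
livy.superusers=zeppelin-mycluster
```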

Contributor

Hey @yvora, yes, I did follow that, but the error still persists.

Guru

@Bhavin Tandel, when are you hitting this error? Can you please explain the steps that lead to it?

Contributor
@yvora

Sorry for the late reply; I am getting the same issue again. The paragraph is just `sc.version`.
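For reference, the failing paragraph is nothing more than this (`%livy` is the default interpreter binding here):

```
%livy
sc.version
```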

@Bhavin Tandel What exception do you see in the Livy server logs?
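(The log location varies by install; on HDP it is usually something like the below, but treat the path as an assumption.)

```bash
# Adjust to where Livy actually writes logs on your cluster
ls /var/log/livy* 2>/dev/null
tail -n 100 /var/log/livy*/livy-livy-server.out
```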

Contributor

Actually, the home folder for the Zeppelin principal was not present; after creating it, we tried `livy.pyspark`, and it asked us to push the hive-site.xml file to the spark-client/conf folder on all datanodes.
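Concretely, the fix looked roughly like this (the user names, host names, and paths below are examples, not our exact values):

```bash
# Create the missing HDFS home folder for the Zeppelin principal
sudo -u hdfs hdfs dfs -mkdir -p /user/zeppelin
sudo -u hdfs hdfs dfs -chown zeppelin:hdfs /user/zeppelin

# Push hive-site.xml into the Spark client conf folder on every datanode
for host in dn1.example.com dn2.example.com; do
  scp /etc/hive/conf/hive-site.xml "$host":/etc/spark/conf/
done
```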

It worked after that, but now it has stopped again with the following error:

Error running rest call; nested exception is org.springframework.web.client.HttpClientErrorException: 413 Request Entity Too Large

Super Collaborator
@Bhavin Tandel

Add the two properties below to livy-conf from Ambari and see if that fixes the 413 error: set livy.http.request.header.size and livy.http.response.header.size to 131072.
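In Ambari that means adding these entries to Custom livy-conf (names and values as given above):

```
livy.http.request.header.size=131072
livy.http.response.header.size=131072
```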

Contributor

Well, I couldn't figure out the issue with the server, but it was up and running the next day without any changes being made.
