Member since: 07-07-2017 | Posts: 15 | Kudos Received: 0 | Solutions: 0
04-03-2018 10:37 AM
@Akash Mendiratta The default "sync.query.timeout" is (60*1000) milliseconds, i.e. 1 minute, so please try increasing the value and see if it works for you.
https://github.com/apache/ambari/blob/release-2.6.1/contrib/views/hive20/src/main/java/org/apache/ambari/view/hive20/client/DDLDelegatorImpl.java#L235-L237
https://github.com/apache/ambari/blob/release-2.6.1/contrib/views/hive-next/src/main/java/org/apache/ambari/view/hive2/client/DDLDelegatorImpl.java#L185-L187
In the "/etc/ambari-server/conf/ambari.properties" file, please add the following property to increase the sync timeout a bit.
Syntax:
views.ambari.hive.<HIVE2_INSTANCE_NAME>.sync.query.timeout=120000
Example:
views.ambari.hive.hive_instance_1_50.sync.query.timeout=120000
Please replace "<HIVE2_INSTANCE_NAME>" with your Hive View instance name. After adding the above property, please restart the Ambari server.
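As a rough illustration (assuming a default Ambari layout, root access, and the example instance name above; substitute your own Hive View instance name), the property could be appended and the server restarted from a shell:

  # Append the sync timeout override for the example Hive View instance (adjust the name to yours)
  echo "views.ambari.hive.hive_instance_1_50.sync.query.timeout=120000" >> /etc/ambari-server/conf/ambari.properties
  # Restart Ambari Server so the view picks up the new property
  ambari-server restart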
11-22-2017 09:47 AM
@Akash Mendiratta I see an improvement request was already raised recently for a similar requirement, using the property "zeppelin.spark.uiWebUrl": https://issues.apache.org/jira/browse/ZEPPELIN-2949 . Please refer to that JIRA for more details. Unfortunately, however, the latest HDP release (2.6.3) includes Zeppelin version 0.7.3, while the mentioned improvement applies to Zeppelin version 0.8.0.
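For reference, once you are on Zeppelin 0.8.0 or later, a minimal sketch of how that property might be set in the Spark interpreter settings (the host and port below are placeholders, not values from your cluster):

  zeppelin.spark.uiWebUrl = http://<your-proxy-or-gateway-host>:4040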
11-22-2017 07:40 AM
Yes, we have made those changes. Whenever I enable user impersonation in the Hive interpreter and submit a query, the query gets submitted as the anonymous user. For the sh and Spark interpreters it works.
08-23-2017 07:48 PM
Is this useful? Any questions are welcome.
09-22-2017 12:00 PM
I followed the above steps and it throws the following error, even though Kerberos is disabled on my cluster and Apache Ranger authorization is enabled:
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:90)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:211)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:377)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:105)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:387)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:329)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Is there any different approach to enable impersonation? Please let me know. Thank you.
07-13-2017 08:26 AM
The configuration is the same for both Hive Views on both instances.
07-08-2017 12:33 PM
1 Kudo
It depends on how you use SparkR. There are two scenarios that require you to install R on all the nodes:
* If you use an R UDF, you need R on every node, because that UDF runs on the executor side.
* If you want to convert an R data frame to a Spark DataFrame.
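A minimal SparkR sketch of the two scenarios above (written against Spark 2.x; the column names come from R's built-in "faithful" dataset, and the derived column is just an illustration):

  library(SparkR)
  sparkR.session()

  # Convert a local R data frame into a Spark DataFrame
  df <- createDataFrame(faithful)

  # An R UDF via dapply(): this function is shipped to and executed by the R
  # processes on the executors, which is why R must be installed on every node
  schema <- structType(structField("eruptions", "double"),
                       structField("waiting", "double"),
                       structField("waiting_secs", "double"))
  result <- dapply(df, function(x) { cbind(x, x$waiting * 60) }, schema)
  head(collect(result))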