Member since
01-23-2017
114
Posts
19
Kudos Received
4
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2216 | 03-26-2018 04:53 AM
| 27641 | 12-01-2017 07:15 AM
| 913 | 11-28-2016 11:30 AM
| 1580 | 10-25-2016 11:26 AM
10-20-2017
01:25 PM
We are on HDP 2.6.2 and Zeppelin 0.7.2. While running a Zeppelin notebook with the Spark2 interpreter in Per User -> Isolated mode, it keeps failing with the error below: ERROR [2017-10-20 12:28:46,619] ({pool-2-thread-5} RemoteScheduler.java[getStatus]:256) - Can't get status information
org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:53)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:92)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobStatusPoller.getStatus(RemoteScheduler.java:254)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:342)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
... 15 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
... 16 more
ERROR [2017-10-20 12:28:46,619] ({pool-2-thread-5} NotebookServer.java[afterStatusChange]:2050) - Error
org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:379)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:101)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:410)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:329)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:53)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:92)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:377)
... 11 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
... 18 more
We have seen this but it didn't help us (we have yet to try re-installing). We also found the related JIRA bugs: https://issues.apache.org/jira/browse/ZEPPELIN-1700 https://issues.apache.org/jira/browse/ZEPPELIN-1984 https://issues.apache.org/jira/browse/ZEPPELIN-2547 We wanted to know whether there is any workaround or patch available for this. Thanks Venkat
Labels:
- Apache Spark
- Apache Zeppelin
08-23-2017
07:59 AM
1 Kudo
@manisha jain It looks like you are running the command as the root user, for which HDFS is checking the availability of a home directory. The error: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:dr means it is trying to access the /user directory with WRITE permission, which your user does not have. You can solve this in two ways: 1) run the command as the hdfs user (if you have the permissions), or 2) have a home directory created for your user (root in this case) with the required permissions, and then run the command.
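For option 2, a minimal sketch (assuming the HDFS superuser on this cluster is the usual `hdfs` account):

```shell
# Create a home directory for root in HDFS and hand ownership over.
# Run as the hdfs superuser; 'hdfs' as superuser name is an assumption.
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root
```

After this, commands run as root that write under their HDFS home directory should no longer hit the WRITE permission error.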
08-23-2017
07:49 AM
@Anup Shirolkar Can you please check the below: 1) The Sqoop client is installed on all the NodeManagers. 2) MySQL server connectivity works from all the NodeManagers. Thanks Venkat
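One quick way to verify point 2 from each NodeManager host (hostname, port, and user below are placeholders for your environment):

```shell
# TCP-level check that the MySQL port is reachable
nc -zv mysql-host.example.com 3306
# Full login check using the credentials from the Sqoop connection string
mysql -h mysql-host.example.com -P 3306 -u sqoop_user -p -e 'SELECT 1'
```

If the `nc` check fails on any NodeManager, fix the network/firewall path before re-running the Sqoop job.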
08-18-2017
01:35 PM
@Kishore Kumar As given by @Jay SenSharma and @Geoffrey Shelton Okot, the error: 18 Aug 2017 11:08:08,969 ERROR [ambari-client-thread-26] ViewRegistry:930 - Could not find the cluster identified by 2. 18 Aug 2017 11:08:08,970 ERROR [ambari-client-thread-26] ContainerResponse:419 - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container org.apache.ambari.server.view.IllegalClusterException: Failed to get cluster information associated with this view instance doesn't look good. Can you please follow the official documentation to create a new Files view: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-views/content/configuring_your_cluster_for_files_view.html Also make sure you can reach the cluster from the Ambari host, and that the forward and reverse lookups of the cluster hosts (NameNodes) from the Ambari server go through.
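A sketch of the forward and reverse lookup checks to run on the Ambari server (the NameNode hostname and IP below are placeholders):

```shell
# Forward lookup: hostname -> IP
nslookup namenode1.example.com
# Reverse lookup: IP -> hostname (use the IP the forward lookup returned)
nslookup 10.0.0.11
```

Both lookups should return consistent answers; a mismatch between forward and reverse records is a common cause of view/cluster association failures.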
08-11-2017
07:50 AM
@Narasimha K Can you please check in Ambari Alerts whether this is coming up as a stale alert? It could simply be hanging around. You can try deleting the alert from the Ambari DB and see if it comes back; if not, the issue is resolved. This can happen when there are changes to hostnames/IPs, or when Ambari can no longer find the process that triggered the alert. Please let me know how it behaves after fixing it in the DB. Thanks Venkat
08-10-2017
07:04 PM
@Narasimha K On the ResourceManager host, can you please check whether any old RM process is hanging around? You can find it using: jps -l | grep -i resourcemanager If you see multiple processes running, you need to kill the old process that is hanging around. Thanks Venkat
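A sketch of finding and stopping a stale ResourceManager (the PID shown is hypothetical; replace it with the old process's PID from the jps output):

```shell
# List all ResourceManager JVMs on this host
jps -l | grep -i resourcemanager
# If more than one PID appears, stop the stale one first with a normal kill
kill 12345
# Escalate only if it refuses to exit
kill -9 12345
```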
08-10-2017
06:51 PM
@Thanuja Kularathna The error you have given shows: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread It looks like the RegionServer does not have enough memory to start the service. You can check the heap sizes for both the HBase Master and the RegionServers under Ambari --> HBase --> Configs --> Settings; you can start with as little as 2 GB. You can also check the GC (garbage collector) logs for memory allocation failures.
08-04-2017
04:07 AM
@pv poreddy The connection string you are using in your screenshot is: jdbc:hive2//<servername>:10000 which is missing a colon (:). It is supposed to be jdbc:hive2://<servername>:10000 For more details you can refer to: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
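With the colon in place, the corrected URL can be tested from the command line with Beeline (keep your own servername; the `-e "SELECT 1;"` smoke query is just an illustration):

```shell
# Note the '://' after hive2 — the missing colon was the problem
beeline -u "jdbc:hive2://<servername>:10000" -e "SELECT 1;"
```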
08-02-2017
07:02 AM
@rich As given by @Kuldeep Kulkarni and @Takahiko Saito, Hive launches a YARN application when its default execution engine is Tez, and it waits for resources if the queue is fully occupied and no more resources are available. @gvenkatesh If you want to continue with Tez, we can do the below to work around the issue (only a workaround, as it is related to YARN resource limitations): 1) Increase the queue's maximum capacity. 2) Increase the user limit factor to more than 1. 3) Reduce the minimum container size to 512 MB; with that, we have the option to launch more containers (note: this needs to be considered based on cluster usage).
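As a sketch, the three knobs above map to these properties (the queue name "default" and the values shown are assumptions; set them via Ambari or directly in capacity-scheduler.xml / yarn-site.xml):

```properties
# 1) Raise the queue's maximum capacity (percent of cluster)
yarn.scheduler.capacity.root.default.maximum-capacity=100
# 2) Let a single user exceed the per-user share
yarn.scheduler.capacity.root.default.user-limit-factor=2
# 3) Smaller minimum allocation => more containers fit (yarn-site.xml)
yarn.scheduler.minimum-allocation-mb=512
```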
08-02-2017
06:45 AM
@Sanjib Behera If the cluster is Kerberos-enabled, then we need "kafka-console-consumer.sh --zookeeper abc00691239901.cde.com:6667 --topic test --from-beginning --security-protocol PLAINTEXTSASL" and for the Storm spout you need to have a JAAS file configured.
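A minimal KafkaClient JAAS section for the Storm spout might look like this (the keytab path and principal are placeholders for your environment):

```
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/storm.service.keytab"
   principal="storm@EXAMPLE.COM"
   serviceName="kafka";
};
```

Point the Storm worker JVMs at this file with -Djava.security.auth.login.config=<path-to-jaas-file>.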