Member since
10-25-2016
15
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 4783 | 12-21-2016 09:03 AM |
01-24-2017 01:37 PM
@jzhang Thanks for the first answer. It makes sense to increase the configured memory from 512 MB to 4 GB; I will give that a try and monitor the response time. Sorry for not being clear on the second question. What I meant to ask is: do we need to tune the JDBC interpreter configuration when users run Hive queries in Zeppelin notebooks, to make them faster? If I'm not mistaken, the cluster's Hive is set up with the HiveServer2 Thrift interface. Should we move to the Spark Thrift Server in order to make use of the SparkContext configuration? (Sorry to bother you with such basic questions; I am new to this and keen to learn.)
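On the JDBC interpreter question: in Zeppelin, "tuning" it mostly means pointing the interpreter at HiveServer2 and bounding concurrent connections so many users don't exhaust the server. A minimal sketch of the relevant JDBC interpreter properties, assuming a hypothetical HiveServer2 host and the default port 10000 (not settings verified against this cluster):

```properties
# Hypothetical host -- replace with your HiveServer2 endpoint.
hive.url = jdbc:hive2://hiveserver2-host:10000/default
hive.driver = org.apache.hive.jdbc.HiveDriver
hive.user = hive

# Allow parallel query execution, but cap the connection pool so
# 30-50 users don't overwhelm HiveServer2.
zeppelin.jdbc.concurrent.use = true
zeppelin.jdbc.concurrent.max_connection = 10
```

With this setup, `%jdbc(hive)` paragraphs go straight to HiveServer2 and never touch the SparkContext, so the Spark interpreter's memory settings do not affect them.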
01-24-2017 12:54 PM
Thank you for the help, @jzhang. As I understand it, Zeppelin's Spark interpreter is configured in yarn-client mode, and SPARK_HOME is set to the /usr/hdp/current/spark directory on the same (master) host. The Spark interpreter is already in shared mode, although spark.executor.memory is still at its 512 MB default. So what do you think the actual problem is? And if users execute Hive queries through the JDBC interpreter, does that make any difference, or will it be fine as long as the SparkContext and executors are configured correctly?

[Attachments: zeppelin-1.png, zeppelin-2.png, zeppelin-3.png]

I am sharing Zeppelin's current configuration for reference.
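For reference, the Spark interpreter changes discussed in this thread might look roughly like this in Zeppelin's interpreter settings. This is a sketch under the thread's assumptions (yarn-client mode, raising executor memory from its 512 MB default); the instance count is hypothetical and should match your YARN queue capacity:

```properties
# Zeppelin Spark interpreter -- sketch only, verify against YARN capacity.
master = yarn-client
spark.executor.memory = 4g      # up from the 512 MB default
spark.executor.cores = 3        # hypothetical
spark.executor.instances = 6    # hypothetical; e.g. one per worker node
```

After changing these in the interpreter settings, the Spark interpreter has to be restarted for the new executors to be requested from YARN.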
01-24-2017 06:17 AM
Accessing Zeppelin Notebook by 15 users at the same time making it damn slow to execute their queries. So how can we make Zeppelin more scale-able to handle various queries on nearly 50GB data from multiple users(say 30-50) at the same time without slowing the response time? I understand Zeppelin uses memory for its execution, but we have 6 nodes of cluster with 27GB RAM, 8 Cores and 521 GB disk usage on each. And for reference Zeppelin is configured on one of the MASTER NODE. Looking for some viable suggestions. Many thanks in advance.
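To make the memory question concrete, here is a back-of-the-envelope executor sizing for the cluster described above. It is a sketch under stated assumptions (reserve roughly 3 GB and 1 core per node for the OS and other daemons, 3 cores per executor, ~2 GB per executor for YARN overhead); the real numbers depend on how YARN is actually configured:

```python
# Rough Spark-on-YARN sizing sketch for the cluster in this post:
# 6 worker nodes, 27 GB RAM and 8 cores each.
nodes = 6
ram_per_node_gb = 27
cores_per_node = 8

usable_ram_gb = ram_per_node_gb - 3   # leave headroom for OS/daemons (assumption)
usable_cores = cores_per_node - 1     # leave one core per node (assumption)

executor_cores = 3                    # smallish executors (assumption)
executors_per_node = usable_cores // executor_cores       # -> 2
executor_mem_gb = usable_ram_gb // executors_per_node - 2  # ~2 GB YARN overhead

total_executors = nodes * executors_per_node
print(total_executors, executor_mem_gb)  # 12 executors of ~10 GB each
```

Under these assumptions the cluster could run about 12 executors of ~10 GB, which is far more capacity than the 512 MB default executor suggests is being used.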
Labels:
- Apache Zeppelin
12-21-2016 09:03 AM
Hi Ward, thank you so much for your reply. I found the way to the solution after investigating the region server log. The region server was actually trying to open an old service that had been removed long ago, but for some reason that service's directories and metadata had been left under the /app/hbase/data/service_name folder, and this was responsible for the main error. As soon as I deleted service_name/* from the /app/hbase/data directory, the problem was solved. Of course, I then restarted the HBase service from Ambari. Thanks!
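For anyone hitting the same issue, the clean-up described above can be done from the command line roughly like this. The `service_name` path is the placeholder from this post; list the directory and confirm what is stale before deleting anything:

```shell
# Inspect what is left under the HBase data root first.
hdfs dfs -ls /app/hbase/data

# Remove only the stale service's leftover directory
# (goes to the HDFS trash by default, so it can be recovered).
hdfs dfs -rm -r /app/hbase/data/service_name

# Then restart the HBase service from Ambari so the
# region servers re-read their state.
```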
12-20-2016 02:49 PM
[Attachments: hbase-alert.png, hbase-service.png]
Labels:
- Apache Ambari
- Apache Hadoop
- Apache HBase