Member since 08-13-2019 · 84 Posts · 233 Kudos Received · 15 Solutions
02-16-2017
12:04 AM
1 Kudo
@Ajay
Can you please make sure the [main] section of Zeppelin's shiro.ini has the following properties configured?
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
If not, please configure them and restart the Zeppelin service.
02-15-2017
11:52 PM
1 Kudo
@Namit Maheshwari Thanks for the detailed explanation 🙂
02-15-2017
11:33 PM
4 Kudos
The following encryption zones exist in HDFS:
sudo su - hdfs -c "hdfs crypto -listZones"
/user/test_user  key1
Create a directory in a non-encrypted zone as test_user:
hdfs dfs -mkdir /tmp/dir3/example
Then try to delete that directory, again as test_user:
hdfs dfs -rm -r /tmp/dir3/example
Failed to move to trash: /tmp/dir3/example can't be moved into an encryption zone
Any help would be appreciated. Thanks
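For what it's worth, this usually happens because the HDFS trash directory for test_user (/user/test_user/.Trash) sits inside the /user/test_user encryption zone, and HDFS refuses to move files into an encryption zone. A hedged workaround sketch, assuming losing the file immediately is acceptable, is to bypass trash for such deletes:

```shell
# Delete without the trash move; the data is removed immediately
# and cannot be recovered from .Trash, so use with care:
hdfs dfs -rm -r -skipTrash /tmp/dir3/example
```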
02-09-2017
02:04 AM
1 Kudo
@yvora Thanks. Looks like that was the issue.
02-09-2017
01:24 AM
4 Kudos
@yvora It's because the %livy.sql interpreter runs in yarn-cluster mode, whereas the %sql interpreter runs in yarn-client mode. Hence %sql can find the local file on the client machine, but %livy.sql won't be able to. Try putting the file in HDFS and using LOAD DATA INPATH rather than LOAD DATA LOCAL INPATH. It should work.
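A minimal sketch of that suggestion; the file path and table name here are placeholders I made up, not from the original post:

```shell
# Copy the file from the local client machine into HDFS so that the
# yarn-cluster driver (which runs on an arbitrary cluster node) can see it:
hdfs dfs -put /tmp/data.csv /tmp/data.csv

# Then, in the %livy.sql paragraph, load from HDFS instead of the local FS:
#   LOAD DATA INPATH '/tmp/data.csv' INTO TABLE my_table;
# rather than:
#   LOAD DATA LOCAL INPATH '/tmp/data.csv' INTO TABLE my_table;
```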
02-09-2017
01:20 AM
3 Kudos
I'm trying to run a PySpark virtualenv example with conda. However, the application is failing with the error below. Any pointers on this issue?
Traceback (most recent call last):
  File "/usr/bin/conda", line 11, in <module>
    load_entry_point('conda==4.2.7', 'console_scripts', 'conda')()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 561, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2631, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2291, in load
    return self.resolve()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2297, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python2.7/site-packages/conda/cli/__init__.py", line 8, in <module>
    from .main import main  # NOQA
  File "/usr/lib/python2.7/site-packages/conda/cli/main.py", line 46, in <module>
    from ..base.context import context
  File "/usr/lib/python2.7/site-packages/conda/base/context.py", line 18, in <module>
    from ..common.configuration import (Configuration, MapParameter, PrimitiveParameter,
  File "/usr/lib/python2.7/site-packages/conda/common/configuration.py", line 40, in <module>
    from ruamel.yaml.comments import CommentedSeq, CommentedMap  # pragma: no cover
ImportError: No module named ruamel.yaml.comments
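Not a definitive fix, but the final ImportError suggests conda's own dependency ruamel.yaml is missing from the Python 2.7 environment that /usr/bin/conda runs under. A hedged first step, assuming pip targets that same environment, is to check for the module and reinstall it if absent:

```shell
# Check whether the Python behind /usr/bin/conda can import the module
# that the traceback reports as missing:
python -c "from ruamel.yaml.comments import CommentedSeq, CommentedMap" \
  || echo "ruamel.yaml is missing"

# If missing, reinstalling the package often repairs the broken conda CLI
# (assumption: pip installs into the same site-packages as /usr/bin/conda):
# pip install ruamel.yaml
```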
Labels:
- Apache Spark
12-15-2016
07:40 PM
1 Kudo
@Sebastian
Please post which HDP version you are on. Also, by any chance, is your cluster wire-encrypted? Please also post a snapshot of your interpreter settings, as mentioned in one of the earlier comments.
12-14-2016
06:57 PM
1 Kudo
@Bhavin Tandel good to know 🙂 Thanks
12-09-2016
09:58 PM
1 Kudo
@Bhavin Tandel Can you please post the error you are getting in the Livy server logs?
12-08-2016
06:42 PM
1 Kudo
@Bhavin Tandel Please ensure you have all of the following set up:
1) When you log in using LDAP, the user you are logged in as has a corresponding data directory on HDFS (/user/xyz). The corresponding Unix user should also be present on the cluster.
2) livy.superusers = zeppelin-clustername is set in livy.conf (which you have already done).
3) Along with the 'livy' user, also set up proxy permissions for both 'zeppelin' and 'zeppelin-clustername' in HDFS core-site.xml:
hadoop.proxyuser.zeppelin.hosts = *
hadoop.proxyuser.zeppelin.groups = *
hadoop.proxyuser.zeppelin-clustername.hosts = *
hadoop.proxyuser.zeppelin-clustername.groups = *