Member since: 01-26-2017
Posts: 20
Kudos Received: 1
Solutions: 0
05-31-2017
08:33 AM
OK, yum says that 0.12.0.2.6.0.3-8 is installed, but Ambari says that it is 0.11.0.2.6. I've just stopped trusting what Ambari says.
05-31-2017
06:48 AM
I checked HDP 2.6 and I see that it is running Knox 0.11, which is affected by the WebHDFS bug. It has been patched for HDP 2.5, but I haven't seen anything for HDP 2.6. Was Knox patched, or is an upgrade for HDP 2.6 in the works?
05-03-2017
06:18 AM
There were a few problems but no errors. The problem isn't that the resources were missing; they corresponded to the 2 Zeppelin views that we created, so I expected them to be removed. The problem was that not all references to those views were removed and, more importantly, Ambari could not recover from their absence. My guess is that everyone knew the Zeppelin view was useless, so its removal was never tested.
04-25-2017
10:15 AM
The problem appears to have been the Zeppelin view. We had 2 views instantiated and 2 users who had access to them. Those users couldn't be accessed, or even deleted, and they broke the Users admin view. Looking at the log files I found that resource ids 58 and 153 could not be found; these corresponded to the 2 Zeppelin views. I found them in the adminprivileges table and deleted them. I am not convinced that all traces are removed, but at least we can log into Ambari. Any comments?
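For anyone hitting the same thing, the cleanup amounts to deleting privilege rows whose resource_id no longer resolves to an existing resource. Here is a minimal sketch of that logic against an in-memory SQLite database; the real Ambari schema lives in PostgreSQL/MySQL and the table and column names below are simplified assumptions, so check them against your database before deleting anything:

```python
import sqlite3

# Simplified stand-ins for Ambari's resource and privilege tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE adminresource (resource_id INTEGER PRIMARY KEY);
    CREATE TABLE adminprivilege (
        privilege_id INTEGER PRIMARY KEY,
        resource_id  INTEGER NOT NULL
    );
""")
conn.executemany("INSERT INTO adminresource VALUES (?)", [(1,), (2,)])
# Privileges 10/11 point at live resources; 58 and 153 are orphaned,
# like the privileges left behind by the deleted Zeppelin view instances.
conn.executemany("INSERT INTO adminprivilege VALUES (?, ?)",
                 [(10, 1), (11, 2), (58, 58), (153, 153)])

# Find privileges whose resource no longer exists, then delete them.
orphan_query = """
    SELECT privilege_id FROM adminprivilege
    WHERE resource_id NOT IN (SELECT resource_id FROM adminresource)
    ORDER BY privilege_id
"""
orphans = [p for (p,) in conn.execute(orphan_query)]
conn.execute("""
    DELETE FROM adminprivilege
    WHERE resource_id NOT IN (SELECT resource_id FROM adminresource)
""")
print(orphans)  # the rows that were removed
```

Back up the Ambari database first; the same NOT IN select can be run on its own to see what would be deleted before actually deleting it.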
04-25-2017
05:51 AM
I updated Ambari as part of an upgrade to HDP 2.6, and after that some users are unable to log in due to:
PermissionHelper:78 - Error occurred when cluster or view is searched based on resource id
java.lang.NullPointerException at org.apache.ambari.server.security.authorization.PermissionHelper.getPermissionLabels(PermissionHelper.java:74)
As far as I can tell the problem is that there are view(s) that no longer exist but are still part of the profile of the users who cannot log in. Any suggestions?
Labels:
- Apache Ambari
04-12-2017
05:32 AM
/gateway/default/static/... shouldn't exist. Knox rewrites the /static/... links from the namenode to /gateway/default/hdfs/static/... on the way to you, and applies the opposite transform on the way back to the namenode.
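The two directions of that rewrite can be sketched as a pair of functions; the /gateway/default/hdfs prefix below assumes the default topology and the hdfs service path, as in Knox's stock WEBHDFS rewrite rules, so substitute your own topology name:

```python
GATEWAY_PREFIX = "/gateway/default/hdfs"  # assumed topology/service path

def rewrite_outbound(path: str) -> str:
    """NameNode -> client: prefix bare /static/... links with the gateway path."""
    if path.startswith("/static/"):
        return GATEWAY_PREFIX + path
    return path

def rewrite_inbound(path: str) -> str:
    """Client -> NameNode: strip the gateway prefix off again."""
    if path.startswith(GATEWAY_PREFIX + "/static/"):
        return path[len(GATEWAY_PREFIX):]
    return path
```

So a link to /static/bootstrap.min.css reaches the browser as /gateway/default/hdfs/static/bootstrap.min.css; if you are seeing /gateway/default/static/..., the rewrite rule is not being applied.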
04-07-2017
05:31 AM
1. Direct access works.
2. View access mostly works, e.g. assets are fetched correctly; only 2 URIs get a 500.
3. I'll file a bug report.
04-05-2017
10:56 AM
I am trying to get the SmartSense view working, but it fails because /api/v1/views/HORTONWORKS_SMARTSENSE/versions/1.3.0.0-22/instances/SmartSense/resources/hst/context and /api/v1/views/HORTONWORKS_SMARTSENSE/versions/1.3.0.0-22/instances/SmartSense/resources/hst/checkconfig return 500 errors. Direct access to activity-analyzer and activity-explorer works fine.
Labels:
- Apache Ambari
- Hortonworks SmartSense
04-04-2017
06:55 AM
I created a rule in Ranger that allowed select access to @null and default. It feels like a hack, but it also seems to work.
02-22-2017
06:45 AM
This is an expect script, not a shell script. Your shell does not understand expect commands.
02-15-2017
07:43 AM
Thanks, that's what I was afraid of. I noticed that there is a directory for every notebook, each with a note.json in it. As I have changed the configuration of my Livy interpreter, I also need to copy /usr/hdp/current/zeppelin-server/conf/interpreter.json.
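The move then boils down to copying the per-notebook directories plus interpreter.json. A self-contained sketch of that, run here against a temporary directory standing in for the real HDP paths (/usr/hdp/current/zeppelin-server/notebook and .../conf/interpreter.json; adjust to your layout):

```python
import shutil
import tempfile
from pathlib import Path

def backup_zeppelin(zeppelin_home: Path, dest: Path) -> list:
    """Copy every notebook directory (each holds a note.json) and interpreter.json."""
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for note_dir in (zeppelin_home / "notebook").iterdir():
        if (note_dir / "note.json").exists():  # skip stray files
            shutil.copytree(note_dir, dest / "notebook" / note_dir.name)
            copied.append(note_dir.name)
    shutil.copy2(zeppelin_home / "conf" / "interpreter.json",
                 dest / "interpreter.json")
    return sorted(copied)

# Demo against a fake layout so the sketch runs anywhere.
root = Path(tempfile.mkdtemp())
(root / "notebook" / "2A94M5J1Z").mkdir(parents=True)
(root / "notebook" / "2A94M5J1Z" / "note.json").write_text("{}")
(root / "conf").mkdir()
(root / "conf" / "interpreter.json").write_text("{}")
copied = backup_zeppelin(root, root / "backup")
print(copied)
```

On the new host, the backup contents go into the same two locations before starting the Zeppelin server.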
02-14-2017
07:25 AM
I need to move my Zeppelin server, but all that I can find in Ambari is removing the service. That isn't a problem in itself, but it took a while to get Zeppelin and the Livy interpreter set up and I don't want to lose my configuration. The interpreter settings seem to be stored only in a file, but I presume the Zeppelin server settings are stored in Ambari's database. Has anyone moved Zeppelin, and if so, how?
Labels:
- Apache Ambari
- Apache Zeppelin
02-08-2017
12:20 PM
1 Kudo
I now have a working Livy; at least sc.version works. After trying everything I could find with Livy 0.2.0 (the version in HDP 2.5.0) I decided to give 0.3.0 a try. I believe the problem is caused by a bug in Spark 1.6.2 when connecting to the metadata store. After compiling Livy 0.3.0 against Hadoop 2.7.3 and Spark 2.0.0 and installing it beside 0.2.0, I had problems creating credentials for the HTTP principal. I solved that by using the Hadoop jars from Livy 0.2.0 instead of those from the build.
02-04-2017
03:02 PM
The problem isn't in Zeppelin; it is in Livy. Check livy.out; you may see a timeout connecting to Hive.
01-26-2017
01:57 PM
I don't really have an answer, but I do have some more information. I see that Zeppelin contacts Livy and authenticates successfully, and that Livy replies with:
Set-Cookie: hadoop.auth="u=zeppelin..."; HttpOnly
However, I never see that cookie sent back. As far as I can see, the Livy interpreter should send the cookie on every call to Livy. It uses org.springframework.web.client.RestTemplate to handle communication with the Livy server, and I can see that that framework can handle cookies, but the cookie is still missing.
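What the interpreter should be doing can be sketched in a few lines: capture the hadoop.auth value from the Set-Cookie response header and replay it as a Cookie header on every subsequent call. The header and cookie names below are the real ones from the post; the toy client class is only a stand-in for the RestTemplate-based interpreter code:

```python
from http.cookies import SimpleCookie

class LivyClient:
    """Toy client showing the cookie round-trip the Livy interpreter should do."""

    def __init__(self):
        self.auth_cookie = None  # remembered hadoop.auth value

    def on_response(self, headers):
        """Remember the hadoop.auth cookie if the server set one."""
        set_cookie = headers.get("Set-Cookie")
        if set_cookie:
            jar = SimpleCookie()
            jar.load(set_cookie)
            if "hadoop.auth" in jar:
                self.auth_cookie = jar["hadoop.auth"].value

    def request_headers(self):
        """Replay the cookie so the server can skip SPNEGO on follow-up calls."""
        if self.auth_cookie:
            return {"Cookie": 'hadoop.auth="%s"' % self.auth_cookie}
        return {}

client = LivyClient()
client.on_response(
    {"Set-Cookie": 'hadoop.auth="u=zeppelin&p=zeppelin@REALM"; HttpOnly'})
print(client.request_headers())
```

If the second step never happens, every call re-authenticates from scratch, which matches the symptom of the cookie never being returned.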
01-26-2017
12:29 PM
The tokens aren't passed. Zeppelin authenticates itself with Livy and, as it is a superuser (livy.superusers), Livy takes the proxyUser sent by Zeppelin and becomes that user.
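Concretely, the impersonation happens at session creation: Zeppelin POSTs to Livy's /sessions endpoint as the zeppelin principal and puts the end user in the proxyUser field, which Livy honours only because zeppelin is listed in livy.superusers. A sketch of that request body (kind and proxyUser are fields from Livy's REST API; the user name is made up):

```python
import json

def session_payload(end_user):
    """Body Zeppelin sends to POST /sessions: Livy runs the session as end_user."""
    return json.dumps({"kind": "spark", "proxyUser": end_user})

body = session_payload("alice")
print(body)
```

If zeppelin is missing from livy.superusers, Livy rejects the proxyUser and the session runs as zeppelin instead.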
01-26-2017
07:56 AM
I believe the problem is that localhost is not a valid host for Kerberos. I changed my Livy interpreter settings to point at the actual host.