Member since: 05-10-2016
Posts: 26
Kudos Received: 10
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3931 | 01-09-2019 06:17 PM |
|  | 2310 | 12-14-2018 07:49 PM |
|  | 1334 | 02-24-2017 02:57 PM |
|  | 6027 | 09-13-2016 04:52 PM |
01-09-2019 06:17 PM
2 Kudos
Are you accessing Knox via a load balancer? I've seen something similar when there was a load balancer in front of Knox that didn't support websockets. The documentation you are following might also be incorrect: based on https://knox.apache.org/books/knox-1-2-0/user-guide.html#Zeppelin+UI, I think the role name should be ZEPPELINWS.
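If you want to confirm the load balancer theory, a quick handshake check helps. A minimal sketch, assuming a gateway reachable at knox-lb.example.com:8443 and the default Zeppelin websocket path (both placeholders):

    # Send a websocket upgrade request through the load balancer.
    # "HTTP/1.1 101 Switching Protocols" means websockets pass through
    # (curl then hangs on the open connection; Ctrl-C to exit).
    # Any other response suggests the LB strips the Upgrade headers.
    curl -ik -N \
      -H "Connection: Upgrade" \
      -H "Upgrade: websocket" \
      -H "Sec-WebSocket-Version: 13" \
      -H "Sec-WebSocket-Key: c2FtcGxlLW5vbmNlLTE2Yg==" \
      "https://knox-lb.example.com:8443/gateway/default/zeppelin/ws"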
12-21-2018 07:10 PM
Potentially, but no guarantees. You would lose those changes on upgrades, and it would be up to you to keep them in sync. I know it is something that will not be supported, but you seem to acknowledge that in the question with "Though KNOX with YARNUI is not officially supported." A rough sketch of what that manual patching involves is below.
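For illustration only, and hedged: the path below assumes the usual HDP Knox layout, and the yarnui directory name and version are hypothetical names for the custom definition you would maintain yourself:

    # Unsupported sketch: install a hand-rolled YARNUI service definition
    # into Knox's data directory (path assumes an HDP install layout).
    sudo mkdir -p /usr/hdp/current/knox-server/data/services/yarnui/2.7.0
    sudo cp service.xml rewrite.xml \
      /usr/hdp/current/knox-server/data/services/yarnui/2.7.0/
    # Restart Knox (e.g. via Ambari) to pick up the definition. Upgrades
    # will not preserve these files, so re-apply them after each one.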
12-14-2018 07:49 PM
This is most likely fixed in Apache Knox 1.2.0 (KNOX-1207) and should be fixed in HDP 3.1, which was just released.
11-13-2018 10:47 PM
KNOX-1098 covers support for adding proxyUser when it is not there. It hasn't been merged yet.
03-18-2018 12:29 AM
1 Kudo
Ambari Infra Solr needs far more memory than it should by default, due to poor configuration choices. I wrote about the simple changes that make Solr significantly more performant on less heap; we run Solr with less than 4GB of heap holding billions of documents. https://risdenk.github.io/2017/12/18/ambari-infra-solr-ranger.html
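As a rough illustration (not taken from the blog post itself): once those changes are in place, the Infra Solr heap can be dialed down through Ambari. The infra-solr-env keys below are what I recall recent Ambari releases using, so treat them as assumptions; cluster name and credentials are placeholders:

    # Hedged example: set the Ambari Infra Solr heap to 2GB (values in MB).
    # infra_solr_minmem / infra_solr_maxmem are assumed infra-solr-env keys.
    /var/lib/ambari-server/resources/scripts/configs.py -l $(hostname -f) -t 8080 \
      -n CLUSTER_NAME -u USERNAME -p PASSWORD -a set \
      -c infra-solr-env -k infra_solr_minmem -v 2048
    /var/lib/ambari-server/resources/scripts/configs.py -l $(hostname -f) -t 8080 \
      -n CLUSTER_NAME -u USERNAME -p PASSWORD -a set \
      -c infra-solr-env -k infra_solr_maxmem -v 2048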
02-08-2018 02:35 AM
@Devendra Singh Here are all the details in a blog: https://risdenk.github.io/2018/02/07/hdf-ambari-mpack-upgrades.html
02-08-2018 02:34 AM
@Raffaele S Here are all the details in a blog: https://risdenk.github.io/2018/02/07/hdf-ambari-mpack-upgrades.html
01-19-2018 02:01 AM
@Raffaele S - We found a few more things that were broken, which we were able to fix, and we got everything working. I am writing up some more steps for this. I agree with you that the documentation isn't clear that the mpack upgrade must go before the Ambari upgrade. I'll hopefully update this question with more info soon.
01-13-2018 08:37 PM
See my answer about fixing stack_root. Regarding "I tried upgrading the management pack after, but didn't help": it looks like you HAVE to do the mpack first and then the Ambari upgrade when going from Ambari 2.5.x to Ambari 2.6.x. That order updates stack_root automatically. A sketch of the order is below.
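In other words, the order that worked for us looks roughly like this. A sketch, assuming CentOS/yum, that the Ambari 2.6 repo file is already in place, and the HDF 3.0.2 mpack URL from my other answer:

    sudo ambari-server stop

    # 1. Upgrade the HDF management pack FIRST (--purge replaces the old one).
    sudo ambari-server install-mpack \
      --mpack=https://s3.amazonaws.com/public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.0.2.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.0.2.0-76.tar.gz \
      --purge --verbose --force

    # 2. THEN upgrade Ambari itself; this is when stack_root gets rewritten.
    sudo yum upgrade -y ambari-server
    sudo ambari-server upgrade
    sudo ambari-server start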
01-13-2018 06:48 PM
The issue seems to be that Ambari doesn't like the JSON coming back from Stack.get_stack_root(). The format looks like it changed between Ambari 2.5.1 and Ambari 2.5.2 with AMBARI-21430; the relevant commit is f33a250c0.

This is what the new Ambari 2.6 cluster has for stack_root:

    "stack_root" : "{\"HDF\":\"/usr/hdf\"}",

This is what our upgraded Ambari 2.6 cluster has for stack_root:

    "stack_root" : "/usr/hdf",

This can easily be updated with configs.py, since it is just cluster-env -> stack_root. An example of doing this:

    /var/lib/ambari-server/resources/scripts/configs.py -l $(hostname -f) -t 8081 -s https -n CLUSTER_NAME -u USERNAME -p PASSWORD -a set -c cluster-env -k stack_root -v '{"HDF":"/usr/hdf"}'

PS: It also looks like this requires the 3.0.2 mpack to be installed (even if you have HDF 3.0.1 installed), since that is the version compatible with Ambari 2.6.x (see matrix here). Without the 3.0.2 mpack, we were getting different errors (like the ones in the original question). To install the 3.0.2 mpack (make sure the purge flag is set):

    sudo ambari-server install-mpack --mpack=https://s3.amazonaws.com/public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.0.2.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.0.2.0-76.tar.gz --purge --verbose --force
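If it helps, you can read the value back to confirm the change took effect. A minimal check using the same script (credentials are placeholders, and I'm assuming "get" prints the config type to stdout as it does in the Ambari versions I've used):

    # Fetch cluster-env and look at stack_root; it should now be the
    # escaped-JSON form: "stack_root" : "{\"HDF\":\"/usr/hdf\"}"
    /var/lib/ambari-server/resources/scripts/configs.py -l $(hostname -f) -t 8081 -s https \
      -n CLUSTER_NAME -u USERNAME -p PASSWORD -a get -c cluster-env | grep stack_root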