Member since: 01-27-2016
Posts: 46
Kudos Received: 40
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1921 | 10-24-2016 05:39 PM |
| | 1535 | 03-30-2016 06:02 PM |
| | 2512 | 02-28-2016 04:37 PM |
| | 4838 | 02-07-2016 07:57 AM |
05-14-2018
04:53 AM
@Sagar Shimpi I have the same issue: it fails for HCAT even though I have followed the steps mentioned in your comment. How do I fix this?
03-30-2016
06:02 PM
1 Kudo
As I am using VMware, I found a straightforward approach (link below) that avoids the VirtualBox conversion mentioned in the link above: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006371
12-04-2017
01:49 PM
Resolved my issue after modifying the JSON file and restarting Zeppelin. Thanks for the help!
02-28-2016
05:15 PM
@Rainer Geissendoerfer Please accept the answer to close the thread.
02-28-2016
04:37 PM
1 Kudo
This can be closed. I had two HiveServer2 instances running on different nodes, left over from a previously failed Hive install, and this totally confused Ranger and Falcon. After removing Hive and reinstalling it, everything is fine now. Thanks for your support!
02-07-2016
11:46 AM
@Rainer Geissendoerfer That's the easy fix, and it's because somewhere in the code it may be looking for localhost:2181.
02-05-2016
10:46 PM
1 Kudo
I think there is a misunderstanding about what YARN does. It doesn't care at all how much memory is actually available on the Linux machines, or about buffers or caches. It only cares about the settings in the YARN configuration, and it is your responsibility to set them correctly so they fit the system. You can check them in Ambari. On the YARN page of Ambari you can find:

- The total amount of RAM available to YARN on any one datanode. This is estimated by Ambari during installation, but in the end it is your responsibility.
- The minimum size of a container (this is also the common divisor of all container sizes).
- The maximum size of a container (normally the YARN maximum is a good idea).

So let's assume you have a 3-node cluster with 32GB of RAM on each node, and YARN memory has been set to 24GB (leaving 8GB for the OS plus HDFS). Let's also assume your minimum container size is 1GB. This gives you 24GB * 3 = 72GB in total for YARN, and at most 72 containers. A couple of important things:

- If you set your map settings to 1.5GB, you get at most 36 containers, because YARN only hands out slots in multiples of the minimum (i.e. 2GB, 3GB, 4GB, ...), so each 1.5GB request is rounded up to 2GB. This is a common problem, so always set your container sizes as a multiple of the minimum.
- If you have only 16GB on the nodes and you set the YARN memory to 32GB, YARN will happily drive your system out of memory. It is your responsibility to configure it correctly so it uses the available RAM, but not more.

What YARN does is kill any task that uses more than its requested amount of RAM, and schedule tasks so they run locally to the data, and so on.
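For reference, here is a minimal sketch of what those three settings look like in yarn-site.xml (Ambari writes this file for you; the property names are the standard YARN ones, but the values below are just the assumed 24GB / 1GB figures from the example above, not recommendations):

```xml
<!-- yarn-site.xml: sketch using the example values above (assumed, not a recommendation) -->
<property>
  <!-- Total RAM YARN may hand out on this node: 24GB of the 32GB machine -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
<property>
  <!-- Minimum container size, and the unit all allocations are rounded up to: 1GB -->
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <!-- Maximum container size: here, the full YARN allocation of one node -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>24576</value>
</property>
```

With these values, a 1.5GB map request is rounded up to a 2GB container, which is exactly how the 72-container budget in the example shrinks to 36.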
05-11-2017
03:49 PM
Thanks, I had the same issue after the HDP 2.6 upgrade. The install silently changed the settings.

1. Connect to Ambari.
2. Go to HDFS service > advanced config > Custom core-site and change this:
   - hadoop.proxyuser.hive.groups = *
   - hadoop.proxyuser.hive.hosts = *
   - hadoop.proxyuser.hcat.groups = *
   - hadoop.proxyuser.hcat.hosts = *

This solved my issue as well.
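For reference, this is roughly what those four entries look like inside core-site.xml itself. On an Ambari-managed cluster you would make the change through Ambari as described above, since Ambari owns this file; the property names are the standard Hadoop proxyuser ones from the post.

```xml
<!-- core-site.xml: the proxyuser entries from the steps above -->
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
```

You'll typically need to restart the affected services (HDFS, and anything proxying through Hive/HCat) for core-site changes to take effect.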
02-15-2016
01:46 AM
@Rainer Geissendoerfer
01-29-2016
01:54 AM
@Jonas Straub Happy Hadooping!! 🙂