Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2573 | 11-01-2016 05:43 PM |
| | 8530 | 11-01-2016 05:36 PM |
| | 4877 | 07-01-2016 03:20 PM |
| | 8197 | 05-25-2016 11:36 AM |
| | 4346 | 05-24-2016 05:27 PM |
02-06-2016 08:46 PM
More information: https://code.facebook.com/posts/938595492830104/osquery-introducing-query-packs/
02-06-2016 11:30 AM
@Gerd Koenig Nice! Please pick the best answer and accept it so that we can all go home 😛
03-12-2017 01:38 PM
Thank you! It worked for me.
02-05-2016 10:46 PM
1 Kudo
I think there is a misunderstanding about what YARN does. It doesn't care at all how much memory is actually available on the Linux machines, or about buffers or caches. It only cares about the settings in the YARN configuration. You can check them in Ambari; it is your responsibility to set them correctly so they fit the system. On the YARN page of Ambari you can find:

- The total amount of RAM available to YARN on any one datanode. This is estimated by Ambari during installation, but in the end it is your responsibility.
- The minimum size of a container (this is also the common divisor of container sizes).
- The maximum size of a container (normally the YARN maximum is a good idea).

So let's assume you have a 3-node cluster with 32 GB of RAM on each node, and the YARN memory has been set to 24 GB (leaving 8 GB for the OS plus HDFS). Let's also assume your minimum container size is 1 GB. This gives you 24 GB * 3 = 72 GB in total for YARN, and at most 72 containers.

A couple of important things:

- If you set your map settings to 1.5 GB, you have at most 36 containers, since YARN only gives out slots in multiples of the minimum (i.e. 2 GB, 3 GB, 4 GB, ...). This is a common problem, so always set your container sizes as a multiple of the minimum. (The sketch after this post works through the arithmetic.)
- If you have only 16 GB on the nodes and you set the YARN memory to 32 GB, YARN will happily drive your system out of memory. It is your responsibility to configure it so that it uses the available RAM but not more.

What YARN does is shoot down any task that uses more than its requested amount of RAM, and schedule tasks so they run local to their data, and so on.
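To make the rounding behavior concrete, here is a minimal Python sketch of the container arithmetic described above. It is not part of any YARN API; the function names are illustrative, and the round-up-to-a-multiple-of-the-minimum behavior is an assumption based on the example numbers in the answer.

```python
def rounded_container_mb(requested_mb: int, minimum_allocation_mb: int) -> int:
    """YARN hands out container sizes in multiples of the minimum allocation,
    so a request is rounded up to the next multiple (ceiling division)."""
    multiples = -(-requested_mb // minimum_allocation_mb)
    return multiples * minimum_allocation_mb

def max_containers(nodes: int, yarn_memory_per_node_mb: int,
                   requested_mb: int, minimum_allocation_mb: int) -> int:
    """Upper bound on concurrent containers for the whole cluster."""
    per_container = rounded_container_mb(requested_mb, minimum_allocation_mb)
    return (nodes * yarn_memory_per_node_mb) // per_container

# The example from the answer: 3 nodes, 24 GB for YARN on each, 1 GB minimum.
print(max_containers(3, 24576, 1024, 1024))  # 72 containers at exactly 1 GB
print(max_containers(3, 24576, 1536, 1024))  # 36 containers: 1.5 GB rounds up to 2 GB
```

Running it reproduces both cases from the answer: 72 one-gigabyte containers, but only 36 once a 1.5 GB request is rounded up to 2 GB.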
03-29-2016 07:09 PM
@Rich Raposa - Raised a new question with the subject "HDPCD Horton certification. #pig task"
02-05-2016 01:10 PM
@ARUNKUMAR RAMASAMY
More on this here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_data_governance/content/section_mirroring_data_falcon.html
02-11-2016 12:23 AM
@Geoffrey Shelton Okot Human error caused this issue. 🙂
01-05-2017 02:21 AM
Hi Prakash, I ran into the same problem. Did you solve it?
03-20-2017 06:21 PM
Hi @Robin Dong Please check whether the ambari-server host has the ambari.repo file. That was the issue for me; someone had moved it.
02-04-2016 09:17 PM
@Robin Dong Did this answer fix the issue?