PHD cluster NodeManager memory issues
Labels: Apache YARN
Created 04-05-2016 01:55 AM
I want to run the HDP memory script on a PHD cluster. Aside from the directory naming convention, what else could be different? Pivotal has a single-node sandbox where I could easily test this, but it's an urgent issue and I don't have access to it.
Created 04-05-2016 04:26 AM
Hi Artem, PHD-3.0 is equivalent to HDP-2.2.4, so the script should work as long as it refers to the correct paths. What do you mean by the "hdp memory script" for YARN?
Edit: phd-configuration-utils.py exists on PHD-3.0; here is a link, scroll down to "PHD Utility Script". Example:
python phd-configuration-utils.py -c 16 -m 64 -d 4 -k True
where "-c" is the number of cores, "-m" is the amount of RAM (in GB) on the worker nodes, "-d" is the number of disks, and the final Boolean ("-k") indicates whether HBase is installed.
