Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 930 | 06-04-2025 11:36 PM |
| | 1538 | 03-23-2025 05:23 AM |
| | 762 | 03-17-2025 10:18 AM |
| | 2754 | 03-05-2025 01:34 PM |
| | 1818 | 03-03-2025 01:09 PM |
01-29-2019
02:49 PM
Go to the ResourceManager UI in Ambari and click the "Nodes" link on the left side of the window. It shows all NodeManagers and the reason each one is listed as unhealthy. The most commonly found reason is that a disk space threshold has been reached. In that case, consider the following parameters (a yarn-site.xml sketch follows the table):

| Parameter | Default value | Description |
|---|---|---|
| yarn.nodemanager.disk-health-checker.min-healthy-disks | 0.25 | The minimum fraction of disks that must be healthy for the NodeManager to launch new containers. This applies to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs; if fewer healthy local-dirs (or log-dirs) are available, new containers will not be launched on this node. |
| yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage | 90.0 | The maximum percentage of disk space utilization allowed before a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the NodeManager checks for a full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. |
| yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb | 0 | The minimum space (in MB) that must be available on a disk for it to be used. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. |

Finally, if the above steps do not reveal the actual problem, check the logs under /var/log/hadoop-yarn/yarn.
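If the disk thresholds are the culprit, the parameters above can be tuned in yarn-site.xml (or through Ambari's YARN configs). A minimal sketch using the default values from the table; loosening the utilization limit is only a stopgap while you free up space:

```xml
<!-- yarn-site.xml: NodeManager disk health-checker thresholds.
     Values shown are the defaults from the table above; raising the
     utilization limit (e.g. to 95.0) can temporarily mark a nearly-full
     node healthy again while space is being cleared. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
  <value>0.25</value>
</property>
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>90.0</value>
</property>
<property>
  <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
  <value>0</value>
</property>
```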
01-02-2019
07:19 AM
Hi Geoffrey, I posted a new thread - https://community.hortonworks.com/questions/231177/metrics-failed-on-orgapachehadoophbasezookeeperzoo.html - but I see the post does not appear in the Hortonworks questions. Could you help me understand why?
04-17-2019
06:31 AM
@Alexander Lebedev, are you still facing login issues with the Sandbox? This looks like a redirection issue with your localhost and would most probably be linked to your /etc/hosts configuration (a typical entry is sketched below). Let me know if you are still stuck with this; I would be happy to help.
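For reference, a typical /etc/hosts entry for the HDP Sandbox looks like the sketch below. The hostname sandbox-hdp.hortonworks.com is an assumption based on recent sandbox versions; substitute whatever hostname your sandbox version expects:

```
# /etc/hosts on the host machine (sketch; hostname depends on the sandbox version)
127.0.0.1   localhost   sandbox-hdp.hortonworks.com
```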
12-18-2018
05:42 AM
Thanks @Geoffrey Shelton Okot for researching this. I resolved the issue by following the instructions given in this link: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/configuring-atlas-sqoop-hook.html
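For anyone landing here later, the key step in that doc is registering the Atlas hook in sqoop-site.xml. Roughly the following; check the linked doc for your HDP version, since property and class names may differ:

```xml
<!-- sqoop-site.xml: publish Sqoop job metadata to Atlas via the Sqoop hook
     (sketch based on the HDP 2.5 doc linked above) -->
<property>
  <name>sqoop.job.data.publish.class</name>
  <value>org.apache.atlas.sqoop.hook.SqoopHook</value>
</property>
```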
12-17-2018
05:53 AM
thanks bro
01-01-2019
05:13 PM
@max mouse There isn't a one-and-only tool that can do everything equally well and address all of your requirements. Combining tools that each do different things well builds up functionality and gives you more flexibility to handle a larger set of scenarios. Depending on your needs, both NiFi and Flume can act as Kafka producers and/or consumers (a Flume example is sketched below). HTH
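To illustrate the Flume side, here is a minimal sketch of a Flume agent acting as a Kafka producer via the built-in Kafka sink. The agent, source, channel, sink, topic, and broker names are placeholders:

```properties
# flume.conf (sketch): netcat source -> memory channel -> Kafka sink
agent.sources = netcatSrc
agent.channels = memChannel
agent.sinks = kafkaSink

# Simple test source: lines sent to port 44444 become Flume events
agent.sources.netcatSrc.type = netcat
agent.sources.netcatSrc.bind = 0.0.0.0
agent.sources.netcatSrc.port = 44444
agent.sources.netcatSrc.channels = memChannel

agent.channels.memChannel.type = memory

# Publish each event to the 'events' Kafka topic
agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.kafka.bootstrap.servers = broker1:9092
agent.sinks.kafkaSink.kafka.topic = events
agent.sinks.kafkaSink.channel = memChannel
```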
12-21-2018
02:30 PM
All, thanks for your responses. I found the root cause of the issue: Ambari was using its master key as the KDC admin credential, which is why it kept reporting "Missing KDC administrator credentials. Please enter admin principal and password". I removed that credential file (PFA) and the issue was solved (a shell sketch of the fix follows). For others: you may need to keep the Ambari master key and the KDC admin credentials the same, because that file is required at the time of ambari-server restart (if you have configured jceks). PFA, kerberos-admin-creds-issue-solved.png
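For reference, a sketch of the removal step, assuming the default Ambari credential-store location (the path may differ on your install, so back the file up first):

```sh
# Back up, then remove, the cached Ambari credential store (jceks), then
# restart ambari-server so it prompts for fresh KDC admin credentials.
# The path below is the usual default; verify it on your host.
cp /var/lib/ambari-server/keys/credentials.jceks /tmp/credentials.jceks.bak
rm /var/lib/ambari-server/keys/credentials.jceks
ambari-server restart
```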
10-29-2018
02:29 PM
Hi Geoffrey - I reinstalled Ambari and HDFS and that fixed the issue - thank you!
10-19-2018
05:54 PM
Your "Database type" property is set to "Generic", try setting it to Oracle (for Oracle < 12) or Oracle 12+.