Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2003 | 06-15-2020 05:23 AM |
| | 16505 | 01-30-2020 08:04 PM |
| | 2160 | 07-07-2019 09:06 PM |
| | 8374 | 01-27-2018 10:17 PM |
| | 4744 | 12-31-2017 10:12 PM |
01-21-2019 09:53 AM
@Michael Bronson There is a typo in your URL: "FSNamesytem" should be "FSNamesystem" (one character 's' is missing in the word). So please try this: # curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://name2:8080/api/v1/clusters/clu45/host_components?HostRoles/component_name=NAMENODE&metrics/dfs/FSNamesystem/HAState=standby"
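To pull just the standby NameNode's hostname out of that response, here is a minimal sketch reusing the same grep/awk pipeline shown in the DataNode example further down this page (same host, cluster, and credentials as above; adjust for your environment):

```
# Corrected URL (FSNamesystem), filtered down to the standby host's name.
curl -u admin:admin -H "X-Requested-By: ambari" -X GET \
  "http://name2:8080/api/v1/clusters/clu45/host_components?HostRoles/component_name=NAMENODE&metrics/dfs/FSNamesystem/HAState=standby" \
  | grep host_name | awk '{print $NF}' | awk -F'"' '{print $2}'
```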
01-15-2019 11:51 AM
1 Kudo
@Michael Bronson

Step 1). You can get all the hostnames where the DataNode is present using the following API call:

# curl --user admin:admin -H 'X-Requested-By: ambari' -X GET "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/services/HDFS/components/DATANODE?fields=host_components/HostRoles/host_name" | grep host_name | awk '{print $NF}' | awk -F'"' '{print $2}'

Step 2). Once you have the list of hosts where the DataNode is installed (using the above API call), you can use it in the following API call, substituting each hostname for $HOST via a shell script loop (a sketch of such a loop follows below):

# curl --user admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"RequestInfo":{"command":"RESTART","context":"Restart all components on $HOST","operation_level":{"level":"HOST","cluster_name":"NewCluster"}},"Requests/resource_filters":[{"service_name":"HDFS","component_name":"DATANODE","hosts":"$HOST"}, {"service_name":"YARN","component_name":"NODEMANAGER","hosts":"$HOST"}, {"service_name":"AMBARI_METRICS","component_name":"METRICS_MONITOR","hosts":"$HOST"}]}' http://ambariserver.example.com:8080/api/v1/clusters/NewCluster/requests

NOTE: In the above call, please make sure to replace $HOST one by one with the hostnames retrieved from the previous API call (using a shell script loop). Also replace "NewCluster" with your own cluster name, and "ambariserver.example.com" and the credentials with your Ambari server hostname and credentials.
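A minimal sketch of the loop described in the NOTE above, assuming the same Ambari host, cluster name, and admin credentials as in the two calls (all of them placeholders to adjust for your environment):

```
#!/bin/bash
# Sketch: restart DATANODE/NODEMANAGER/METRICS_MONITOR on every DataNode host.
# AMBARI, CLUSTER, and the admin credentials are placeholders from the post above.
AMBARI="http://ambariserver.example.com:8080"
CLUSTER="NewCluster"

# Step 1: collect the DataNode hostnames.
HOSTS=$(curl -s --user admin:admin -H 'X-Requested-By: ambari' -X GET \
  "$AMBARI/api/v1/clusters/$CLUSTER/services/HDFS/components/DATANODE?fields=host_components/HostRoles/host_name" \
  | grep host_name | awk '{print $NF}' | awk -F'"' '{print $2}')

# Step 2: fire one RESTART request per host, substituting the hostname
# into the JSON payload.
for HOST in $HOSTS; do
  curl --user admin:admin -H 'X-Requested-By: ambari' -X POST -d '{
    "RequestInfo": {
      "command": "RESTART",
      "context": "Restart all components on '"$HOST"'",
      "operation_level": {"level": "HOST", "cluster_name": "'"$CLUSTER"'"}
    },
    "Requests/resource_filters": [
      {"service_name": "HDFS", "component_name": "DATANODE", "hosts": "'"$HOST"'"},
      {"service_name": "YARN", "component_name": "NODEMANAGER", "hosts": "'"$HOST"'"},
      {"service_name": "AMBARI_METRICS", "component_name": "METRICS_MONITOR", "hosts": "'"$HOST"'"}
    ]
  }' "$AMBARI/api/v1/clusters/$CLUSTER/requests"
done
```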
01-21-2019 12:37 PM
@Jay, after we do the decommission on some DataNode, do we also need to stop the components on that DataNode before replacing the disk? Or is it enough to decommission without stopping the components, and then replace the disk?
01-14-2019 06:41 AM
@Geoffrey Shelton Okot, do you mean that we need to check the RAM on our DataNode machines? We have 256G of memory on each machine, of which 198G is available. Or do you want us to check something else?
01-07-2019 05:11 PM
@Jay, any comment regarding my last notes?
01-05-2019 06:58 PM
@Jay, any update?
01-29-2019 02:49 PM
Go to the ResourceManager UI from Ambari and click the "Nodes" link on the left side of the window. It should show all NodeManagers and the reason each one is listed as unhealthy. The most commonly found reasons involve a disk space threshold being reached. In that case, consider the following parameters:
| Parameter | Default value | Description |
|---|---|---|
| yarn.nodemanager.disk-health-checker.min-healthy-disks | 0.25 | The minimum fraction of disks that must be healthy for the NodeManager to launch new containers. This applies to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if fewer healthy local-dirs (or log-dirs) are available, new containers will not be launched on this node. |
| yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage | 90.0 | The maximum percentage of disk space utilization allowed before a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the NodeManager will check for a full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. |
| yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb | 0 | The minimum space (in MB) that must be available on a disk for it to be used. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. |
Finally, if the above steps do not reveal the actual problem, check the NodeManager logs, located under /var/log/hadoop-yarn/yarn.
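For a command-line check of the same information the UI shows, here is a minimal sketch against the ResourceManager REST API (rm-host and port 8088 are placeholders, and the states filter assumes a reasonably recent Hadoop version):

```
# Ask the ResourceManager which NodeManagers it considers UNHEALTHY; the
# "healthReport" field in the JSON response carries the reason for each node
# (for example the disk utilization threshold messages described above).
curl -s "http://rm-host:8088/ws/v1/cluster/nodes?states=UNHEALTHY"
```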
01-02-2019 08:50 AM
I also tried to delete this duplicate post but I can't, so I accepted your answer instead. Hope this is OK.
01-02-2019 10:48 AM
Dear Jay, Metrics is now up after we set a new value for hbase_master_heapsize = 1400.
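For reference, a hedged sketch of scripting the same change with the configs.sh helper that ships with ambari-server (yourClusterName is a placeholder, and ams-hbase-env as the config type holding hbase_master_heapsize is an assumption to verify in your Ambari version):

```
# Set the Ambari Metrics HBase master heap via Ambari's bundled helper,
# then restart Ambari Metrics so the new value takes effect.
# "yourClusterName" is a placeholder; "ams-hbase-env" is an assumption --
# confirm where hbase_master_heapsize is defined in your stack.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set localhost yourClusterName ams-hbase-env hbase_master_heapsize 1400
```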
01-02-2019 07:19 AM
Hi Geoffrey, I posted a new thread - https://community.hortonworks.com/questions/231177/metrics-failed-on-orgapachehadoophbasezookeeperzoo.html - but I see this post does not appear in the Hortonworks questions. Could you help me understand why?