Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2093 | 06-15-2020 05:23 AM |
| | 17434 | 01-30-2020 08:04 PM |
| | 2254 | 07-07-2019 09:06 PM |
| | 8716 | 01-27-2018 10:17 PM |
| | 4913 | 12-31-2017 10:12 PM |
09-04-2018
06:16 AM
@Jay do you need any other info?
09-04-2018
05:43 AM
@Jay some more info from my datanodes about disk use:

| | disk1 | disk2 | disk3 | disk4 |
|---|---|---|---|---|
| datanode1 | 47% | 64% | 48% | 49% |
| datanode2 | 44% | 53% | 61% | 44% |
| datanode3 | 55% | 46% | 45% | 91% |
| datanode4 | 63% | 45% | 49% | 46% |
09-04-2018
05:29 AM
@Jay we got the following results from:
su - hdfs -c "hdfs dfsadmin -report | grep 'DFS Used%'"
DFS Used%: 87.38%
DFS Used%: 88.48%
DFS Used%: 87.00%
DFS Used%: 84.70%
DFS Used%: 87.93%
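The grep output above can be post-processed to get an overall figure. A minimal Python sketch (the sample text is just the five lines from the output above, not the full report) that extracts the per-node percentages and averages them:

```python
import re

# Abridged sample of `hdfs dfsadmin -report | grep 'DFS Used%'` output;
# the full report also includes "Configured Capacity" and "Non DFS Used".
report = """\
DFS Used%: 87.38%
DFS Used%: 88.48%
DFS Used%: 87.00%
DFS Used%: 84.70%
DFS Used%: 87.93%
"""

# Pull out the numeric percentages and average them across datanodes.
used = [float(m) for m in re.findall(r"DFS Used%: ([\d.]+)%", report)]
avg = sum(used) / len(used)
print(f"per-node DFS Used%: {used}")
print(f"average: {avg:.2f}%")
```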
09-04-2018
05:25 AM
@Jay I want to note that we have 4 datanode machines (4 worker machines), and each worker has 4 disks of 20G each
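From those numbers the raw and usable capacity can be worked out; a quick sketch, assuming the HDFS default replication factor of 3 (the actual dfs.replication value is not stated in the thread):

```python
# Raw HDFS capacity from the numbers above: 4 datanodes x 4 disks x 20 GB.
nodes, disks_per_node, disk_gb = 4, 4, 20
raw_gb = nodes * disks_per_node * disk_gb   # 320 GB raw
replication = 3                             # assumed default dfs.replication
effective_gb = raw_gb / replication         # roughly usable data volume
print(raw_gb, round(effective_gb, 1))
```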
09-04-2018
04:44 AM
@Jay this is what we got (so this is about 88% used). Regarding my question: how can it be 88% when the disks are only at ~50%?
09-03-2018
05:05 PM
hi all, we have an Ambari cluster, version 2.6.1, with HDP version 2.6.4. From the dashboard we can see that HDFS Disk Usage is almost 90%, but all the datanode disks are only around 50% used. So why does HDFS show 90% while the datanode disks are only at 50%?

/dev/sdc 20G 11G 8.7G 56% /data/sdc
/dev/sde 20G 11G 8.7G 56% /data/sde
/dev/sdd 20G 11G 9.0G 55% /data/sdd
/dev/sdb 20G 8.9G 11G 46% /data/sdb

Is it a problem of fine-tuning, or something else? We also performed a re-balance from the Ambari GUI, but this didn't help.
09-03-2018
04:06 PM
@Jonathan Sneep thank you so much
... View more
09-03-2018
02:28 PM
Dear Hortonworks colleagues, three months ago I used the script described in the article https://community.hortonworks.com/articles/16846/how-to-identify-what-is-consuming-space-in-hdfs.html, but today I see that the script in the article has changed and can't be run. I guess some edit to the post also changed the script. Can someone help recover the script as it was three months ago?
09-03-2018
10:34 AM
@Jay, as you know we run the API that restarts all required services, and we want to capture only the status of the last restart action (so the request ID should be only the last one). For example, if the last request id is now 187, then on the next restart it will be 188, and we want to capture only the status of request id 188 (fail/ok). But as I mentioned, my syntax gives this:

curl -sH "X-Requested-By: ambari" -u admin:admin -i http://$SERVER:8080/api/v1/clusters/$cluster_name/requests?fields=Requests/request_status | awk '/request_status/' | tail -1 | egrep -iq "FAILED|ABORTED"
[[ $? -eq 0 ]] && echo fail || echo ok
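The `tail -1` in the pipeline above depends on the order the API happens to return items in, which is why it can pick up an older request. A more robust sketch is to parse the JSON and select the entry with the highest request id; the payload below is a trimmed stand-in mimicking the response shape shown later in this thread, not a live API call:

```python
import json

# Trimmed sample mimicking the Ambari /requests response shape.
payload = json.loads("""
{"items": [
  {"Requests": {"id": 186, "request_status": "ABORTED"}},
  {"Requests": {"id": 188, "request_status": "COMPLETED"}},
  {"Requests": {"id": 187, "request_status": "COMPLETED"}}
]}
""")

# Pick the request with the highest id rather than trusting item order.
latest = max(payload["items"], key=lambda it: it["Requests"]["id"])
status = latest["Requests"]["request_status"]
print("fail" if status in ("FAILED", "ABORTED") else "ok")
```

Alternatively, the Ambari REST API may let you push this to the server side with query parameters such as `sortBy=Requests/id.desc&page_size=1`, so only the newest request is returned at all.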
09-03-2018
09:42 AM
@Jay see also what the real latest restart statuses are:

{
"href" : "http://master:8080/api/v1/clusters/HDP/requests/186",
"Requests" : {
"cluster_name" : "HDP",
"id" : 186,
"request_context" : "Restart all components for ZooKeeper",
"request_status" : "ABORTED"
}
},
{
"href" : "http://master:8080/api/v1/clusters/HDP/requests/187",
"Requests" : {
"cluster_name" : "HDP",
"id" : 187,
"request_context" : "Restart all components for Kafka",
"request_status" : "COMPLETED"
}
},
{
"href" : "http://master:8080/api/v1/clusters/HDP/requests/188",
"Requests" : {
"cluster_name" : "HDP",
"id" : 188,
"request_context" : "Restart all components for Kafka",
"request_status" : "COMPLETED"
}
}
]