Member since
08-08-2017
1652
Posts
30
Kudos Received
11
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1964 | 06-15-2020 05:23 AM |
|  | 16010 | 01-30-2020 08:04 PM |
|  | 2105 | 07-07-2019 09:06 PM |
|  | 8233 | 01-27-2018 10:17 PM |
|  | 4663 | 12-31-2017 10:12 PM |
02-01-2018
08:23 AM
We have an Ambari cluster and we haven't started using HDFS, yet a service-level alert is triggered for an increase in storage capacity usage. Why does capacity grow even though we are not using HDFS, and what is the solution for this?
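Even with no user data loaded, Ambari-managed services typically write to HDFS themselves (service logs, YARN application history, temp directories), which can trip the capacity-growth alert. A quick way to see where the space is going is a per-directory usage listing; a minimal sketch, assuming default paths and that you can run commands as the `hdfs` user:

```shell
# Show usage of each top-level HDFS directory (human readable) to see
# which service directories are growing.
hdfs dfs -du -s -h /*

# Cross-check against the cluster-level DFS Used figure.
hdfs dfsadmin -report | head -n 10
```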
Labels:
- Apache Ambari
- Apache Hadoop
01-31-2018
05:39 PM
Do you mean that the parameters are already calculated during the Ambari installation, or can we use this tool to set new, updated parameters?
01-31-2018
03:30 PM
Hi all, The python yarn-utils.py script is a nice tool for determining HDP memory configuration settings. Background: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/determine-hdp-memory-config.html I want to know about other tools that can help us fine-tune the Ambari cluster. There are many other parameters that yarn-utils.py does not cover, and they need to be set according to the hardware of the system, for example yarn.nodemanager.resource.cpu-vcores and many, many others. Please help us find out whether there are other tools that can suggest the right values for these parameters, so that we can set them in the Ambari cluster.
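For reference, the script linked above is driven by the worker hardware profile. A sketch of an invocation, with assumed example values (an 8-core, 32 GB, 2-disk worker without HBase):

```shell
# -c cores, -m memory in GB, -d number of data disks, -k HBase installed.
# The values here are assumptions for illustration; use your worker specs.
python yarn-utils.py -c 8 -m 32 -d 2 -k False
```

It prints suggested values for the YARN and MapReduce memory settings (yarn.nodemanager.resource.memory-mb, yarn.scheduler.minimum/maximum-allocation-mb, mapreduce.map/reduce.memory.mb, and related heap sizes), which you can then enter in Ambari.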
Labels:
- Apache Ambari
- Apache Hadoop
01-31-2018
02:49 PM
As I read in the docs, yarn.nodemanager.resource.cpu-vcores is by default ~80% of the total vCPUs available on the machine. Just to clarify: if we have an Ambari cluster with 3 master machines and 3 worker machines, and each worker has 8 cores, is the value calculated cluster-wide as 3 x 8 = 24? Or should yarn.nodemanager.resource.cpu-vcores be calculated per worker machine, as 1 x 8 = 8? Or something else?
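As a hedged sketch of the arithmetic: yarn.nodemanager.resource.cpu-vcores is a per-NodeManager setting, so it is configured per worker, not summed across the cluster. Applying the ~80% rule of thumb quoted above to one assumed 8-core worker:

```shell
# yarn.nodemanager.resource.cpu-vcores is set per NodeManager (per worker).
# Rough 80%-of-cores rule for a single worker; integer arithmetic in shell.
CORES_PER_WORKER=8
VCORES=$(( CORES_PER_WORKER * 80 / 100 ))
echo "$VCORES"   # 6 vcores for this 8-core worker
```

The exact reserve you leave for the OS and Hadoop daemons is a judgment call; 80% is only the rule of thumb from the docs.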
Labels:
- Apache Ambari
- Apache Hadoop
01-31-2018
01:39 PM
We have an Ambari cluster with 3 master machines and 4 DataNode machines. We ran hdfs dfsadmin -report and found Missing blocks: 4. How can we find out the reason for these missing blocks? Second, what is the workaround we need to apply? Output of hdfs dfsadmin -report:
Configured Capacity: 8226130288640 (7.48 TB)
Present Capacity: 8225526102776 (7.48 TB)
DFS Remaining: 8205621209848 (7.46 TB)
DFS Used: 19904892928 (18.54 GB)
DFS Used%: 0.24%
Under replicated blocks: 4
Blocks with corrupt replicas: 0
Missing blocks: 4
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (4):
Name: 10.164.252.32:50010 (worker03.sys76.com)
Hostname: worker03.sys76.com
Decommission Status : Normal
Configured Capacity: 1170504683520 (1.06 TB)
DFS Used: 5715611648 (5.32 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 1164727208338 (1.06 TB)
DFS Used%: 0.49%
DFS Remaining%: 99.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Wed Jan 31 13:34:02 UTC 2018
Name: 10.164.252.33:50010 (worker04.sys76.com)
Hostname: worker04.sys76.com
Decommission Status : Normal
Configured Capacity: 2351875026944 (2.14 TB)
DFS Used: 4573270016 (4.26 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 2347124950656 (2.13 TB)
DFS Used%: 0.19%
DFS Remaining%: 99.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Wed Jan 31 13:34:02 UTC 2018
Name: 10.164.252.31:50010 (worker02.sys76.com)
Hostname: worker02.sys76.com
Decommission Status : Normal
Configured Capacity: 2351875026944 (2.14 TB)
DFS Used: 5077798912 (4.73 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 2346627408110 (2.13 TB)
DFS Used%: 0.22%
DFS Remaining%: 99.78%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Wed Jan 31 13:34:02 UTC 2018
Name: 10.164.252.30:50010 (worker01.sys76.com)
Hostname: worker01.sys76.com
Decommission Status : Normal
Configured Capacity: 2351875551232 (2.14 TB)
DFS Used: 4538212352 (4.23 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 2347141642744 (2.13 TB)
DFS Used%: 0.19%
DFS Remaining%: 99.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Wed Jan 31 13:34:02 UTC 2018
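To find out which files the missing blocks belong to, `hdfs fsck` is the usual first step; a sketch, assuming it is run as the `hdfs` user against the root path:

```shell
# List the files that have missing or corrupt blocks (read-only check).
hdfs fsck / -list-corruptfileblocks

# More detail: show files, their blocks, and replica locations, and pull
# out the entries fsck flags as MISSING.
hdfs fsck / -files -blocks -locations | grep -i -B 2 "MISSING"
```

Once you know the affected files you can decide whether to restore them from the source or, if they are expendable, remove them so the alert clears.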
Labels:
- Apache Ambari
- Apache Hadoop
01-31-2018
12:56 PM
@Jay I don't see anything negative in the output, or maybe you want to add your opinion:
/usr/jdk64/jdk1.8.0_112/bin/jmap -heap 26765
Attaching to process ID 26765, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.112-b15
using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 1073741824 (1024.0MB)
NewSize = 209715200 (200.0MB)
MaxNewSize = 209715200 (200.0MB)
OldSize = 864026624 (824.0MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 188743680 (180.0MB)
used = 13146000 (12.537002563476562MB)
free = 175597680 (167.46299743652344MB)
6.9650014241536455% used
Eden Space:
capacity = 167772160 (160.0MB)
used = 7374968 (7.033317565917969MB)
free = 160397192 (152.96668243408203MB)
4.3958234786987305% used
From Space:
capacity = 20971520 (20.0MB)
used = 5771032 (5.503684997558594MB)
free = 15200488 (14.496315002441406MB)
27.51842498779297% used
To Space:
capacity = 20971520 (20.0MB)
used = 0 (0.0MB)
free = 20971520 (20.0MB)
0.0% used
concurrent mark-sweep generation:
capacity = 864026624 (824.0MB)
used = 25506528 (24.324920654296875MB)
free = 838520096 (799.6750793457031MB)
2.952053477463213% used
01-31-2018
12:22 PM
Regarding the DATANODE_PID, how do I find it? (I guess from the worker machine?)
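Yes, on the worker (DataNode) machine. A sketch of two common ways to get it; the pid-file path below is the typical HDP default and is an assumption that may differ on your cluster:

```shell
# Typical HDP pid-file location (assumed path; adjust for your install):
cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid

# Fallback that works regardless of the pid-file location: match the
# DataNode main class on the process command line.
pgrep -f 'org.apache.hadoop.hdfs.server.datanode.DataNode'
```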
01-31-2018
12:17 PM
From hdfs dfsadmin -report, what can we do about these missing blocks (is there anything to do about that)? We got:
hdfs dfsadmin -report
Configured Capacity: 8226130288640 (7.48 TB)
Present Capacity: 8225508617182 (7.48 TB)
DFS Remaining: 8205858544606 (7.46 TB)
DFS Used: 19650072576 (18.30 GB)
DFS Used%: 0.24%
Under replicated blocks: 4
Blocks with corrupt replicas: 0
Missing blocks: 4
Missing blocks (with replication factor 1): 0
01-31-2018
12:05 PM
@Jay please advise: what is the best way to check that the DataNodes are healthy?
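A sketch of quick checks from the NameNode's point of view; the `-live`/`-dead` filters are available in recent Hadoop 2.x releases (an assumption for older versions), and 50075 is the pre-Hadoop-3 default DataNode HTTP port:

```shell
# Count DataNodes the NameNode currently sees as live, and list dead ones.
hdfs dfsadmin -report -live | grep -c '^Name:'
hdfs dfsadmin -report -dead

# Each DataNode also exposes metrics on its own web UI / JMX endpoint:
#   http://<datanode-host>:50075/jmx
```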
01-31-2018
11:28 AM
With the following API we can check component status on the relevant master machine. My goal is to verify that all components are stopped before rebooting the machine, but if the Ambari server is down we can't use the API. My question is: what is the alternative way to check that all components are really stopped on a master machine (in case the Ambari server is down)? curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://master02.sys65.com:8080/api/v1/clusters/HDP/hosts/master02.sys65.com/host_components?fields=HostRoles/state
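One alternative that needs no Ambari server is to check locally on the master whether the daemon processes are still running. A minimal sketch; the component names below are the usual Hadoop master daemons and are assumptions, so adjust the list to the services actually installed on that host:

```shell
# For each expected master daemon, report whether a matching process
# is still running on this host (works with the Ambari server down).
for comp in NameNode SecondaryNameNode ResourceManager HistoryServer HMaster; do
  if pgrep -f "$comp" > /dev/null; then
    echo "$comp: still running"
  else
    echo "$comp: stopped"
  fi
done
```

If every line reports "stopped", the host is safe to reboot as far as those daemons are concerned.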
Labels:
- Apache Ambari
- Apache Hadoop