Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1917 | 06-15-2020 05:23 AM |
| | 15456 | 01-30-2020 08:04 PM |
| | 2070 | 07-07-2019 09:06 PM |
| | 8100 | 01-27-2018 10:17 PM |
| | 4569 | 12-31-2017 10:12 PM |
10-26-2017 01:33 PM
Hi, when we start the NameNode we get "Port in use". How can we find out which port is in use and what is using it?
ERROR namenode.NameNode (NameNode.java:main(1774)) - Failed to start namenode.
java.net.BindException: Port in use: master01.pp.com:50070
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:983)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1006)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1063)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:920)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:933)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:971)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1002)
... 9 more
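To see which process is already bound to the NameNode HTTP port, you can query the socket table on the affected host (a minimal sketch, assuming a Linux host and root access on master01.pp.com; 50070 is the port from the exception above):
# Show the PID/program name listening on port 50070
netstat -tulpn | grep 50070
# Alternative if lsof is installed
lsof -i :50070
# Inspect the offending process, using the PID from the output above
ps -fp <PID>
This is frequently a stale NameNode (or another HDFS daemon) that did not shut down cleanly; stopping it through Ambari or killing the leftover process normally frees the port.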
Labels:
- Apache Ambari
- Apache Hadoop
10-25-2017 07:19 PM
Thanks for these details. I opened the page and I see many Spark applications running, so do you think restarting Spark will solve this issue?
10-25-2017 11:02 AM
How can we find out what is consuming YARN memory? I ask because we have two Ambari clusters: on one cluster YARN memory usage is almost 100%, while on the second it is 50%. How can we find out what is consuming the YARN memory on the first cluster?
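One way to break the usage down per application is to ask the ResourceManager, either on the command line or through its REST API (a sketch; it assumes the ResourceManager web UI on its default port 8088, and <rm-host> is a placeholder for your RM host):
# Live per-application resource view (ships with Hadoop 2.7 and later)
yarn top
# Or list running applications with their allocated memory over the RM REST API
curl -s 'http://<rm-host>:8088/ws/v1/cluster/apps?states=RUNNING' | python -m json.tool | grep -E '"name"|allocatedMB'
The Spark services listed further down in this thread (the Thrift JDBC/ODBC servers and the mc* jobs) typically keep their containers allocated even while idle, which can hold a queue near 100%.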
Labels:
- Apache Ambari
- Apache Hadoop
- Apache YARN
10-25-2017 09:58 AM
Jay, not in this issue, but I would be happy to get your answer to my question here: https://community.hortonworks.com/questions/142356/how-to-recover-the-standby-name-node-in-ambari-clu.html
10-25-2017 09:27 AM
So on each worker machine we have 32G (let's say we allocate 7G to the OS), which leaves 25G. In my system we have 5 workers, so that means 25*5 = 125G, and this is actually what is configured. Am I correct? (We have 3 master machines and 5 worker machines in the cluster.) In that case we need to increase each worker's memory to at least 50G, am I correct? And after that set the value of yarn.nodemanager.resource.memory-mb to 260G, for example.
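A quick check of that arithmetic, using only the numbers stated in this post (32G per worker, 7G reserved for the OS, 5 workers are the assumptions taken from the text above):
# Memory left for YARN on one worker
echo $((32 - 7))          # 25 GB per NodeManager
# Cluster-wide YARN memory across 5 workers
echo $(( (32 - 7) * 5 ))  # 125 GB, close to the configured ~120G
Keep in mind that yarn.nodemanager.resource.memory-mb is a per-NodeManager setting; the cluster total shown in Ambari is that value summed over all NodeManagers, so a value like 260G would apply per worker, not to the whole cluster.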
10-25-2017 09:18 AM
I updated the question with more info (like which applications are running under YARN, and the machine memory).
10-25-2017 09:08 AM
OK, thank you, waiting for your answer.
10-25-2017 09:04 AM
Another question: once I change this value (yarn.nodemanager.resource.memory-mb), will the YARN memory value, which is now at 100%, decrease immediately? Or do I need to take some other action to refresh it?
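yarn.nodemanager.resource.memory-mb only takes effect after the NodeManagers are restarted (Ambari marks the affected components as needing a restart after you save the change). Once they are back up, one way to confirm that each node reports the new capacity (a sketch; node IDs will differ on your cluster):
# List the NodeManagers and their node IDs
yarn node -list
# Show memory used and memory capacity for one node, using an ID from the list above
yarn node -status <node-id>
The utilization percentage in Ambari is derived from these totals, so it should drop by itself once the NodeManagers register with the larger capacity.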
10-25-2017 08:48 AM
Do you mean that I need to change yarn.nodemanager.resource.memory-mb to another value? If yes, how do I calculate this value? (Or maybe we need to increase this value, for example to 200G.)
10-25-2017 08:05 AM
What could be the reason that YARN memory usage is very high? Any suggestions on how to verify this? We have these values from Ambari:
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.minimum-allocation-mb=11776
yarn.scheduler.maximum-allocation-mb=122880
yarn.nodemanager.resource.memory-mb=120G
/usr/bin/yarn application -list -appStates RUNNING | grep RUNNING
Thrift JDBC/ODBC Server SPARK hive default RUNNING UNDEFINED 10%
Thrift JDBC/ODBC Server SPARK hive default RUNNING UNDEFINED 10%
mcMFM SPARK hdfs default RUNNING UNDEFINED 10%
mcMassRepo SPARK hdfs default RUNNING UNDEFINED 10%
mcMassProfiling SPARK hdfs default RUNNING UNDEFINED
free -g
total used free shared buff/cache available
Mem: 31 24 0 1 6 5
Swap: 7 0 7
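To see what the YARN memory percentage is actually measuring, the ResourceManager's cluster metrics can be pulled directly (a sketch; assumes the RM REST API on its default port 8088, with <rm-host> as a placeholder):
# allocatedMB vs. totalMB is essentially what the Ambari YARN memory widget reports
curl -s 'http://<rm-host>:8088/ws/v1/cluster/metrics' | python -m json.tool
Note that the free -g output above describes operating-system memory on a single host; YARN's percentage is based on the container memory it has granted against its configured capacity, which is independent of how much RAM the processes actually use.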
Labels:
- Apache Ambari
- Apache Hadoop
- Apache YARN