Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1970 | 06-15-2020 05:23 AM |
| | 16071 | 01-30-2020 08:04 PM |
| | 2108 | 07-07-2019 09:06 PM |
| | 8250 | 01-27-2018 10:17 PM |
| | 4676 | 12-31-2017 10:12 PM |
10-25-2017
09:08 AM
OK, thank you. Waiting for your answer.
10-25-2017
09:04 AM
Another question: once I change this value - yarn.nodemanager.resource.memory-mb - will the YARN memory value (now at 100%) immediately decrease, or do we need to do some other action to refresh it?
10-25-2017
08:48 AM
Do you mean that I need to change yarn.nodemanager.resource.memory-mb to another value? If yes, how do we calculate this value? (Or maybe we need to increase it, for example to 200 G.)
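A common rule of thumb for sizing that property (an assumption here, not an official formula) is to give YARN the worker's physical RAM minus a reservation for the OS and any co-located daemons. The sizes below are example values only, sketched as:

```shell
#!/bin/sh
# Rough sizing sketch for yarn.nodemanager.resource.memory-mb.
# The 128 GB total and 16 GB reservation are example assumptions --
# substitute your worker's real RAM and co-located daemon footprint.

total_mb=${TOTAL_MB:-131072}       # physical RAM of the worker host, in MB
reserved_mb=${RESERVED_MB:-16384}  # kept back for the OS + other daemons

yarn_mb=$((total_mb - reserved_mb))
echo "yarn.nodemanager.resource.memory-mb=${yarn_mb}"
```

Note that NodeManagers read this property at startup, so after changing it in Ambari the NodeManagers generally need a restart before the new capacity is reported.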
10-25-2017
08:05 AM
What could be the reason that the YARN memory usage is very high? Any suggestion on how to verify this? We have these values from Ambari:

    yarn.scheduler.capacity.root.default.user-limit-factor=1
    yarn.scheduler.minimum-allocation-mb=11776
    yarn.scheduler.maximum-allocation-mb=122880
    yarn.nodemanager.resource.memory-mb=120G
    /usr/bin/yarn application -list -appStates RUNNING | grep RUNNING
    Thrift JDBC/ODBC Server   SPARK   hive   default   RUNNING   UNDEFINED   10%
    Thrift JDBC/ODBC Server   SPARK   hive   default   RUNNING   UNDEFINED   10%
    mcMFM                     SPARK   hdfs   default   RUNNING   UNDEFINED   10%
    mcMassRepo                SPARK   hdfs   default   RUNNING   UNDEFINED   10%
    mcMassProfiling           SPARK   hdfs   default   RUNNING   UNDEFINED

    free -g
                  total  used  free  shared  buff/cache  available
    Mem:             31    24     0       1           6          5
    Swap:             7     0     7
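One way to verify what YARN has actually allocated (as opposed to `free -g`, which shows the host's OS-level memory, not the cluster's) is the ResourceManager REST API endpoint `/ws/v1/cluster/metrics`, which reports `allocatedMB` and `totalMB`. A minimal sketch, assuming the RM runs on master02:8088 (adjust to your cluster):

```shell
#!/bin/sh
# Sketch: report YARN memory utilization from the ResourceManager REST API.
# The hostname "master02" and port 8088 are assumptions -- use your RM's.
RM_URL=${RM_URL:-http://master02:8088}

yarn_mem_pct() {
  # Reads the /ws/v1/cluster/metrics JSON on stdin and prints the
  # allocated/total memory percentage.
  awk '{
    gsub(/["{} ]/, "")
    n = split($0, f, /[:,]/)
    for (i = 1; i < n; i++) {
      if (f[i] == "allocatedMB") alloc = f[i+1]
      if (f[i] == "totalMB")     total = f[i+1]
    }
    if (total > 0) printf "%.0f%%\n", 100 * alloc / total
  }'
}

# Live check (needs a reachable ResourceManager):
# curl -s "$RM_URL/ws/v1/cluster/metrics" | yarn_mem_pct
```

If the percentage is high, the per-app `10%` progress figures above suggest several long-running Spark apps are holding containers; a large `yarn.scheduler.minimum-allocation-mb` (11776 MB here) also makes every container cost at least ~11.5 GB.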
Labels:
- Apache Ambari
- Apache Hadoop
- Apache YARN
10-25-2017
04:45 AM
How do we delete a worker or Kafka node from the cluster manually? Sometimes we can't delete the node from Ambari (not clear why), and in that case we want to do the deletion by a manual approach (for example, deleting folders / users / RPMs directly on the Linux machine).
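A manual cleanup along the lines asked about could look like the sketch below. The package globs, directories, and user names are assumptions for a typical HDP-style install - verify them on the actual machine (`rpm -qa`, `ls`) before running, and preview with the dry-run guard first:

```shell
#!/bin/sh
# Sketch: manual cleanup of a worker/Kafka node that Ambari refuses to
# delete. Package names, paths, and users below are assumptions -- check
# them against your install. With DRY_RUN=1 nothing is deleted.
set -eu
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

cleanup_node() {
  # 1. Stop the Ambari agent so the host stops heartbeating, then remove it.
  run ambari-agent stop
  run yum -y erase ambari-agent

  # 2. Remove the stack packages (names vary by HDP version -- see rpm -qa).
  run yum -y erase 'hadoop_*' 'kafka*' 'zookeeper*'

  # 3. Remove data, log, and config directories (assumed default locations).
  for d in /hadoop /kafka-logs /var/log/hadoop /var/log/kafka /etc/hadoop; do
    run rm -rf "$d"
  done

  # 4. Remove the service users created by the stack.
  for u in hdfs yarn kafka zookeeper ambari; do
    run userdel -r "$u"
  done
}

# Usage:
#   DRY_RUN=1; cleanup_node   # preview only
#   DRY_RUN=0; cleanup_node   # actually delete
```

The host usually also has to be removed from Ambari's own database, typically via the Ambari REST API (a DELETE on the cluster's hosts resource); cleaning the Linux machine alone does not make the host disappear from the Ambari UI.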
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Kafka
10-24-2017
09:20 AM
Hi Jay,
Just to clarify your instructions (given that the standby NameNode is on master01 and the active NameNode is on master02), can you approve these steps?

    su - hdfs                        ( on master02 )
    hdfs dfsadmin -safemode leave    ( on master02 )
    cp -rp /hadoop/hdfs/journal/hdfsha/current /hadoop/hdfs/journal/hdfsha/current.orig    ( on master02 )
    rm -f /hadoop/hdfs/journal/hdfsha/current/*    ( on master02 )
    hdfs namenode -bootstrapStandby    ( on master01 )
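The steps above can be sketched with a dry-run guard, so the destructive parts can be previewed before anything is deleted. The journal path follows the thread (standby NN on master01, active on master02); adjust it to your layout:

```shell
#!/bin/sh
# Sketch of the recovery sequence discussed above. With DRY_RUN=1 (the
# default) every step is only printed, nothing is executed.
set -eu
DRY_RUN=${DRY_RUN:-1}
JDIR=/hadoop/hdfs/journal/hdfsha/current

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

recover_standby() {
  # On master02 (active NN): leave safe mode, back up the edits, clear them.
  run su - hdfs -c 'hdfs dfsadmin -safemode leave'
  run cp -rp "$JDIR" "$JDIR.orig"
  run rm -f "$JDIR"/*
  # On master01 (standby NN): rebuild its metadata from the active NN.
  run su - hdfs -c 'hdfs namenode -bootstrapStandby'
}
```

Note the split across hosts: the first three steps run on the machine holding the journal/active role, while `-bootstrapStandby` must run on the standby NameNode it is rebuilding.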
10-24-2017
08:27 AM
We found this workaround: hadoop namenode -recover. Is there another solution for our problem?
10-24-2017
06:08 AM
Hi Jay, still waiting for your answer. Do you mean to run hdfs dfsadmin -safemode leave on the NameNode that is running (master02)? (The standby NameNode is on master01.) Second: should "then take a backup of the directory /hadoop/hdfs/namenode/current" be done on the active NameNode? I ask because it seems more logical to do this on the standby (the master01 machine).
10-23-2017
06:50 PM
My active NameNode is on master02, so I need to do the steps on the master02 machine? (Including the backup.)
10-23-2017
06:22 PM
Can you please show me how to bring the active NN out of safe mode with forceExit, and then run bootstrapStandby?