Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1970 | 06-15-2020 05:23 AM |
|  | 16055 | 01-30-2020 08:04 PM |
|  | 2108 | 07-07-2019 09:06 PM |
|  | 8246 | 01-27-2018 10:17 PM |
|  | 4671 | 12-31-2017 10:12 PM |
11-23-2017
03:42 PM
Hi Jay, the hostnames are correct on the two nodes. Regarding /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid, is it necessary to remove these files even though we rebooted the two worker machines?
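For what it's worth, a minimal sketch of how one might check whether that PID file is stale before touching it (the path is the one quoted above; everything else is illustrative):

```bash
# Minimal sketch: check whether the DataNode PID file is stale before deleting it.
PIDFILE=/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
if [ -f "$PIDFILE" ]; then
  PID=$(cat "$PIDFILE")
  if ps -p "$PID" > /dev/null 2>&1; then
    echo "DataNode process $PID is still running; leave the PID file alone."
  else
    echo "PID $PID is not running; the file is stale and can be removed."
    # rm -f "$PIDFILE"   # uncomment only after confirming the process is gone
  fi
fi
```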
11-23-2017
03:28 PM
In our Ambari cluster all DataNodes are up, but the dashboard shows only 3 of 5 as up. How / why does the dashboard see only 3 of 5, and what needs to be checked or synced here? Note that two hosts (worker machines) were added to the Ambari cluster recently; we restarted the ambari-agent and rebooted those servers, but the dashboard status is still 3/5.
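One way to narrow down which hosts the dashboard considers down is to ask Ambari itself. A minimal sketch, assuming default admin credentials, the default port 8080 and a cluster named mycluster (all placeholders, as are the exact field names):

```bash
# List each host and the status/state Ambari currently reports for it.
curl -s -u admin:admin \
  'http://master01.sys56.com:8080/api/v1/clusters/mycluster/hosts?fields=Hosts/host_status,Hosts/host_state'

# On each of the two new workers: confirm the agent is registered and heartbeating.
ambari-agent status
tail -n 50 /var/log/ambari-agent/ambari-agent.log
```

If the two new workers show a lost-heartbeat or unknown state, the agent log usually says why.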
Labels:
- Apache Ambari
- Apache Hadoop
11-23-2017
01:44 PM
We solved this issue by doing the following: we noticed that the host running the ambari-server could not SSH to the other machines in the cluster, so we copied the public key from the master machine (ambari-server) to each machine in the cluster and restarted the node on which the Standby NameNode was down.
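For reference, a minimal sketch of that key distribution step (the worker hostnames and the root user are placeholders; ssh-copy-id is the usual shortcut, but copying ~/.ssh/id_rsa.pub by hand works just as well):

```bash
# On the ambari-server host: generate a key pair if one does not exist yet,
# then push the public key to every cluster node (worker01..worker05 are placeholders).
[ -f ~/.ssh/id_rsa.pub ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in worker01 worker02 worker03 worker04 worker05; do
  ssh-copy-id "root@$host"
done
```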
11-23-2017
11:59 AM
Please advise what the solution to this problem is. We are getting these errors (from the log under /var/lib/ambari-agent/data):
2017-11-13 11:49:49,056 - Getting jmx metrics from NN failed. URL: http://master01.sys56.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem
Traceback (most recent call last): ExecutionFailed: Execution of 'curl -s 'http://master01.sys56.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'
We also tried this solution, but without results: https://community.hortonworks.com/articles/114869/how-to-resolve-namenode-nn1-is-not-listed-as-activ.html
When we run it manually, curl -s 'http://master01.sys56:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' followed by echo $? returns:
7
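curl exit status 7 means "failed to connect to host", i.e. the request never reached the NameNode HTTP port at all. A minimal sketch of how one might narrow that down (hostname and port taken from the log above):

```bash
# Exit code 7 from curl means the TCP connection itself failed.
getent hosts master01.sys56.com          # does the name resolve?
curl -sv 'http://master01.sys56.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' \
  -o /dev/null                           # -v shows exactly where the connection fails
ss -ltn | grep 50070                     # run on master01: is anything listening on 50070?
```

Note that the manual attempt above used master01.sys56 rather than master01.sys56.com; if that shorter name does not resolve, it would by itself explain the exit code 7 from the manual test.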
Labels:
- Apache Ambari
- Apache Hadoop
11-22-2017
06:06 PM
Do you have any updates?
11-22-2017
06:06 PM
Do you have any updates?
11-22-2017
05:26 PM
If you don't run the filesystem checker, the apparent corruption in the filesystem may get worse. Unchecked, this can lead to data corruption or, in the unlikely worst case, destruction of the filesystem. During the filesystem check, file structures within the filesystem are checked and, if necessary, repaired. The repair takes no account of content; it is all about making the filesystem self-consistent. If you run e2fsck -y /dev/sdc you have no opportunity to validate the corrections being applied. On the other hand, if you run e2fsck -n /dev/sdc you can see what would happen without anything actually being applied, and if you run e2fsck /dev/sdc you will be asked each time a significant correction needs to be applied.

In summary:
- If you ignore the warning and do nothing, over time you may lose your data.
- If you run with -y you have no option to review the potentially destructive changes, and you may lose your data.
- If you run with -n you will not fix any errors, and over time you may lose your data, but you will get to review the set of changes that would be made.
- If you run with no special flag you will be prompted to fix relevant errors, and you can decide for each whether you are going to need direct professional assistance.

Recommendation:
- Run e2fsck -n /dev/sdc to review the errors.
- Decide whether this merits a subsequent e2fsck /dev/sdc (or possibly e2fsck -y /dev/sdc), or whether you would prefer to obtain direct professional assistance.
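A minimal sketch of that workflow on the device mentioned above (the filesystem must be unmounted first):

```bash
# The filesystem must not be mounted while e2fsck runs on it.
umount /dev/sdc

e2fsck -n /dev/sdc   # dry run: report problems, change nothing
e2fsck    /dev/sdc   # interactive: prompt before each significant repair
e2fsck -y /dev/sdc   # non-interactive: apply every suggested repair automatically
```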
11-22-2017
01:11 PM
+1 for the answer, I will test it on my host.
11-22-2017
12:09 PM
As everyone knows, we can delete a worker/Kafka machine from the cluster, but the configuration on the host still exists. Our target is a full host uninstall (including re-creating the filesystems and deleting RPMs, users, files, configuration, etc.), followed by a completely new installation, using API commands to join the host to the cluster. What we have done so far: deleted worker07 from the Ambari cluster and re-created the filesystem on all disks (/dev/sdc, /dev/sdd, etc.). The big problem now is how to uninstall the remaining configuration such as users, RPMs and other leftovers. Please advise how to continue. What documentation exists, if any, for this process?
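A rough sketch of the kind of cleanup involved, assuming the host's components have already been stopped and deleted in Ambari; the cluster name, credentials, package globs and user list below are assumptions that will differ per HDP version, so treat this as a starting point rather than a complete procedure:

```bash
# Remove the host from the Ambari cluster via the REST API
# (cluster name "mycluster" and admin:admin are placeholders).
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  'http://master01.sys56.com:8080/api/v1/clusters/mycluster/hosts/worker07'

# On worker07 itself: stop the agent, then remove stack packages,
# service users and leftover directories (all names below are examples).
ambari-agent stop
yum -y erase 'hadoop*' 'hdp-select*' 'ambari-agent*' 'zookeeper*' 'kafka*'
for u in hdfs yarn mapred zookeeper kafka ambari-qa; do userdel -r "$u" 2>/dev/null; done
rm -rf /etc/hadoop /etc/zookeeper /var/log/hadoop* /var/lib/ambari-agent /hadoop
```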
Labels:
- Apache Ambari
- Apache Hadoop
11-21-2017
05:27 PM
The main logs are under /var/log/hadoop-yarn/yarn.
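For example, a quick way to look at them (the exact file names depend on whether the host runs the ResourceManager or a NodeManager, so the glob below is an assumption):

```bash
# List the most recently updated YARN logs, then follow them live.
ls -lt /var/log/hadoop-yarn/yarn/ | head
tail -f /var/log/hadoop-yarn/yarn/*.log
```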