Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1918 | 06-15-2020 05:23 AM |
| | 15461 | 01-30-2020 08:04 PM |
| | 2071 | 07-07-2019 09:06 PM |
| | 8109 | 01-27-2018 10:17 PM |
| | 4571 | 12-31-2017 10:12 PM |
11-22-2017
06:06 PM
Do you have some updates?
11-22-2017
05:26 PM
If you don't run the filesystem checker, the apparent corruption in the filesystem may get worse. Unchecked, this can lead to data corruption or, in the unlikely worst case, destruction of the filesystem. During the filesystem check, file structures within the filesystem are checked and, if necessary, repaired. The repair takes no account of content; it is all about making sure the filesystem is self-consistent. If you run e2fsck -y /dev/sdc you have no opportunity to validate the corrections being applied. On the other hand, if you run e2fsck -n /dev/sdc you can see what would happen without anything actually being applied, and if you run e2fsck /dev/sdc you will be asked each time a significant correction needs to be applied.

In summary:
- If you ignore the warning and do nothing, over time you may lose your data.
- If you run with -y you have no option to review the potentially destructive changes, and you may lose your data.
- If you run with -n you will not fix any errors, and over time may lose your data, but you will get to review the set of changes that would be made.
- If you run with no special flag you will be prompted to fix relevant errors, and you can decide for each whether you are going to need direct professional assistance.

Recommendation:
1. Run e2fsck -n /dev/sdc to review the errors.
2. Decide whether this merits a subsequent e2fsck /dev/sdc (or possibly e2fsck -y /dev/sdc), or whether you would prefer to obtain direct professional assistance.
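As a side note, e2fsck reports what it did through a bitmask exit code (0 = no errors, 1 = errors corrected, 2 = system should be rebooted, 4 = errors left uncorrected, 8 = operational error). A small POSIX-shell sketch to decode it after a run; the helper name is hypothetical:

```shell
# Hypothetical helper: decode e2fsck's documented exit-code bitmask
# (0 = no errors, 1 = errors corrected, 2 = reboot needed,
#  4 = errors left uncorrected, 8 = operational error).
e2fsck_status() {
  code=$1
  msg=""
  [ $((code & 1)) -ne 0 ] && msg="$msg errors-corrected"
  [ $((code & 2)) -ne 0 ] && msg="$msg reboot-needed"
  [ $((code & 4)) -ne 0 ] && msg="$msg errors-left-uncorrected"
  [ $((code & 8)) -ne 0 ] && msg="$msg operational-error"
  [ -n "$msg" ] || msg=" clean"
  echo "${msg# }"
}

# Usage after the dry run recommended above:
#   e2fsck -n /dev/sdc; e2fsck_status $?
```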
11-22-2017
01:11 PM
+1 for the answer; I will test it on my host.
11-22-2017
12:09 PM
As we all know, we can delete the worker/Kafka machine from the cluster, but the configuration on the host still exists. Our target is a full host uninstall (including re-creating the filesystems, deleting RPMs, users, files, configuration, etc.), followed by a completely new installation using API commands to join the host to the cluster.

What we have done so far:
- deleted worker07 from the Ambari cluster
- re-created the filesystem on all disks such as /dev/sdc, /dev/sdd, etc.

The big problem now is how to uninstall the rest of the configuration, such as users, RPMs and other leftovers. Please advise how to continue. What documentation exists, if any, for this process?
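For the "delete the host via API" part of the question: the Ambari REST API exposes hosts as resources under /api/v1/clusters/&lt;cluster&gt;/hosts/&lt;host&gt;, and a host resource can be removed with an HTTP DELETE. A sketch only, not a verified procedure; the helper name, server name, credentials and default port 8080 below are assumptions:

```shell
# Hypothetical helper: build the Ambari REST URL for a host resource.
# Assumption: the Ambari server listens on the default port 8080.
ambari_host_url() {
  server=$1; cluster=$2; host=$3
  echo "http://${server}:8080/api/v1/clusters/${cluster}/hosts/${host}"
}

# Removing a host would then look like this (admin:admin credentials
# and the server/cluster names are assumptions):
#   curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
#     "$(ambari_host_url ambari-server.example.com mycluster worker07)"
```

Note that this only removes the host from Ambari's view of the cluster; cleaning users, RPMs and leftover files on the host itself still has to be done separately, which is exactly the open part of the question.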
Labels:
- Apache Ambari
- Apache Hadoop
11-21-2017
05:27 PM
The main logs are under /var/log/hadoop-yarn/yarn.
11-21-2017
04:58 PM
dfs.datanode.data.dir is set to:
/wrk/sdb/hadoop/hdfs/data,/wrk/sdc/hadoop/hdfs/data,/wrk/sdd/hadoop/hdfs/data,/wrk/sde/hadoop/hdfs/data,/wrk/sdf/hadoop/hdfs/data,/wrk/sdg/hadoop/hdfs/data,/wrk/sdh/hadoop/hdfs/data,/wrk/sdi/hadoop/hdfs/data,/wrk/sdj/hadoop/hdfs/data,/wrk/sdk/hadoop/hdfs/data
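Since the datanode fails only on the re-added node, one quick thing to verify is that every directory in dfs.datanode.data.dir exists and is writable on that node. A minimal POSIX-shell sketch; the function name is hypothetical, and it assumes you run it as the user the datanode runs as:

```shell
# Hypothetical helper: check each comma-separated directory from
# dfs.datanode.data.dir for existence and writability.
check_data_dirs() {
  dirs=$1
  rc=0
  for d in $(echo "$dirs" | tr ',' ' '); do
    if [ -d "$d" ] && [ -w "$d" ]; then
      echo "ok $d"
    else
      echo "missing $d"
      rc=1
    fi
  done
  return $rc
}

# Usage (paste the value from the property above):
#   check_data_dirs "/wrk/sdb/hadoop/hdfs/data,/wrk/sdc/hadoop/hdfs/data"
```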
11-21-2017
04:56 PM
dfs.datanode.data.dir is OK (all the worker machines that are working are defined with this value).
11-21-2017
04:55 PM
Yes, the datanode fails only on the new node (this node was deleted from the cluster a month ago, and now we are adding it to the cluster again).
11-21-2017
04:33 PM
Yes, all the disks are mounted.