10-13-2020 08:45 AM
Hi all,

We currently have corrupt files (missing blocks) on our cluster. The active NameNode is finding only 2 out of 3 replicas for most of those files. Since the HDFS service does not seem to be re-replicating them on its own, we went through the existing commands to try to fix things manually, and we found the hdfs "fsck" command here: https://hadoop.apache.org/docs/r2.8.4/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fsck

From the documentation alone, I don't understand what the "-delete" option does. Does it delete all blocks of the file? Does it only delete the missing blocks (their metadata)? I'm a bit confused.

Does anyone have experience with this command who could help me clarify what it does?

cbfr
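For context, these are roughly the invocations we have been looking at (the path below is just a placeholder for one of our affected directories):

# Report overall filesystem health and list files with corrupt/missing blocks
hdfs fsck / -list-corruptfileblocks

# Show per-file block and replica details for a suspect path
hdfs fsck /path/to/suspect/dir -files -blocks -locations

# The option in question: what exactly does this delete?
hdfs fsck / -delete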
Labels: HDFS
03-19-2020 01:00 PM
Hi all,
We've recently upgraded our cluster from Cloudera Enterprise 5.13.1 to 5.16.1, and it seems the value of the Impala Query Monitoring Failures Threshold can no longer be changed.
By default, the Warning and Critical fields are set to "Never" and "Any" respectively, but when we try to switch them to "Specify" and set a value, we get an error saying "<value> is less than the minimum allowed value 0". We've tried entering all sorts of values, including zero and negative numbers, and none are accepted. We know for sure this error didn't occur in the previous version.
To us, it looks like a logic error in the value check, but maybe there's some other cause for this error that we haven't thought of.
Does anyone have the same issue, or can anyone think of a root cause or a fix?
cbfr