Member since 03-25-2019 · 9 Posts · 1 Kudos Received · 0 Solutions
07-25-2020 12:18 AM
Step / Result / Start Time / Duration:

1. Checking that NameNode Data Directories on host either do not exist, or are writable and empty. Can optionally clear directories.
   Result: Process host-validate-writable-empty-dirs (id=55074) on host (id=167) exited with 0 and expected 0 (Jul 25, 4:13:54 AM, 1.52s)
2. Checking that JournalNode Edits Directory on host either does not exist, or is writable and empty. Can optionally clear directory.
   Result: Process host-validate-writable-empty-dirs (id=55075) on host (id=167) exited with 0 and expected 0 (Jul 25, 4:13:56 AM, 1.43s)
3. Stopping services that depend on HDFS service HDFS.
   Result: All services successfully stopped. Cluster LIVE (Jul 25, 4:13:58 AM, 53.26s)
4. Transitioning NameNode on host to active mode and NameNode on host to standby mode.
   Result: Successfully failed over. (Jul 25, 4:14:51 AM, 23.37s)
5. Stopping roles of HDFS service HDFS.
   Result: Successfully executed command Stop on service HDFS (Jul 25, 4:15:15 AM, 9.46s)
6. Putting NameNode on host into safe mode.
   Result: The NameNode successfully entered Safemode. (Jul 25, 4:15:24 AM, 7.12s)
7. Saving namespace of NameNode on host
   Result: Command aborted because of exception: Command timed-out after 90 seconds
07-25-2020 12:16 AM
Hi,
We are migrating the NameNode role to another server. While doing this, we get a "Saving namespace" timeout:

Saving namespace of NameNode on host XXX
Command aborted because of exception: Command timed-out after 90 seconds
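A common workaround, sketched here on the assumption that the 90-second limit is a Cloudera Manager command timeout rather than a NameNode failure, is to perform the safe-mode and namespace-save steps manually with the standard HDFS CLI (run as the hdfs superuser), then retry the migration wizard:

```shell
# Put the active NameNode into safe mode (writes blocked, reads still allowed)
hdfs dfsadmin -safemode enter

# Save the namespace manually; on a large fsimage this can easily
# exceed the 90s the wizard waits, but the CLI will wait for completion
hdfs dfsadmin -saveNamespace

# Leave safe mode once the save has finished
hdfs dfsadmin -safemode leave
```

These are the stock `hdfs dfsadmin` subcommands; they do the same work the wizard step attempts, just without the wizard's timeout.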
Labels:
Apache Hadoop
07-24-2020 12:14 PM
Hi,
We enabled HA and are also getting the same error below. How can we fix this issue?

The health test result for NAME_NODE_HA_CHECKPOINT_AGE has become concerning: The filesystem checkpoint is 3 minute(s), 36 second(s) old. This is 6.00% of the configured checkpoint period of 1 hour(s). 2,046,490 transactions have occurred since the last filesystem checkpoint. This is 204.65% of the configured checkpoint transaction target of 1,000,000. Warning threshold: 200.00%.
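The warning fires on the transaction count, not the age: the checkpoint is only minutes old, but more than twice the configured transaction target has accumulated, which usually means the Standby NameNode (or SecondaryNameNode) is not checkpointing. The percentages in the message can be reproduced directly; this sketch uses the defaults quoted in the warning (variable names here are illustrative, not actual config keys):

```python
# Reproduce the NAME_NODE_HA_CHECKPOINT_AGE health-test arithmetic
checkpoint_period_s = 3600       # configured checkpoint period: 1 hour
checkpoint_txn_target = 1_000_000  # configured checkpoint transaction target
warn_threshold_pct = 200.0       # warning threshold from the message

age_s = 3 * 60 + 36              # checkpoint age: 3 min 36 s
txns_since = 2_046_490           # transactions since last checkpoint

age_pct = 100.0 * age_s / checkpoint_period_s
txn_pct = 100.0 * txns_since / checkpoint_txn_target

print(f"age: {age_pct:.2f}%")    # 6.00%, matches the message
print(f"txns: {txn_pct:.2f}%")   # 204.65%, matches the message
print("WARN" if max(age_pct, txn_pct) >= warn_threshold_pct else "OK")
```

Since 204.65% exceeds the 200% threshold, the test reports "concerning"; the fix is to get checkpointing working again (check the Standby NameNode / SecondaryNameNode logs), not to raise the threshold.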
01-30-2020 01:27 AM
Hi Cloudera Team,
We have upgraded Cloudera 5.11 to 6.3.1. We have an ORC table in Hive; after the upgrade, we are not able to run SELECT queries against the table:

Caused by: java.lang.ArrayIndexOutOfBoundsException: 6
    at org.apache.orc.OrcFile$WriterVersion.from(OrcFile.java:145)
    at org.apache.orc.impl.OrcTail.getWriterVersion(OrcTail.java:74)
    at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:385)
    at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:62)
    at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:89)
    at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat.getRecordReader(VectorizedOrcInputFormat.java:186)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createVectorizedReader(OrcInputFormat.java:1672)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1683)
    at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:68)
    ... 16 more

Is there any fix for this issue?

Regards,
S.Abinanth
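The stack trace points at `OrcFile.WriterVersion.from(...)` failing on id 6: the file footer declares a writer version the reader bundled with this Hive does not know, and the lookup by id runs past the end of the reader's version table. The sketch below is a Python model of that failure mode, not the actual ORC source; the version names mirror the ORC `WriterVersion` enum as I understand it, so treat them as an assumption:

```python
# Illustrative model of OrcFile.WriterVersion.from: a reader that only
# knows writer-version ids 0..5 fails on a footer declaring id 6.
KNOWN_WRITER_VERSIONS = [
    "ORIGINAL",    # id 0
    "HIVE_8732",   # id 1
    "HIVE_4243",   # id 2
    "HIVE_12055",  # id 3
    "HIVE_13083",  # id 4
    "ORC_101",     # id 5
]

def writer_version_from(version_id: int) -> str:
    # Index lookup by id, like the Java code; an id past the end of the
    # table raises IndexError (Python's analogue of AIOOBE: 6).
    return KNOWN_WRITER_VERSIONS[version_id]

print(writer_version_from(5))  # a known version resolves fine
try:
    writer_version_from(6)     # footer from a newer writer than the reader knows
except IndexError:
    print("unknown writer version id 6 -> read fails")
```

If this model holds, the mismatch is between the ORC library reading the file and the one that wrote it; inspecting a problem file's footer (for example with the `hive --orcfiledump` utility) should confirm which writer version the file carries.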