Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 593 | 06-04-2025 11:36 PM |
| | 1143 | 03-23-2025 05:23 AM |
| | 572 | 03-17-2025 10:18 AM |
| | 2158 | 03-05-2025 01:34 PM |
| | 1353 | 03-03-2025 01:09 PM |
12-19-2021
11:35 AM
I have executed the command:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

and I get the following warnings on the command line:

WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.

and the following error in the log:

2021-12-19 14:06:55,554 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 0. Expected transaction ID was 274473528
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:226)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:160)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:890)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 274473527; expected file to go up to 274474058
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:197)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:179)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:213)
... 12 more
2021-12-19 14:06:55,557 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 0. Expected transaction ID was 274473528
2021-12-19 14:06:55,558 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at XX-XXX-XX-XXXX.XXXXX.XX/XX.X.XX.XX

One more thing: I checked the 3 hosts that run the JournalNodes (nn1, nn2, host3) and ran the following:

cd /hadoop/hdfs/journal/<Cluster_name>/current
ll | wc -l
9653

All three hosts have the same number of files.
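An equal file count alone does not show that the three JournalNodes hold the same transactions; comparing the newest segment names is a cheap extra check. A minimal local sketch (the `/tmp/jn_*` directories are hypothetical stand-ins for `/hadoop/hdfs/journal/<Cluster_name>/current` on each host):

```shell
# Hypothetical sketch: two mock journal dirs with the SAME file count
# but a DIFFERENT in-progress segment -- equal counts are not enough.
mkdir -p /tmp/jn_nn1 /tmp/jn_nn2
touch /tmp/jn_nn1/edits_0000001-0000010 /tmp/jn_nn1/edits_inprogress_0000011
touch /tmp/jn_nn2/edits_0000001-0000010 /tmp/jn_nn2/edits_inprogress_0000099
for d in /tmp/jn_nn1 /tmp/jn_nn2; do
  echo "$d: $(ls "$d" | wc -l) files, newest segment: $(ls "$d" | sort | tail -1)"
done
```

On a real cluster you would run `ls | sort | tail -1` in the `current` directory of each JournalNode host and compare the segment names, not just the counts.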
12-17-2021
08:45 PM
@Koffi From the Ambari UI, are you seeing any HDFS alerts, e.g. for the ZKFailoverController or the JournalNodes? If so, please share the logs.
11-21-2021
08:56 PM
Hello @mike_bronson7, I also got the same WARN message in the ZooKeeper log. Did you solve this problem? My Kafka cluster spans 3 different servers, with 1 ZooKeeper instance and 1 broker on each server. 2 brokers went down after this WARN message.
11-15-2021
02:19 AM
Hi Rish, please try running this command and check the logs to see where it is stuck: yarn logs -applicationId <Application ID>. Refer to this doc for other basic YARN commands: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/data-operating-system/content/use_the_yarn_cli_to_view_logs_for_running_applications.html
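For completeness, a hedged sketch of invoking that command; the application ID below is a placeholder (take the real one from the ResourceManager UI or `yarn application -list`), and it requires a host with the YARN client installed:

```shell
# Placeholder application ID -- substitute your real one.
APP_ID=application_1637000000000_0001
if command -v yarn >/dev/null 2>&1; then
  # Fetch the aggregated logs for the application
  yarn logs -applicationId "$APP_ID"
else
  echo "yarn CLI not found; run this on a cluster gateway host"
fi
```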
10-27-2021
11:16 AM
@Koffi There are a couple of things here. First, resolve the "too many open files" issue by checking the ulimit:

$ ulimit -n

To increase it for the current session, depending on the above output:

ulimit -n 102400

Edit /etc/security/limits.conf to make the change permanent. Then restart the KDC and kadmin, using systemctl or, depending on your Linux version:

# /etc/rc.d/init.d/krb5kdc start
# /etc/rc.d/init.d/kadmin start

Then restart Atlas from the Ambari UI. Please revert after these actions.
Geoffrey
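The steps above can be sketched compactly as follows; 102400 is the example value from this thread, not a mandate, and raising the hard limit or editing limits.conf requires root, so those steps are shown as comments:

```shell
# Show the current soft and hard limits for open files
ulimit -n
ulimit -Hn
# Raise the soft limit for this session (must not exceed the hard limit):
#   ulimit -n 102400
# Persist across logins by adding to /etc/security/limits.conf:
#   *  soft  nofile  102400
#   *  hard  nofile  102400
# Then restart the KDC services (on systemd-based hosts):
#   systemctl restart krb5kdc kadmin
```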
10-25-2021
12:51 AM
@Aayuah, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
10-19-2021
07:22 PM
Thanks for the Solution
09-24-2021
06:32 AM
As it turns out, it seems a combination of things caused this job to fail. First, we installed a minor Java version update, going from jdk8u242-b08 to openjdk-8u292-b10. Second, the developer changed the way the files were being written from asynchronous to synchronous: they had been using CompletableFuture.runAsync, and took that out in favor of plain synchronous writes.
09-21-2021
07:59 AM
Thanks @Shelton. Reading the docs, I found these limitations:
- Replicating to and from HDP to Cloudera Manager 7.x is not supported by Replication Manager.
The only options I saw:
- Use DistCp to replicate data.
- For Hive external tables, contact Cloudera Support for information on replicating the data.
Reference: https://docs.cloudera.com/cdp/latest/data-migration/topics/cdpdc-compatibility-matrix-bdr.html
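A hedged DistCp sketch for the HDP-to-CDP path discussed above; the NameNode hosts, port, and paths are placeholders, and it assumes a host with the Hadoop client installed:

```shell
# Placeholder endpoints -- substitute your real NameNode hosts and paths.
SRC=hdfs://hdp-nn.example.com:8020/data/warehouse
DST=hdfs://cdp-nn.example.com:8020/data/warehouse
if command -v hadoop >/dev/null 2>&1; then
  # -update copies only missing/changed files; -p preserves file attributes
  hadoop distcp -update -p "$SRC" "$DST"
else
  echo "hadoop CLI not found; run this on a cluster gateway host"
fi
```

Between clusters with incompatible HDFS versions, the copy is usually run from the destination cluster, addressing the source over webhdfs:// instead of hdfs://.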
09-21-2021
12:55 AM
Hi @dv_conan Has any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information that is requested?