Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1612 | 06-04-2025 11:36 PM |
| | 2072 | 03-23-2025 05:23 AM |
| | 984 | 03-17-2025 10:18 AM |
| | 3742 | 03-05-2025 01:34 PM |
| | 2580 | 03-03-2025 01:09 PM |
06-08-2022
04:08 AM
1 Kudo
Hi Andrea, Great to see that it has been found now and thanks for marking the post as answered. All the best, Miklos
05-30-2022
01:30 PM
What are the answers to this question as of 2022? From what I'm seeing, links to the repos for HDP 3.1.4 and earlier are behind a paywall as well. Is there no free/open-source version of HDP/CDP anymore? And will there never be one?
05-27-2022
12:07 AM
Thank you!!!
05-26-2022
12:07 AM
@George-Megre First of all, the master nodes are not meant for launching tasks or direct interaction. Edge (gateway) nodes are used to run client applications and cluster administration tools. Setting up an edge/gateway node is similar to setting up any Hadoop node, except that no Hadoop cluster services run on it; edge nodes are mere entry points and connection gateways to the master components like HDFS (NameNode), HBase, etc., provided you have installed the client libraries. In your case, I am sure you have the HBase client/gateway roles on the 3 nodes and not on the master node. The HBase client role gives you connectivity to HBase, but again, I don't see why you would want to connect to or initiate the HBase shell from the master node. Geoffrey
05-19-2022
02:47 PM
I got this error when I enabled HBase backup with the following in hbase-site.xml:

```xml
<property>
  <name>hbase.backup.enable</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>org.apache.hadoop.hbase.backup.master.BackupLogCleaner,...</value>
</property>
<property>
  <name>hbase.procedure.master.classes</name>
  <value>org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager,...</value>
</property>
<property>
  <name>hbase.procedure.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager,...</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.backup.BackupObserver,...</value>
</property>
<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <value>org.apache.hadoop.hbase.backup.BackupHFileCleaner,...</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>./tmp</value>
</property>
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
</configuration>
```
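Since the actual error message isn't quoted, one thing worth double-checking is what the fragment actually sets once it is wrapped in the `<configuration>` root its closing tag implies. A minimal sketch (the two-property excerpt below is hypothetical, not the full file) that parses the properties:

```python
import xml.etree.ElementTree as ET

# Hypothetical two-property excerpt of the hbase-site.xml above,
# wrapped in the <configuration> root implied by the closing tag.
fragment = """<configuration>
  <property><name>hbase.backup.enable</name><value>true</value></property>
  <property><name>hbase.cluster.distributed</name><value>false</value></property>
</configuration>"""

# Build a name -> value map from every <property> element.
props = {p.findtext("name"): p.findtext("value")
         for p in ET.fromstring(fragment).iter("property")}

print(props["hbase.backup.enable"])        # -> true
print(props["hbase.cluster.distributed"])  # -> false
```

Note that `hbase.cluster.distributed` is set to `false` (standalone mode) while the backup feature is aimed at distributed, HDFS-backed clusters; that combination may itself be worth checking as a source of the error.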
04-05-2022
07:43 AM
@araujo do you have any suggestion for this case ?
03-29-2022
12:17 PM
1 Kudo
Hi @Shelton, sorry for the delay in responding. I finally found the solution to my problem. I needed to specify the column properties, and then COMPUTE STATS worked perfectly. ```sql ALTER TABLE myschema.mytable ADD COLUMNS (mycolumn1 TIMESTAMP NULL COMPRESSION DEFAULT_COMPRESSION, mycolumn2 TIMESTAMP NULL COMPRESSION DEFAULT_COMPRESSION); ```
01-16-2022
09:53 AM
@Koffi This is typical of a rogue process that hasn't released the port: Caused by: java.net.BindException: Address already in use. You will need to run # kill -9 5356 and then restart the NameNode; that should resolve the issue.
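To illustrate why killing the old process fixes this: the `BindException` simply means some process already holds the port. A small self-contained sketch (loopback only, not a real NameNode) that reproduces the same condition:

```python
import errno
import socket

# A first socket takes a port, simulating the rogue process still
# holding the NameNode's RPC port.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))          # let the kernel pick a free port
holder.listen(1)
port = holder.getsockname()[1]

# A second bind on the same port fails the same way the restarting
# NameNode does, with EADDRINUSE ("Address already in use").
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    in_use = False
except OSError as e:
    in_use = e.errno == errno.EADDRINUSE
finally:
    second.close()
    holder.close()

print(in_use)  # -> True
```

Once the holding process exits (or is killed), the port is released and the second bind would succeed.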
01-11-2022
09:44 AM
Hi, I am facing the same trouble: when trying to load the Ambari web page, it shows a 502 error. It was working fine two days ago, but the error has been showing since this morning when I tried to log in. I have tried all the replies stated above; none worked. I would be grateful if someone could help resolve this issue. Many thanks.
12-19-2021
11:35 AM
I executed the command su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode" and I get the following warnings on the command line: WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead. and the following error in the log: 2021-12-19 14:06:55,554 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 0. Expected transaction ID was 274473528
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:226)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:160)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:890)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 274473527; expected file to go up to 274474058
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:197)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:179)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:213)
... 12 more
2021-12-19 14:06:55,557 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 0. Expected transaction ID was 274473528
2021-12-19 14:06:55,558 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at XX-XXX-XX-XXXX.XXXXX.XX/XX.X.XX.XX One thing: I went to check the 3 hosts that have the journal nodes (nn1, nn2, host3). I ran the following commands: cd /hadoop/hdfs/journal/<Cluster_name>/current
ll | wc -l
9653 They all have the same number of files.
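Counting files alone can miss the problem: the `PrematureEOFException` above says one copy of an edits segment is shorter than expected, so the same segment name can exist on all three JournalNodes with different sizes. A small sketch (hypothetical helper, not part of HDFS; the segment filename below is shortened for illustration) that flags same-named files whose sizes diverge across journal directories:

```python
import os
import tempfile

def divergent_segments(journal_dirs):
    """Return {filename: [size per dir]} for files whose size differs
    across the given journal directories -- same count, different content."""
    sizes = {}
    for d in journal_dirs:
        for name in sorted(os.listdir(d)):
            path = os.path.join(d, name)
            if os.path.isfile(path):
                sizes.setdefault(name, []).append(os.path.getsize(path))
    return {n: s for n, s in sizes.items() if len(set(s)) > 1}

# Usage sketch against temporary stand-ins for the three JN dirs:
dirs = []
for size in (100, 100, 40):                       # third copy truncated
    d = tempfile.mkdtemp()
    with open(os.path.join(d, "edits_0000001-0000500"), "wb") as f:
        f.write(b"\x00" * size)
    dirs.append(d)

print(divergent_segments(dirs))  # -> {'edits_0000001-0000500': [100, 100, 40]}
```

On a real cluster, the directories would be `/hadoop/hdfs/journal/<Cluster_name>/current` on each JournalNode host; a segment flagged this way is a candidate truncated copy behind the premature EOF.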