Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| (title unavailable) | 1605 | 06-04-2025 11:36 PM |
| (title unavailable) | 2071 | 03-23-2025 05:23 AM |
| (title unavailable) | 984 | 03-17-2025 10:18 AM |
| (title unavailable) | 3734 | 03-05-2025 01:34 PM |
| (title unavailable) | 2570 | 03-03-2025 01:09 PM |
06-15-2019
02:16 PM
@choppadandi vamshi krishna You can only create a materialized view on transactional tables, where changes to the base table are logged and a refresh mechanism updates the materialized view whenever it is queried. Please check whether the base table is transactional; below are steps to help you determine that. The assumption is that your table cars is in the default database.

# hive -e "describe extended <Database>.<tablename>;" | grep "transactional=true"

If you get output containing the string you grep for, the table is transactional. Example:

# hive -e "describe extended default.cars;" | grep "transactional=true"

Otherwise, alter the flat table to make it transactional:

ALTER TABLE cars SET TBLPROPERTIES ('transactional'='true');

Then try creating the materialized view again; it should succeed. Please revert.
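The check above can be wrapped in a small shell helper, sketched below. The `is_transactional` function name and the sample string are my own illustration, not part of Hive; in practice you would feed it the real output of `hive -e "describe extended <db>.<table>;"`.

```shell
#!/bin/sh
# Hypothetical helper: succeeds (exit 0) when the 'describe extended'
# output marks a table as transactional.
is_transactional() {
  printf '%s\n' "$1" | grep -q "transactional=true"
}

# Sample 'describe extended' fragment, for illustration only.
sample="Detailed Table Information ... parameters:{transactional=true, numFiles=4}"
if is_transactional "$sample"; then
  echo "transactional"        # prints "transactional" for this sample
else
  echo "not transactional"
fi
```

If the function fails, the ALTER TABLE ... SET TBLPROPERTIES step above is the fix before retrying the materialized view.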
06-14-2019
07:30 PM
@Michael Bronson Is all good?
06-14-2019
07:57 AM
@Michael Bronson Use the journal node that is healthy (the one backing the active namenode). After saving the namespace, also wipe out the other journal node that had edits_inprogress_0000000000018783114.empty; remember to back up (zip) all the journal nodes first as good practice. Once you have copied the good edits to all 3 destinations, proceed. When you start the namenodes after starting the journal nodes, one should become active and the other standby thanks to ZKFailoverController.
06-14-2019
06:34 AM
@Michael Bronson Can you confirm that the other 2 journal nodes both have a last-promised-epoch of 30? If that was the value when the failure occurred, it's okay to replace the contents of /hadoop/hdfs/journal/hdfsha/current/* with the contents from the good (active) namenode's journal directory, then proceed with the subsequent steps.
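The epoch cross-check can be sketched as a small shell helper. The `same_epoch` function name is my own; the arguments are the journal current directories (e.g. /hadoop/hdfs/journal/hdfsha/current on each node, fetched locally or over ssh):

```shell
#!/bin/sh
# Hypothetical check: compare last-promised-epoch values from two
# journal directories; succeeds (exit 0) only when they match.
same_epoch() {
  e1=$(cat "$1/last-promised-epoch")
  e2=$(cat "$2/last-promised-epoch")
  [ "$e1" = "$e2" ]
}
```

If the epochs differ, the node with the lower value is the stale one whose edits should be replaced.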
06-13-2019
10:04 PM
1 Kudo
@Shashank Naresh Great news!! If this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. That helps community users find the solution quickly for these kinds of errors.
06-13-2019
10:02 PM
@Shashank Naresh What do you mean by choosing only one network adapter? There are different adapters, and all of them are network adapters. Can you elaborate?
06-13-2019
09:38 PM
2 Kudos
@Michael Bronson Yes, it's possible to recover from this situation, which happens sometimes in a NameNode HA setup.

Journal nodes are a distributed system for storing edits. The active NameNode, acting as a client, writes edits to the journal nodes and commits only when they are replicated across the journal nodes. The standby NameNode needs to read the edits to stay in sync with the active one; it can read from any of the replicas stored on the journal nodes. ZKFC makes sure that only one NameNode is active at a time. However, when a failover occurs, it is still possible that the previous active NameNode serves out-of-date read requests to clients until it shuts down when trying to write to the JournalNodes. For this reason, we should configure fencing methods even when using the Quorum Journal Manager.

For fencing, the Quorum Journal Manager uses epoch numbers. An epoch number is an integer that only ever increases and is unique once assigned. The NameNode generates an epoch number with a simple algorithm and sends it with every RPC request to the QJM. When you configure NameNode HA, the first active NameNode gets epoch value 1; on each failover or restart the epoch number increases. A NameNode with a higher epoch number is considered newer than any NameNode with an earlier epoch number.

Now let's proceed with the real case. Note the hostname of the healthy namenode. Assuming you are logged on as root, here is how to fix one corrupted JN's edits:

# su - hdfs

1) Put both NameNodes in safe mode (NN HA):
$ hdfs dfsadmin -safemode enter
Sample output:
Safe mode is ON in namenode1/xxx.xxx.xx.xx:8020
Safe mode is ON in namenode2/xxx.xxx.xx.xx:8020

2) Save the namespace:
$ hdfs dfsadmin -saveNamespace

3) On the non-working journal node, change directory to /hadoop/hdfs/journal/hdfsha/current/. Get the epoch and note the number; it should be lower than on the working node (cross-check):
$ cat last-promised-epoch

4) On the non-working journal node, the files in the journal dir /hadoop/hdfs/journal/hdfsha/current/ should look like below:
-rw-r--r-- 1 hdfs hadoop 1019566 Jun 10 09:45 edits_0000000000000928232-0000000000000935461
-rw-r--r-- 1 hdfs hadoop 1014516 Jun 10 15:45 edits_0000000000000935462-0000000000000942657
-rw-r--r-- 1 hdfs hadoop 1017540 Jun 10 21:46 edits_0000000000000942658-0000000000000949874
-rw-r--r-- 1 hdfs hadoop 1048576 Jun 10 23:36 edits_0000000000000949875-0000000000000952088
-rw-r--r-- 1 hdfs hadoop 1048576 Jun 13 22:27 edits_inprogress_0000000000000952089
-rw-r--r-- 1 hdfs hadoop  277083 Jun 10 21:46 fsimage_0000000000000949874
-rw-r--r-- 1 hdfs hadoop      62 Jun 10 21:46 fsimage_0000000000000949874.md5
-rw-r--r-- 1 hdfs hadoop  276740 Jun 13 22:13 fsimage_0000000000000952088
-rw-r--r-- 1 hdfs hadoop      62 Jun 13 22:13 fsimage_0000000000000952088.md5
-rw-r--r-- 1 hdfs hadoop       7 Jun 13 22:13 seen_txid
-rw-r--r-- 1 hdfs hadoop     206 Jun 13 22:13 VERSION

5) While in the current directory, back up all the files; note the (.) indicating the current dir:
$ tar -zcvf editsbck.tar.gz .

6) Move editsbck.tar.gz to a safe location:
$ mv editsbck.tar.gz /home/bronson

7) Back up or move any directory therein, e.g.:
$ mv paxos paxos.bck

8) Delete all files in /hadoop/hdfs/journal/hdfsha/current/ on the bad node (remember you have the backup editsbck.tar.gz):
$ rm -rf /hadoop/hdfs/journal/hdfsha/current/*

9) Zip or tar the journal dir from a working JN node:
$ tar -zcvf good_editsbck.tar.gz -C /hadoop/hdfs/journal/hdfsha/current .

10) Copy good_editsbck.tar.gz to the non-working JN node, to the same path as on the working node:
# scp good_editsbck.tar.gz root@namenode2:/hadoop/hdfs/journal/hdfsha/current/

11) Untar the files:
# tar xvzf good_editsbck.tar.gz -C /hadoop/hdfs/journal/hdfsha/current/

12) Change ownership to hdfs (the -R is recursive, in case you have directories):
# chown -R hdfs:hadoop /hadoop/hdfs/journal/hdfsha/current/*

13) Log on to the unhealthy node and restart the journal nodes. Start all 3 journal nodes; note I run the commands as root. If you see "journalnode running as process xxxx", stop it first.

14) Stop the journal node:
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh stop journalnode"

15) Start the journal node:
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh start journalnode"

Then restart HDFS from the Ambari UI. After some minutes you should see healthy active and standby NameNodes, NameNode failover should occur transparently, and the alerts should gradually disappear. HTH
06-08-2019
05:53 AM
@Adil BAKKOURI When I looked at @Jay's answer, something struck me: this is the same question I responded to last night. This is a duplicate thread; please desist from opening multiple threads, as members won't have the history and so can't rule out answers already provided. https://community.hortonworks.com/questions/247498/errno-111-connexion-failed-between-hosts-only-hdfs.html For example, Jay could have built on my previous answer to provide alternative answers.
06-07-2019
06:11 PM
@kailash salton That is not the backup command I am interested in. Those commands are for listing and checking backups; I am expecting something like:

$ hbase backup create "full" hdfs://xxxxxxxxxxxxxxxxxxx

Please show the command you ran and the location of the backup directory.
06-07-2019
03:53 PM
@kailash salton Can you share the exact backup command? My interest is the backup dir !