Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
11-18-2019
11:21 AM
@mike_bronson7 It looks like you forgot the host_components part of the URL. Assuming the Ambari server is running on node02 and the Thrift Server is on node01 and node02:

Delete on node01

curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"

Delete on node02

curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node02/host_components/SPARK2_THRIFTSERVER"

HTH
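A quick hedged check, reusing the same admin credentials, Ambari host, and cluster name HDP from the commands above (they may differ in your environment): list the host components and confirm SPARK2_THRIFTSERVER is no longer returned once the DELETE has succeeded.

# list components still registered on node01 (repeat for node02)
curl -s -u admin:admin -H "X-Requested-By: ambari" "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components" | grep component_name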
11-18-2019
10:37 AM
@samarsimha Well and good. Are you downloading NiFi 1.10.0 from https://nifi.apache.org/download.html? It was released just last week, on November 4th, 2019, and according to the documentation you MUST upgrade to ZooKeeper 3.5.5, which is the recommended version. I am sure the problem will go away once you align your ZooKeeper version with the documentation. The migration in NIFI-6624 is automatic, but NIFI-6578 must be handled manually:

[NIFI-6624] - Automatically migrate old embedded zookeeper.properties files to the new format
[NIFI-6578] - Upgrade zookeeper to 3.5.5

Hope that helps
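If you are running an external ZooKeeper, a hedged way to confirm which version each quorum member is actually serving is the srvr four-letter command (zk01 and port 2181 are placeholders for your own hosts; after the upgrade you should see 3.5.5 in the output):

# print the version line from one ZooKeeper server; repeat for each quorum member
echo srvr | nc zk01 2181 | grep -i version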
11-17-2019
12:04 PM
@samarsimha The simplest answer I can give you is compatibility! HDF/HDP/HCP are packaged distributions that have undergone rigorous testing and are supposed to work as a whole, which explains why you cannot upgrade ONLY one component; minor and major releases go through a battery of tests and QA to ensure all the components work together properly. In your case, Java 7 and 8 are to date the recommended versions, the latter being preferred. From the support matrix at https://supportmatrix.hortonworks.com/ I filtered on the latest version of HDF, and no HDP/HDF product has been certified against Java 11 as of now. That could change in the near future, but it definitely won't happen with your current version of NiFi, so the above could explain your dilemma. Do you have a special use case that requires Java 11? I am not sure whether Cloudera offers a workaround, but if you bought support from Cloudera you can open a ticket. Happy hadooping
11-16-2019
01:51 PM
@mokkan Yes, keeping multiple copies is a good backup strategy as long as the mount points are on physically different disks that don't share disk controllers. Please, can you share feedback on the outcome of the earlier procedure? I have not tried adding an additional fsimage and edits location after the creation of the cluster, so I am wondering whether you could start up the NameNode, unless you formatted it, which is a different story altogether. Happy hadooping
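For reference, the NameNode writes a full copy of the fsimage and edits into every directory listed in dfs.namenode.name.dir, so a quick hedged check of what is currently configured looks like this (the second path is a hypothetical example of an extra mount point):

# show the metadata directories in use; a comma-separated list means one redundant copy per directory
hdfs getconf -confKey dfs.namenode.name.dir
# e.g. /hadoop/hdfs/namenode,/mnt/disk2/hadoop/hdfs/namenode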
11-15-2019
10:50 PM
1 Kudo
@mokkan You are not far from the truth!!! The NameNode holds the metadata of the HDFS files, i.e. permissions, ownership, block locations, etc. This metadata is kept in serialized form in a single file (fsimage), plus an edits file that logs all changes made to the file system. The fsimage is kept both on disk and in memory: all changes to the file system are applied in memory and periodically checkpointed to disk. Details on how to fetch the fsimage and edits files are given here: HDFS File System Metadata Backup.

If you format the NameNode, the basic information about ownership, permissions, and location is deleted from the NameNode directory specified in hdfs-site.xml as dfs.namenode.name.dir. The NameNode metadata will be gone, but your data on the DataNodes stays intact; formatting a NameNode does not format the DataNodes. On the other hand, the NameNode will no longer receive heartbeats from the DataNodes, nor know where your data is, because -format assigns a new namespaceID to the NameNode. You will need to change the namespaceID on your DataNodes to make them register again. On the NameNode side the current ID is at /hadoop/hdfs/namenode/current:

[root@nanyuki current]# cat VERSION
#Fri Nov 15 21:29:31 CET 2019
namespaceID=107632589
clusterID=CID-72e79d8b-ea16-4d5c-9920-6b579e5c26b0
cTime=0
storageType=NAME_NODE
blockpoolID=BP-2067995211-192.168.0.101-1537740712051
layoutVersion=-63

Once the new namespaceID has been updated on all the DataNodes, the NameNode will start receiving heartbeats from them again; each DataNode reports the blocks it holds, and that is the information the NameNode uses to rebuild the block location part of its metadata. HTH
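A minimal sketch of the DataNode side of that fix, assuming the HDP default data directory /hadoop/hdfs/data (check your dfs.datanode.data.dir, it may point elsewhere) and the namespaceID shown above; back up the VERSION file before touching it:

# on each DataNode: stop the DataNode, align the namespaceID with the NameNode's, then start it again
cp /hadoop/hdfs/data/current/VERSION /hadoop/hdfs/data/current/VERSION.bak
sed -i 's/^namespaceID=.*/namespaceID=107632589/' /hadoop/hdfs/data/current/VERSION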
11-14-2019
12:10 PM
@deekshant Can you share these 2 files? The .out file logs the start process, and the .log is the most interesting:

/var/log/hadoop/hdfs/hadoop-hdfs-namenode-<host>.log
/var/log/hadoop/hdfs/hadoop-hdfs-namenode-<host>.out

Please revert
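If the files are too large to attach, a hedged way to pull out just the interesting part (keep <host> as a placeholder for your NameNode's hostname):

# show the most recent ERROR/FATAL lines, which usually contain the reason the NameNode refuses to start
grep -iE "ERROR|FATAL" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-<host>.log | tail -n 50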
11-14-2019
11:28 AM
@fgarcia If you posted something in a thread, you and the Admin are the only ones who can delete it. Have a look at the attached screenshot: as the author I have the option to edit, so you can simply delete everything and save a blank response; there is no delete option except for the Admin. Hope that helps, please revert
11-14-2019
10:09 AM
1 Kudo
@mike_bronson7 In an HA cluster, the Standby and Active NameNodes have shared storage managed by the JournalNode service. HA relies on a failover scenario to swap from the Standby to the Active NameNode, and like any other HA mechanism in Hadoop it uses ZooKeeper. So first things first, your 3 ZooKeepers MUST be online to avoid a split-brain decision. Below are the steps to follow.

On the Active NameNode, run cat against last-promised-epoch, which sits in the same directory as edits_inprogress_000...:

# cat last-promised-epoch
31 [example output]

On the Standby NameNode:

# cat last-promised-epoch
23 [example output]

From the above you will see that the standby was lagging when the power went off. In your case you should overwrite the lagging one on the standby after backing it up, as you already did, hoping the NameNode has not been put back online; if it has, take a fresh backup before you proceed.

SOLUTION: fix the corrupted JournalNode's edits. Instructions to fix that one JournalNode:

1) Put both NameNodes in safe mode (NN HA)

$ hdfs dfsadmin -safemode enter
-------output------
Safe mode is ON in Namenode1:8020
Safe mode is ON in Namenode2:8020

2) Save the namespace

$ hdfs dfsadmin -saveNamespace
-------output------
Save namespace successful for Namenode1:8020
Save namespace successful for Namenode2:8020

3) Zip/tar the journal dir from a working JournalNode and copy it to the failed JournalNode in the same path, making sure the file permissions are correct (a sketch of this copy follows below):

/hadoop/hdfs/journal/<cluster_name>/current

4) Restart HDFS. In your case you can start only one NameNode first; it will automatically be designated the active NameNode. Once it is up and running, the NameNode failover should occur transparently and the alerts should gradually disappear.

Stop and restart the JournalNodes. This will trigger the syncing of the JournalNodes; if you wait for a while you should see your NameNodes up and running, all "green":

# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh stop journalnode"
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh start journalnode"

Start the standby NameNode. After a while, things should be in order. Please let me know
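A hedged sketch of step 3, assuming the HDP default journal path and a hypothetical host name failed-jn for the broken JournalNode; adapt <cluster_name>, host names, and ownership to your environment:

# on the healthy JournalNode: archive the current edits directory and ship it over
tar -czf /tmp/jn-current.tgz -C /hadoop/hdfs/journal/<cluster_name> current
scp /tmp/jn-current.tgz failed-jn:/tmp/
# on the failed JournalNode: restore it into the same path and fix ownership
tar -xzf /tmp/jn-current.tgz -C /hadoop/hdfs/journal/<cluster_name>
chown -R hdfs:hadoop /hadoop/hdfs/journal/<cluster_name>/current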
11-13-2019
01:59 PM
@mike_bronson7 Guess what, the copies on the working node should do! Remember, the files in both nodes' directories are identical in an HA setup 🙂 Cheers
11-13-2019
01:23 PM
@mike_bronson7 Yeah, but once you bootstrap, the ZooKeeper election will kick in and one will become the active NameNode. It's late here and I need to document the process; I have uploaded it once in HCC, but I need to redact some information, so I could do that tomorrow. Meanwhile, can you back up the following directory on both the dead and the working NameNode: zip all its content and copy it to some safe location.

/hadoop/hdfs/journal/<Cluster_name>/current

Please revert
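A hedged sketch of that backup, to run on each of the two nodes (the destination /root/jn-backup is an arbitrary example; replace <Cluster_name> with your actual cluster name):

# archive the journal edits directory to a safe location, one archive per host
mkdir -p /root/jn-backup
tar -czf /root/jn-backup/journal-current-$(hostname -s).tgz -C /hadoop/hdfs/journal/<Cluster_name> current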