Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1001 | 06-04-2025 11:36 PM |
| | 1568 | 03-23-2025 05:23 AM |
| | 784 | 03-17-2025 10:18 AM |
| | 2821 | 03-05-2025 01:34 PM |
| | 1862 | 03-03-2025 01:09 PM |
11-18-2019
12:34 PM
@svasi Read through the documentation in that link and let me know!
11-18-2019
12:33 PM
@Kou_Bou That shouldn't be a problem, but before getting into diagnostics, can you confirm you have diligently followed this Prepare the Environment guide? Newbies often forget that every step is important, and that's why things look complicated 🙂

Having said that, is it a single-node or multi-node cluster? In the logs I see something like host=ambari.server. I hope that is a pseudonymized value; otherwise your Ambari host should have an FQDN matching the output of the Linux command:

$ hostname -f

I also see this error:

NetUtil.py:89 - SSLError: Failed to connect.

That is due to the Python version. To resolve it, set verify=disable by editing the /etc/python/cert-verification.cfg file, changing

[https]
verify=platform_default

to

[https]
verify=disable

Can you also share these files?

/etc/ambari-server/conf/ambari.properties
/etc/ambari-agent/conf/ambari-agent.ini
/etc/hosts

Please revert
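That edit can also be scripted. The sketch below works on a scratch copy under /tmp so it is safe to run anywhere; on a real agent host you would point CFG at /etc/python/cert-verification.cfg (and keep the .bak backup it creates).

```shell
# Sketch of the verify=disable change described above, run against a
# scratch copy so nothing real is modified. On an agent host, set
# CFG=/etc/python/cert-verification.cfg instead.
CFG=/tmp/cert-verification.cfg
cat > "$CFG" <<'EOF'
[https]
verify=platform_default
EOF

cp "$CFG" "$CFG.bak"                                  # back up first
sed -i 's/^verify=platform_default$/verify=disable/' "$CFG"
grep '^verify=' "$CFG"                                # prints: verify=disable
```

After the change, restart the Ambari agent so the new setting is picked up.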
11-18-2019
11:57 AM
@divya_thaore The default location is /var/log/hadoop/hdfs, and there you will find:

hadoop-hdfs-datanode-<FQDN>.log
hadoop-hdfs-datanode-<FQDN>.out
hadoop-hdfs-secondarynamenode-<FQDN>.log
hadoop-hdfs-secondarynamenode-<FQDN>.out

It's also important to check the host's system messages in /var/log/messages. Those files should contain the clue: the .out files are mostly informative, so look at the .log files, most probably the last few entries.
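To surface the interesting entries quickly, something like the following helps. It is only a sketch: it builds a throwaway sample log under /tmp so it is runnable anywhere; on a real node you would set LOG to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname -f).log instead.

```shell
# Pull the last ERROR/FATAL entries out of an HDFS daemon log.
# Demonstrated on a throwaway sample file; on a real DataNode use
# LOG=/var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname -f).log
LOG=/tmp/sample-datanode.log
cat > "$LOG" <<'EOF'
2019-11-18 11:50:01 INFO  DataNode: heartbeat sent
2019-11-18 11:55:12 ERROR DataNode: failed to connect to NameNode
2019-11-18 11:55:13 FATAL DataNode: shutting down
EOF

# The .out files are mostly startup chatter; the .log carries the real errors.
tail -n 200 "$LOG" | grep -E 'ERROR|FATAL'
```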
11-18-2019
11:21 AM
@mike_bronson7 It looks like you forgot the host_components part of the path. Assuming the Ambari server is running on node02 and the Thrift Server is on node01 and node02:

Delete on node01

curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"

Delete on node02

curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node02/host_components/SPARK2_THRIFTSERVER"

HTH
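With more hosts, the same DELETE call can be generated in a loop. The sketch below is a dry run: it only prints the curl commands so the URLs can be eyeballed before anything is actually deleted (AMBARI_HOST, the HDP cluster name, and the admin:admin credentials are the placeholders from the example above).

```shell
# Dry run: print (do not execute) the Ambari REST DELETE for the
# SPARK2_THRIFTSERVER host_component on each host. Remove the leading
# 'echo' once the printed URLs look right.
AMBARI_HOST=node02
CLUSTER=HDP
for h in node01 node02; do
  echo curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
    "http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/hosts/$h/host_components/SPARK2_THRIFTSERVER"
done
```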
11-18-2019
10:37 AM
@samarsimha Oh, well and good. Are you downloading NiFi 1.10.0 from https://nifi.apache.org/download.html, released just last week (November 4th, 2019)? According to the documentation you MUST upgrade to ZooKeeper 3.5.5, which is the recommended version. I am confident it will work once you align your ZooKeeper version with the documentation. Task NIFI-6624 is automatic, but NIFI-6578 must be done manually:

[NIFI-6624] - Automatically migrate old embedded zookeeper.properties files to the new format
[NIFI-6578] - Upgrade zookeeper to 3.5.5

Hope that helps
11-17-2019
12:04 PM
@samarsimha The simplest answer I can give you is compatibility! HDF/HDP/HCP are packaged applications that have undergone rigorous unit tests and are supposed to work as a whole; that explains why you cannot upgrade ONLY one component. Minor and major release upgrades go through a battery of tests and QA to ensure all the components work properly together. In your case, Java 7 and 8 are to date the recommended versions, the latter being preferred. From the support matrix at https://supportmatrix.hortonworks.com/ I filtered on the latest version of HDF: no HDP/HDF product has been certified against Java 11 as of now. That could change in the near future, but it definitely won't happen for your current version of NiFi. So the above could explain your dilemma. Do you have a special use case that requires Java 11? I am not sure whether Cloudera offers a workaround, but if you bought support from Cloudera you can open a ticket. Happy hadooping
11-16-2019
01:51 PM
@mokkan Yes, having multiple copies is a good backup strategy, so long as the mount points are physically different disks that don't share disk controllers. Please, can you share feedback on the outcome of the earlier procedure? I have not tried adding additional fsimage and edits locations after the creation of the cluster, so I am wondering whether you could start up the NameNode, unless you formatted it, which is a different story altogether. Happy hadooping
11-15-2019
10:50 PM
1 Kudo
@mokkan You are not far from the truth! The NameNode contains the metadata of the HDFS files, i.e. permissions, locations, etc. This metadata is kept in serialized form in a single file (the fsimage), plus an edits file that logs all changes made to the file system. The fsimage is kept both on disk and in memory; all changes to the file system are reflected in memory and periodically transferred to disk. Details on how to fetch the fsimage and edits files are given here: HDFS File System Metadata Backup.

If you format the NameNode, the basic information about ownership, permissions and location is deleted from the NameNode directory, which is specified in hdfs-site.xml as dfs.namenode.name.dir. The NameNode metadata will be gone, but your data on the DataNodes stays intact: formatting a NameNode does not format the DataNodes. On the other hand, the NameNode will no longer accept heartbeats from the DataNodes, nor know where your data is, because -format assigns a new namespaceID to the NameNode. You will need to change the namespaceID on your DataNodes to make them work again. You will find it in /hadoop/hdfs/namenode/current:

[root@nanyuki current]# cat VERSION
#Fri Nov 15 21:29:31 CET 2019
namespaceID=107632589
clusterID=CID-72e79d8b-ea16-4d5c-9920-6b579e5c26b0
cTime=0
storageType=NAME_NODE
blockpoolID=BP-2067995211-192.168.0.101-1537740712051
layoutVersion=-63

Once the new namespaceID has been updated on all the DataNodes, the NameNode will start receiving heartbeats from them again; during the heartbeat each DataNode reports the blocks it has, and that is the information the NameNode uses to rebuild its metadata. HTH
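The namespaceID fix can be sketched as a small shell routine. To keep it runnable anywhere, the example works on throwaway copies under /tmp; on real hosts the files would be the VERSION files under the dfs.namenode.name.dir/current and dfs.datanode.data.dir/current directories, HDFS should be stopped first, and both files backed up.

```shell
# Copy the NameNode's namespaceID into a DataNode VERSION file.
# Throwaway sample files stand in for the real .../current/VERSION paths.
NN_VERSION=/tmp/nn-VERSION
DN_VERSION=/tmp/dn-VERSION
printf 'namespaceID=107632589\nstorageType=NAME_NODE\n' > "$NN_VERSION"
printf 'namespaceID=999999999\nstorageType=DATA_NODE\n' > "$DN_VERSION"

# Read the authoritative ID from the NameNode side...
NSID=$(awk -F= '/^namespaceID=/{print $2}' "$NN_VERSION")
# ...and write it into the DataNode's VERSION file.
sed -i "s/^namespaceID=.*/namespaceID=$NSID/" "$DN_VERSION"
grep '^namespaceID=' "$DN_VERSION"    # now matches the NameNode
```

After updating every DataNode, restart HDFS and watch the NameNode log for incoming block reports.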
11-14-2019
12:10 PM
@deekshant Can you share these 2 files? The .out file logs the start process, and the .log is the most interesting:

/var/log/hadoop/hdfs/hadoop-hdfs-namenode-<host>.log
/var/log/hadoop/hdfs/hadoop-hdfs-namenode-<host>.out

Please revert
11-14-2019
11:28 AM
@fgarcia If you posted something in a thread, you and the Admin are the only ones who can delete it. Have a look at the attached screenshot: as the author I have the option to edit, so you can simply delete everything and save a blank response. There is no delete option except for the Admin. Hope that helps, please revert