Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2721 | 04-27-2020 03:48 AM |
| | 5281 | 04-26-2020 06:18 PM |
| | 4446 | 04-26-2020 06:05 PM |
| | 3570 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
07-06-2017
10:05 AM
@Rachel Rui Liu
Good to know that you are able to run the mpack upgrade now without any issue. If everything is working, could you please mark this thread as answered by clicking on the "Accept" link?
07-06-2017
09:35 AM
I see a similar post here: https://community.hortonworks.com/questions/106214/hdf-cluster-implementation-error.html#answer-111777
07-06-2017
09:33 AM
@Rachel Rui Liu
I see a similar post from you here: https://community.hortonworks.com/questions/110402/issue-when-upgrade-hdf212-to-hdf30.html#answer-111776
07-06-2017
09:32 AM
2 Kudos
@Rachel Rui Liu
You are getting a parsing error:

[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.0.xml; lineNumber: 24; columnNumber: 22; cvc-complex-type.2.4.a: Invalid content was found starting with element 'downgrade-allowed'. One of '{upgrade-path, order}' is expected.]
at javax.xml.bind.helpers.AbstractUnmarshallerImpl.createUnmarshalException(AbstractUnmarshallerImpl.java:335)

To verify the XML "/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.0.xml" or "/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml", please do the following:
1. Open the online XML validation page: http://www.utilities-online.info/xsdvalidation/#.WVxX39OGPx4
2. Paste the content of your XML file there.
3. Ambari validates the above XML file using the XSD found here: https://github.com/apache/ambari/blob/release-2.5.1/ambari-server/src/main/resources/upgrade-pack.xsd#L420-L424
4. Copy that XSD content, paste it into the XSD section of the same page, then click the "Validate XML against XSD" button to see whether your XML is valid.

Example: here I am validating the 'nonrolling-upgrade*.xml' file from "hdf-ambari-mpack-2.1.2.0-10.tar.gz" against the Ambari 2.5 XSD.

This error may occur when you try to apply an old "hdf-ambari-mpack" (I see you are using hdf-ambari-mpack-2.1.2.0-10.tar.gz). Please try the latest mpack, "hdf-ambari-mpack-3.0.0.0-453.tar.gz":
http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.0.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.0.0.0-453.tar.gz
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_release-notes/content/ch_hdf_relnotes.html

Since you are upgrading from HDF 2.1 to HDF 3.0, I suspect you might not have used the correct mpack tar ("hdf-ambari-mpack-3.0.0.0-453.tar.gz"). Please double-check, and compare the XML files on your local filesystem to confirm that "nonrolling-upgrade-2.1.xml" is not left over from the old version.
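If you would rather validate locally instead of using the website, here is a minimal sketch using xmllint (this assumes the libxml2 xmllint utility is installed on the Ambari server host, and the /tmp path is just an example):
######## Fetch the XSD (raw form of the GitHub link above); /tmp/upgrade-pack.xsd is an example path ########
# wget -O /tmp/upgrade-pack.xsd https://raw.githubusercontent.com/apache/ambari/release-2.5.1/ambari-server/src/main/resources/upgrade-pack.xsd
######## Validate the upgrade-pack XML against the XSD; prints "validates" on success or the exact parse error ########
# xmllint --noout --schema /tmp/upgrade-pack.xsd /var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml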
07-06-2017
07:21 AM
1 Kudo
@jack jack
Surely 8/7 is not right. When you deleted a host from the Ambari cluster, the host count became 7, but before deleting the host, did you stop the DataNode running on it? Otherwise, from the NameNode's perspective the number of running DataNodes will still be 8, because that DataNode is still reporting to the NameNode.

Because DataNodes report to the NameNode, you should check the NameNode UI to find out the exact number of DataNodes:
http://$NAMENODE:50070/dfshealth.html#tab-overview
Find the "Live DataNodes" count there. Ambari simply grabs the live DataNode count from the NameNode JMX:
http://$NAMENODE:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState

So please check whether the DataNode process is still running on the deleted host. If yes, then stop it:
# ps -ef | grep DataNode
# echo `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid`
# ps -ef | grep `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid`
######## If the PID is live and running then kill it. ########
# kill -9 `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid`
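If you prefer the command line over the UI, here is a minimal sketch for pulling the live/dead DataNode counts straight from that JMX endpoint (this assumes the default NameNode HTTP port 50070 and that curl is available; NumLiveDataNodes and NumDeadDataNodes are fields of the FSNamesystemState bean):
######## Replace $NAMENODE with your NameNode hostname ########
# curl -s "http://$NAMENODE:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState" | grep -E '"NumLiveDataNodes"|"NumDeadDataNodes"'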
07-06-2017
03:03 AM
@Zhenwei Liu
Wonderful. Good to know that your issue was resolved by disabling the "Snoopy" package. It would be great if you could mark this answer as "Accepted" so that community users who encounter the same crash report can quickly find the correct answer.
07-05-2017
07:46 PM
@Sami Ahmad
In your code I see that you are using port 10001. Is that a typo?

Connection con = DriverManager.getConnection("jdbc:hive2://hadoop2.tolls.dot.state.fl.us:10001/default", "", "");
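To double-check which port HiveServer2 is actually listening on before fixing the code, here is a quick sketch to run on the HiveServer2 host (10000 is the default HiveServer2 binary port, so the beeline URL below assumes that; adjust if your hive.server2.thrift.port setting differs):
######## See which of the two ports has a listener ########
# netstat -tnlp | grep -E ':10000|:10001'
######## Test the connection with beeline on the open port ########
# beeline -u "jdbc:hive2://hadoop2.tolls.dot.state.fl.us:10000/default" -e "show databases;"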
07-05-2017
07:44 PM
@Vaibhav Kumar
Another note from the HDP Sandbox deployment and install guide: https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/1/
Note: Make sure to allocate at least 8 GB (8192 MB) of RAM for the sandbox.
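If the sandbox runs on VirtualBox, here is a minimal sketch for raising the VM memory from the command line (the VM name "Hortonworks Sandbox" is only an assumed example; check the output of the list command for the real name, and power the VM off first):
######## Find the exact VM name ########
# VBoxManage list vms
######## Allocate 8192 MB of RAM (run while the VM is powered off); VM name is an assumed example ########
# VBoxManage modifyvm "Hortonworks Sandbox" --memory 8192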
07-05-2017
07:38 PM
Duplicate thread; answered here: https://community.hortonworks.com/questions/110307/please-help-compile.html#answer-110322
07-05-2017
07:34 PM
@Sami Ahmad
As you are able to telnet to the "hadoop2" host on port 10000, you should alter your code to use "hadoop2" as the hostname. Example:

Connection con = DriverManager.getConnection("jdbc:hive2://hadoop2:10000/default", "", "");

On the HiveServer2 host, please run the following command to find its actual hostname (FQDN):
# hostname -f

The HiveServer2 FQDN should be resolvable from the client machine where you are running the Hive client Java code; otherwise you will have to add an entry to the client's "/etc/hosts" file to make the HiveServer2 FQDN resolvable.
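Here is a minimal sketch of verifying and fixing name resolution on the client machine (the IP 192.168.1.50 is purely a placeholder; substitute the real IP of the HiveServer2 host and the FQDN reported by hostname -f):
######## Check whether the client already resolves the HiveServer2 host ########
# getent hosts hadoop2.tolls.dot.state.fl.us
######## If not, append an entry to /etc/hosts (replace the placeholder IP with the real one) ########
# echo "192.168.1.50   hadoop2.tolls.dot.state.fl.us   hadoop2" >> /etc/hosts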