Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2829 | 04-27-2020 03:48 AM |
| | 5502 | 04-26-2020 06:18 PM |
| | 4682 | 04-26-2020 06:05 PM |
| | 3712 | 04-13-2020 08:53 PM |
| | 5619 | 03-31-2020 02:10 AM |
08-27-2019
08:10 PM
2 Kudos
@LeeFan Usually Ambari uses the following script to run the alert check for the ats-hbase service:

/var/lib/ambari-server/resources/stacks/HDP/3.0/services/YARN/package/alerts/alert_ats_hbase.py (on the Ambari Server)
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/alerts/alert_ats_hbase.py (on Agent hosts)

The script relies on the following command to fetch the status:

# su - yarn-ats -c "/usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase"

However, the attached screenshot shows that the alert was generated 13 hours ago, so it might not reflect the current status of your ats-hbase. Can you please disable the alert and then re-enable it after about 10 seconds to see if the stale alert clears?

Ambari UI --> Alerts (left bottom panel) --> filter (icon) --> "Alert Definition Name" as "ATSv2 HBase Application" (click on "Disable alert", wait about 10 seconds for the old alert to clear, and then enable it back).
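The disable/enable toggle can also be done through Ambari's REST API instead of the UI. The sketch below is a dry run: the hostname, cluster name, and alert-definition id are placeholders (look up the real numeric id under /api/v1/clusters/&lt;cluster&gt;/alert_definitions), and `echo` prints each curl command instead of executing it.

```shell
# Dry-run sketch with assumed names: AMBARI_HOST, CLUSTER and DEF_ID
# are placeholders -- substitute your own values before running.
AMBARI_HOST="ambari.example.com"
CLUSTER="c1"
DEF_ID="101"

toggle_alert() {
  # "echo" prints the command instead of running it; drop it to execute.
  echo curl -u admin -H "X-Requested-By: ambari" -X PUT \
    -d "{\"AlertDefinition\":{\"enabled\":$1}}" \
    "http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/alert_definitions/${DEF_ID}"
}

toggle_alert false   # disable the alert
sleep 10             # give the stale alert time to clear
toggle_alert true    # enable it again
```

Remember to pass the X-Requested-By header; Ambari rejects modifying requests without it.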
08-26-2019
10:57 PM
1 Kudo
@girish_khole It is nearly impossible to tell exactly which changes your application will need when the Kafka version moves from 1.0 to 2.0 (and similarly when Storm moves from 1.1.0 to 1.2.0), because we do not know which APIs your application uses. A jump from Kafka 1.0 to 2.0 is a major version upgrade, and with any such upgrade some methods that were previously deprecated may have been removed, some method signatures may have changed, and new classes and methods are introduced. So it is better to go step by step: upgrade the pom.xml dependencies of your JBoss-deployed application, then check whether it works. If you see any NoSuchMethodError / ClassNotFoundException / etc., fix it by referring to the new APIs available in the upgraded components. If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
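As a concrete example, the Kafka client dependency bump in pom.xml would look roughly like this (the coordinates shown are the stock Apache ones; your build may instead pin HDP-specific versions from the Hortonworks repository):

```xml
<!-- Before (HDP 2.6.5 era) -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>1.0.0</version>
</dependency>

<!-- After (HDP 3.1 era) -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>2.0.0</version>
</dependency>
```

Running `mvn dependency:tree` after the change helps confirm that no transitive dependency is still pulling in the old client jar.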
08-26-2019
09:50 PM
@girish_khole For example, if you are currently using the HDP 2.6.5 Kafka client libraries inside your JBoss-deployed application, you will find that the Kafka version changes when you upgrade to HDP 3:

1. In HDP 2.6.5 the Kafka version is "Apache Kafka 1.0.0" (Apache Storm 1.1.0): https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/comp_versions.html
2. In HDP 3.1, on the other hand, it is "Apache Kafka 2.0.0" (Apache Storm 1.2.1): https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/comp_versions.html

So, depending on the changes introduced in the new Kafka/Storm versions, you might need to adjust your application to use the upgraded Kafka APIs and rebuild your JBoss-deployed application against the latest binaries/jars.
08-26-2019
06:34 PM
@rvillanueva HDF and HDP versions can be different in a cluster; they need not be exactly the same. For example, please refer to https://supportmatrix.hortonworks.com/ and click on "HDP 3.1" (or on the desired HDF version, such as HDF 3.4.1.1); you will then see the compatibility matrix with Ambari + HDF versions.
08-22-2019
04:36 PM
1 Kudo
@maxolasersquad One very basic test to verify whether "ambari-server setup" was performed is to look for JDBC settings. When we simply install the ambari-server binary on a host, the ambari.properties file will not have any JDBC configs, so the output of the following command will be empty.

Example (no output on a server where ambari-server setup was not executed):

# grep 'jdbc' /etc/ambari-server/conf/ambari.properties

However, on a server where setup was executed you will see at least some jdbc settings, such as:

# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
custom.mysql.jdbc.name=mysql-connector-java.jar
custom.oracle.jdbc.name=ojdbc8.jar
previous.custom.mysql.jdbc.name=mysql-jdbc-driver.jar
server.jdbc.connection-pool=internal
server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.postgres.schema=ambari
server.jdbc.user.name=ambari
server.jdbc.user.passwd=${alias=ambari.db.password}
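The grep check can be wrapped in a tiny script. The sketch below runs the same logic against a throwaway temp file so it is easy to try anywhere; on a real host you would point it at /etc/ambari-server/conf/ambari.properties instead.

```shell
# Sketch: the same grep-based check against a sample properties file.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.user.name=ambari
EOF

# Count jdbc lines; 0 means "ambari-server setup" was most likely never run.
JDBC_LINES=$(grep -c 'jdbc' "$PROPS")
if [ "$JDBC_LINES" -gt 0 ]; then
  echo "ambari-server setup appears to have been run"
else
  echo "no jdbc settings found; setup was probably not run"
fi
rm -f "$PROPS"
```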
08-21-2019
11:56 PM
@Manoj690 A better approach is to change the port in the YARN config from 53 to something unused: https://community.cloudera.com/t5/Community-Articles/YARN-REGISTRY-DNS-Port-Conflict-Issue/ta-p/249117
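In yarn-site.xml the property involved is, to my knowledge, hadoop.registry.dns.bind-port (please verify against the linked article for your HDP version). A sketch of moving Registry DNS off the conflicting port:

```xml
<!-- yarn-site.xml sketch: move Registry DNS off the conflicting port 53.
     5353 is only an example; pick any unused, unprivileged port. -->
<property>
  <name>hadoop.registry.dns.bind-port</name>
  <value>5353</value>
</property>
```

After changing the port, restart the YARN Registry DNS component from Ambari so the new setting takes effect.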
08-21-2019
11:53 PM
@Manoj690 Or try changing the YARN service port to something else. # kill -9 636
08-21-2019
11:47 PM
@Manoj690 Regarding "how to kill the process":

1. Find the PID of the process which is using port 53:
# netstat -tnlpa | grep 53
2. Kill that PID using the following command:
# kill -9 $PID

You can find the value of $PID in the output of the command in step 1. If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
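The two steps can be combined into one small helper. This is only a sketch and assumes lsof is installed; the netstat command above finds the PID just as well.

```shell
# Sketch: find whatever is listening on a port and kill it (needs lsof).
kill_port() {
  port="$1"
  # lsof -t prints bare PIDs; empty output means nothing listens there.
  pid=$(lsof -t -i ":${port}" 2>/dev/null | head -n 1)
  if [ -n "$pid" ]; then
    echo "killing $pid on port $port"
    kill -9 "$pid"      # step 2; consider a plain kill (SIGTERM) first
  else
    echo "nothing listening on port $port"
  fi
}

kill_port 53
```

Be careful with kill -9 on shared hosts: on many Linux systems port 53 may be held by a system resolver such as dnsmasq or systemd-resolved rather than a stray process.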
08-21-2019
11:44 PM
@Manoj690 Also, please let us know how the YARN DNS issue is related to the HST Server issue. By any chance, did you post the YARN DNS error in the wrong thread?
08-21-2019
11:43 PM
@Manoj690 Registry DNS uses port 53, which seems to be already in use on your host. Please kill the process that is currently holding port 53 and then retry the operation. The relevant log lines:

Opening TCP and UDP channels on /0.0.0.0 port 53
2019-08-22 12:03:09,775 ERROR dns.PrivilegedRegistryDNSStarter (PrivilegedRegistryDNSStarter.java:init(61)) - Error initializing Registry DNS
java.net.BindException: Problem binding to [gaian-lap386.com:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)