Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2442 | 04-27-2020 03:48 AM |
|  | 4876 | 04-26-2020 06:18 PM |
|  | 3975 | 04-26-2020 06:05 PM |
|  | 3216 | 04-13-2020 08:53 PM |
|  | 4920 | 03-31-2020 02:10 AM |
08-27-2019
08:10 PM
2 Kudos
@LeeFan Usually Ambari uses the following script to run the alert check for the ats-hbase service:
/var/lib/ambari-server/resources/stacks/HDP/3.0/services/YARN/package/alerts/alert_ats_hbase.py (on the Ambari Server)
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/alerts/alert_ats_hbase.py (on the Agent hosts)
The script relies on the following command to fetch the status:
# su - yarn-ats -c "/usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase"
However, your attached screenshot shows that the alert was generated 13 hours earlier, so it might not reflect the current status of your ats-hbase. Can you please try disabling the alert and then re-enabling it after about 10 seconds to see if the stale alert clears?
Ambari UI --> Alerts (from the left bottom panel) --> filter (icon) --> "Alert Definition Name" as "ATSv2 HBase Application" (click on "Disable", wait about 10 seconds for the old alert to clear, and then enable it back).
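If the UI route is inconvenient, the same disable/enable toggle can also be done through Ambari's REST API. This is only a sketch, assuming default admin credentials and port 8080; AMBARI_HOST, CLUSTER_NAME, and <ID> are placeholders you need to replace with your own values. First list the alert definitions to find the ID of "ATSv2 HBase Application":
# curl -u admin:admin -H 'X-Requested-By: ambari' "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/alert_definitions?fields=AlertDefinition/label"   # AMBARI_HOST / CLUSTER_NAME are placeholders
Then disable the definition, and re-enable it the same way with "enabled": true:
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"AlertDefinition":{"enabled":false}}' "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/alert_definitions/<ID>"   # <ID> is the definition id found above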
08-26-2019
10:57 PM
1 Kudo
@girish_khole It is nearly impossible to tell exactly which changes your application will need when the Kafka version moves from 1.0 to 2.0 (and similarly when Storm moves from 1.1.0 to 1.2.0), because we do not know which APIs your application is using. With a major version upgrade like Kafka 1.0 to 2.0, it is possible that some methods previously declared as deprecated have been removed, some method signatures have changed, and some new classes and methods have been introduced. So it is better to go step by step: upgrade the dependencies in the pom.xml of your JBoss-deployed application and then check whether it works. If you see any NoSuchMethodError / ClassNotFoundError etc., fix it accordingly by referring to the new APIs available as part of the upgraded components. If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
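As a quick sanity check after editing the pom.xml, and assuming a Maven-based build, you can confirm which kafka-clients version your application actually resolves:
# mvn dependency:tree -Dincludes=org.apache.kafka:kafka-clients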
08-26-2019
09:50 PM
@girish_khole For example, if you are currently using the HDP 2.6.5 Kafka client libraries inside your JBoss-deployed application, you will find that the Kafka version changes when you upgrade to HDP 3. For example:
1. In HDP 2.6.5 the Kafka version is "Apache Kafka 1.0.0" (Apache Storm 1.1.0): https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/comp_versions.html
2. In HDP 3.1, on the other hand, it is "Apache Kafka 2.0.0" (Apache Storm 1.2.1): https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/comp_versions.html
So, based on the changes introduced between these Kafka/Storm versions, you might need to change your application a bit to make use of the upgraded Kafka version, and rebuild your JBoss-deployed application against the latest binaries/jars.
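To double-check which component versions are actually installed on a given HDP node, hdp-select can be used; the component names below are the usual HDP ones, so adjust them if yours differ:
# hdp-select status kafka-broker
# hdp-select status storm-client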
08-26-2019
06:34 PM
@rvillanueva HDF and HDP versions can be different in a cluster; they need not be exactly the same. For example, please refer to https://supportmatrix.hortonworks.com/ and click on "HDP 3.1" (or on the desired HDF version, like HDF 3.4.1.1); you will then find the compatibility matrix with the supported Ambari + HDF versions.
08-26-2019
02:53 PM
@dtan 1. What happens when you hit the URL using command-line tools like wget/curl? Please run the following command from the host where you are opening the browser (this is to isolate browser-side issues, such as a browser-level network proxy setting):
# curl -iLv http://$NIFI_HOST:9090/nifi/
Can you please share the output of the above command?
2. Please also run the same curl command from one of the cluster nodes to see whether it returns the proper NiFi UI HTML page.
3. What happens when you check the NiFi host/port access from the machine where you are trying to open the NiFi UI in the browser? (This is to isolate a firewall/iptables issue.)
# telnet $NIFI_HOST 9090
(OR)
# nc -v $NIFI_HOST 9090
4. On the NiFi host, can you please confirm that the NiFi process is actually running and the port is listening?
# ps -ef | grep org.apache.nifi.NiFi
# netstat -tnlpa | grep 9090
# cat /var/run/nifi/nifi.status
5. Is the firewall/iptables disabled on the NiFi host? (Just to make sure that the NiFi ports are reachable from outside that host.)
6. Can you please share a screenshot / the output of the NiFi UI in the browser debugger console, so that we can see whether it shows any errors there?
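Additionally, it can help to confirm which host/port NiFi is actually configured to bind to. The path below assumes an HDF-style layout; adjust it to your installation:
# grep 'nifi.web.http' /usr/hdf/current/nifi/conf/nifi.properties   # path is an assumption; your nifi.properties may live elsewhere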
08-26-2019
05:42 AM
@pritam_konar Are you still facing the issue? Please let us know. If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-24-2019
02:07 PM
2 Kudos
@shashank_naresh The following error can occur if the NameNode is not running fine. Highlighted error:
Call From sandbox-hdp.hortonworks.com/172.18.0.3 to sandbox-hdp.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Possible causes:
1). Please verify whether ports 8020 (NameNode RPC) and 50070 (NameNode web UI) are actually listening. If not, we will need to check the NameNode log for errors. We might also need to check the NameNode GC log to see whether it has sufficient memory and whether GC is happening properly:
# netstat -tnlpa | grep 8020
# netstat -tnlpa | grep 50070
2). There might be errors listed in the NameNode logs, which can be found as below. Can you please check and share the log file here?
# ls -l /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox-hdp.hortonworks.com.log
# ls -lart /var/log/hadoop/hdfs/gc.log-201908*
Especially in the case of a Sandbox environment: the Sandbox is a single-node cluster for testing/learning purposes with lots of services running on one host, so sometimes, under heavy load from the other running services, a service like the NameNode does not function properly. So please try this: stop the services you do not currently need in your Sandbox, put them in maintenance mode from the Ambari UI, and start only the services you are currently testing. This frees some memory on the Sandbox host and removes the background processing load from those services, which should improve the situation.
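As a quick way to surface recent problems in the NameNode log (same log path as above), something like the following can be used:
# grep -iE 'error|exception' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox-hdp.hortonworks.com.log | tail -20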
08-22-2019
04:36 PM
1 Kudo
@maxolasersquad One very basic test to verify whether "ambari-server setup" was performed is to look for JDBC settings. When we simply install the ambari-server binary on a host, the ambari.properties file will not have any JDBC configs, so the output of the following will be empty. Example (no output on a server where ambari-server setup was not executed):
# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
However, on a server where setup was executed you will see at least some JDBC settings, like the following:
# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
custom.mysql.jdbc.name=mysql-connector-java.jar
custom.oracle.jdbc.name=ojdbc8.jar
previous.custom.mysql.jdbc.name=mysql-jdbc-driver.jar
server.jdbc.connection-pool=internal
server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.postgres.schema=ambari
server.jdbc.user.name=ambari
server.jdbc.user.passwd=${alias=ambari.db.password}
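Another possible hint, depending on the database options chosen during setup, is whether a stored database password file exists (this file may not be present in every configuration, so treat an empty result as inconclusive):
# ls -l /etc/ambari-server/conf/password.dat   # presence depends on the setup options chosen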
08-22-2019
04:36 PM
@pritam_konar Please make sure that you have a valid Kerberos ticket before running an HDFS command. You can get a valid Kerberos ticket as follows: 1). Get the principal name from the keytab. Example:
# klist -kte /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (des-cbc-md5)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (des3-cbc-sha1)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (arcfour-hmac)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2). Get a valid Kerberos ticket as follows. Please note that the principal name in the following command might be different for your cluster, so change it according to the output you received from the above command.
# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-ker1latest@EXAMPLE.COM
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-ker1latest@EXAMPLE.COM
Valid starting Expires Service principal
08/22/2019 22:47:43 08/23/2019 22:47:43 krbtgt/EXAMPLE.COM@EXAMPLE.COM
3). Now try to run the same HDFS command. This time you should be able to run it successfully:
# hadoop fs -ls /
NOTE: In the above case we are using "/etc/security/keytabs/hdfs.headless.keytab". If you have your own valid keytab that allows you to interact with HDFS, you should use that one; for testing, the hdfs.headless.keytab is fine.
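If a stale or expired ticket is still cached, it can also help to clear it before re-running kinit:
# kdestroy
# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-ker1latest@EXAMPLE.COM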
08-22-2019
02:07 AM
@kiranps11 It looks like you are using a version 8 mysql-connector-java.jar, which changed the driver class name, as per https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-api-changes.html: "The name of the class that implements java.sql.Driver in MySQL Connector/J has changed from com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver. The old class name has been deprecated." So can you please try either of the following approaches and see if it works? Option-1). Use the 5.7 mysql-connector-java.jar with the JDBC driver class name "com.mysql.jdbc.Driver". (OR) Option-2). With the version 8 mysql-connector-java.jar, try using the new driver class name "com.mysql.cj.jdbc.Driver".
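To confirm which driver classes a given connector jar actually ships, you can list its contents; the jar path below is just an example, so adjust it to where your connector is installed:
# unzip -l /usr/share/java/mysql-connector-java.jar | grep -i 'jdbc/Driver'   # path is an example, not a fixed location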