
Error loading DBConnectionVerification.jar

Master Collaborator

I know I have a proxy issue somewhere, but in order to find it I need to understand the flow. How does this JAR file get downloaded to a node from the Ambari server? The log for testing the "hive" module from the Ambari console gives no detail beyond the fact that it couldn't download this JAR file.

It's some kind of localized proxy issue, since other operations have worked fine so far.

MySQL connection error:

2017-06-16 21:27:10,803 - Error downloading DBConnectionVerification.jar from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 504: Gateway Timeout
1 ACCEPTED SOLUTION

Super Mentor

@Sami Ahmad - Let's try to understand what is happening behind the error:

2017-06-16 21:27:10,803 - Error downloading DBConnectionVerification.jar
 from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 504: Gateway Timeout

- Ambari uses the following JAR "/var/lib/ambari-server/resources/DBConnectionVerification.jar" to perform various DB connectivity tests.

- This JAR needs to be present on all the Ambari Agent hosts. So the Ambari Agents use the following URL to download the JAR from the Ambari server host and then place it at "/var/lib/ambari-agent/tmp/DBConnectionVerification.jar".

http://$AMBARI_SERVERHOST:8080/resources/DBConnectionVerification.jar

- Now, if for some reason the agents are not able to download this JAR from the Ambari server host over HTTP, then the clients (agents) will fail to perform the DB connection check.

- In your case, the agents are not able to download this JAR from the Ambari server host because of the following error:

HTTP Error 504: Gateway Timeout

- This indicates that, from the agent machines, access to the following URL is failing (wget is used here just for a demo; the agents use a Python-based download instead):

wget http://$AMBARI_SERVERHOST:8080/resources/DBConnectionVerification.jar

What is the root cause? Copying this JAR manually from the Ambari server host to the agent machines can make it work temporarily, but it will not fix the underlying issue. We should find out why the agents are not able to download this JAR from the Ambari server host; agents should be able to access the Ambari server resources over HTTP.
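As a quick diagnostic (a minimal sketch, not part of the original answer), run the following on an agent host to see which proxy variables are in effect and whether the exact download works:

# show any proxy variables exported in the agent's environment
env | grep -i proxy

# retry the exact download the agent performs (same URL as above)
wget -O /tmp/DBConnectionVerification.jar http://$AMBARI_SERVERHOST:8080/resources/DBConnectionVerification.jar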

.


13 REPLIES

Master Collaborator

The Ambari server log, on the other hand, shows a very different error when I hit the "Test connection" button:

16 Jun 2017 21:35:22,149  WARN [ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster byt ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
16 Jun 2017 21:35:23,153  WARN [ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster byt ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
16 Jun 2017 21:35:24,156  WARN [ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster byt ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
16 Jun 2017 21:35:25,158  WARN [ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster byt ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
16 Jun 2017 21:35:26,161  WARN [ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster byt ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1

Master Collaborator

Looking at similar issues online, I have already checked that:

1- the ntpd service is running on all nodes.

2- the Ambari agent and Ambari server are the same version on all nodes:

[root@hadoop1 ambari-server]# rpm -qa | grep ambari
ambari-metrics-grafana-2.4.3.0-30.x86_64
ambari-agent-2.4.3.0-30.x86_64
ambari-server-2.4.3.0-30.x86_64

Expert Contributor

Please try the following; it may be helpful.

1) Download the MySQL connector JAR and keep it at /usr/share/java/mysql-connector-java.jar.

2) The MySQL client should be installed on the Ambari server host; if MySQL itself is installed there, this is not required.

3) Verify that the MySQL connection works (username and password included) from the Ambari server host, as checked below.

Cross-checking whether the MySQL connection works on the Ambari server:

[root@]# /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord hive
Metastore connection URL:        jdbc:mysql://centos2.test.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       hive
Starting metastore schema initialization to 0.13.0
Initialization script hive-schema-0.13.0.mysql.sql
Initialization script completed
schemaTool completed
[root@centos ~]#
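As a hedged aside (not part of the original reply): once the connector JAR from step 1 is in place, it is typically registered with Ambari so the server can distribute it to the agents:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar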

Master Collaborator

My Hive environment is on hadoop2 and the Ambari server is on hadoop1. Do you want me to move mysqld to hadoop1? In that case, I think schematool will fail when run on hadoop2.

I don't think I am understanding; please clarify.


Super Mentor

@Sami Ahmad

Yum uses the proxy settings from the "/etc/yum.conf" file by default. That might not be true for other utilities.

Also, please check that the "~/.bash_profile" and "~/.profile" scripts define the proxy settings so that they apply globally, for example:

export http_proxy=http://dotatofwproxy.tolls.dot.state.fl.us:8080
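For comparison, the yum-specific proxy setting mentioned above lives in /etc/yum.conf; a sketch using the proxy host from this thread:

# /etc/yum.conf
proxy=http://dotatofwproxy.tolls.dot.state.fl.us:8080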

.

As another check, please validate whether the proxy host and port you mentioned are working correctly by using plain "wget", as follows:

wget  -e use_proxy=yes -e http_proxy=http://dotatofwproxy.tolls.dot.state.fl.us:8080   http://$AMBARI_SERVERHOST:8080/resources/DBConnectionVerification.jar

Master Collaborator

Hi Jay, I found the issue I am facing, but I don't know why I am having it this time and not the first time I installed this cluster.

Here is the issue: wget should not be using the proxy server to download the DBConnectionVerification.jar file, because it is a local server, and for this I have defined "no_proxy" settings in the /etc/environment file as follows:

[root@hadoop1 ~]# cat /etc/environment
http_proxy="http://dotatofwproxy.tolls.dot.state.fl.us:8080/"
https_proxy="https://dotatofwproxy.tolls.dot.state.fl.us:8080/"
ftp_proxy="ftp://dotatofwproxy.tolls.dot.state.fl.us:8080/"
no_proxy=".tolls.dot.state.fl.us,.to.dot.state.fl.us,10.0.0.0/8"

So why is wget not picking up the no_proxy settings?
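A quick way to confirm which proxy variables the current shell actually exports (a minimal check, added for illustration; /etc/environment is read by PAM at login):

env | grep -i proxy

Note that wget matches no_proxy by domain suffix, so a CIDR range such as 10.0.0.0/8 is generally not honored there.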

Master Collaborator

Thinking back, I believe I installed the cluster the first time entirely from local repositories; in that case I would not have needed proxy access, right?

This time I installed Ambari using a local repository and HDP using the public repository.

Super Mentor

@Sami Ahmad

Yes, you should set the no_proxy variable, which should contain a comma-separated list of domain extensions the proxy should _not_ be used for.

Also, on the Ambari side you should make sure to set "-Dhttp.proxyHost" and "-Dhttp.proxyPort". To prevent some host names from going through the proxy server, define the list of excluded hosts as follows:

-Dhttp.nonProxyHosts=<pipe|separated|list|of|hosts>

Please see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.0/bk_ambari-reference/content/ch_setting_up_a...
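Per the linked documentation, these JVM options go into the Ambari server's environment script; a sketch (the exact host list here is an assumption based on this thread):

# /var/lib/ambari-server/ambari-env.sh -- append to the existing AMBARI_JVM_ARGS, then restart ambari-server
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=dotatofwproxy.tolls.dot.state.fl.us -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=hadoop1|hadoop2|hadoop3|hadoop4|hadoop5|localhost"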

.

Master Collaborator

hi Jay

But as I have shown, the "no_proxy" variable is defined in my /etc/environment file, yet wget is not using it.

I also tried adding the no_proxy variable to the wgetrc file, but that didn't help:

[root@hadoop1 etc]# grep proxy /etc/wgetrc
#https_proxy = http://proxy.yoyodyne.com:18023/
#http_proxy = http://proxy.yoyodyne.com:18023/
#ftp_proxy = http://proxy.yoyodyne.com:18023/
https_proxy = http://dotatofwproxy.tolls.dot.state.fl.us:8080/
http_proxy = http://dotatofwproxy.tolls.dot.state.fl.us:8080/
no_proxy=".tolls.do.state.fl.us,hadoop1.tolls.dot.state.fl.us,hadoop1"

So how can I get wget to use the no_proxy variable?

Super Mentor

@Sami Ahmad

Try using the file "$HOME/.wgetrc".

The wget initialization file can reside in the following locations:

1. "/usr/local/etc/wgetrc" (global, for all users), or
2. "$HOME/.wgetrc" (for a single user).

You can try defining the no_proxy entry inside the "$HOME/.wgetrc" file; per the wget manual, no_proxy is used as "the comma-separated list of domains to avoid in proxy loading, instead of the one specified in environment."

Please see:

[0] https://www.gnu.org/software/wget/manual/html_node/Sample-Wgetrc.html

[1] https://www.gnu.org/software/wget/manual/html_node/Wgetrc-Location.html

[2] https://www.gnu.org/software/wget/manual/html_node/Proxies.html
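For reference, a minimal "$HOME/.wgetrc" along those lines might look like this (a sketch; note that the manual's sample wgetrc writes values without quotes):

http_proxy = http://dotatofwproxy.tolls.dot.state.fl.us:8080/
https_proxy = http://dotatofwproxy.tolls.dot.state.fl.us:8080/
no_proxy = .tolls.dot.state.fl.us,hadoop1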

.

Master Collaborator

I put a .wgetrc file in root's login folder, but wget is still trying to access the proxy; it is not taking the "no_proxy" variable setting.

[root@hadoop1 ~]# pwd
/root
[root@hadoop1 ~]# grep proxy .wgetrc
https_proxy = http://dotatofwproxy.tolls.dot.state.fl.us:8080/
http_proxy = http://dotatofwproxy.tolls.dot.state.fl.us:8080/
no_proxy=".tolls.do.state.fl.us,hadoop1.tolls.dot.state.fl.us,hadoop1"
# If you do not want to use proxy at all, set this to off.
#use_proxy = on

Master Collaborator

Found the solution. All the information on the web about no_proxy settings for CentOS is incorrect. What worked for me was removing the ~/.wgetrc file and putting the following file in place.

The issue is that wget is not taking the no_proxy settings from the .wgetrc file, but if I define them at the system level, it picks them up.

/etc/profile.d/proxy.sh

export http_proxy="http://dotatofwproxy.tolls.dot.state.fl.us:8080/"
export https_proxy="https://dotatofwproxy.tolls.dot.state.fl.us:8080/"
export ftp_proxy="ftp://dotatofwproxy.tolls.dot.state.fl.us:8080/"
export no_proxy=".tolls.dot.state.fl.us,hadoop1,hadoop2,hadoop3,hadoop4,hadoop5"
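To verify (an added sketch, not part of the original post), load the script in a fresh shell and retry the download that originally failed:

source /etc/profile.d/proxy.sh
echo "$no_proxy"    # should list the excluded domains and hosts
wget http://$AMBARI_SERVERHOST:8080/resources/DBConnectionVerification.jar   # should now bypass the proxy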