
Having an issue while setting up a cluster using Ambari

Contributor

Hi all,

I am setting up a cluster using Ambari, and one of my nodes is failing on the "mysql-connector-java" installation.

I am using a locally set up repo for this purpose. While searching the repo data, I do not find "repodata/e743ed2f249f76a3c4b3ac75c8ee3c4fb28a61a2-primary.sqlite.bz2" in the repo. I do not even find this file in the online repo.

================ Error details =============

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper result = _call(command, **kwargs_copy) File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call raise ExecutionFailed(err_msg, code, out, err) resource_management.core.exceptions.ExecutionFailed:

Execution of '/usr/bin/yum -d 0 -e 0 -y install mysql-connector-java' returned 1. Error: failure: repodata/e743ed2f249f76a3c4b3ac75c8ee3c4fb28a61a2-primary.sqlite.bz2 from Delivery-Sysadm-nodist-nover-noarch: [Errno 256] No more mirrors to try. stdout:
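A common cause of this error with hand-built local repos is stale metadata: repomd.xml still points at an old primary.sqlite.bz2 that no longer exists on disk. As a rough sketch only, assuming the createrepo package is installed on the repo server and the repo is served from /var/www/html/localrepo (a placeholder path, adjust to your layout), the metadata can be regenerated and the client cache refreshed like this:

# createrepo --update /var/www/html/localrepo
# yum clean all

After that, the repodata directory on the repo server should contain exactly the files referenced in repomd.xml.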


4 REPLIES

Master Mentor

@Kishore Kumar

Are you able to manually run the following command on the problematic host? This is just to confirm that you do not have a repo availability issue.

# yum install mysql-connector-java


Can you also try performing a yum cache cleanup and then try again?

# yum clean dbcache metadata
OR
# yum clean all
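
To rule out a problem with the repo definition itself, it can also help to locate the failing repo id from the error (Delivery-Sysadm-nodist-nover-noarch) and check that its metadata is actually reachable. A rough sketch, where <local-repo-host>/<repo-path> stands for the baseurl found in your .repo file (placeholders, not values from this thread):

# grep -rl "Delivery-Sysadm" /etc/yum.repos.d/
# grep "baseurl" /etc/yum.repos.d/<file-from-previous-step>.repo
# curl -I http://<local-repo-host>/<repo-path>/repodata/repomd.xml

A 200 response for repomd.xml (and for the primary.sqlite.bz2 it lists) confirms the repo is serving complete metadata.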


I am able to find the "mysql-connector-java" package in the following repos:

# cat /etc/redhat-release 
CentOS Linux release 7.0.1406 (Core) 

# yum whatprovides mysql-connector-java
Loading mirror speeds from cached hostfile
mysql-connector-java-5.1.29-1.noarch : MySQL Connector/J - JDBC driver for MySQL
Repo        : HDP-UTILS-1.1.0.21

mysql-connector-java-5.1.37-1.noarch : MySQL Connector/J - JDBC driver for MySQL
Repo        : HDP-UTILS-1.1.0.21

1:mysql-connector-java-5.1.25-3.el7.noarch : Official JDBC driver for MySQL
Repo        : base

1:mysql-connector-java-5.1.25-3.el7.noarch : Official JDBC driver for MySQL
Repo        : @base
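
Since the package is available from the HDP-UTILS and base repos, another quick test (a sketch only, and only if the Delivery-Sysadm repo is not required for this install) is to temporarily skip the failing repo:

# yum --disablerepo="Delivery-Sysadm*" install mysql-connector-java

If that succeeds, the problem is isolated to the metadata of that one repo rather than to yum or the package itself.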


Expert Contributor

@Kishore Kumar

Before installing, please check the points below; they may help in resolving your issue.

1. Check the network connectivity with your local Repo server.

2. Check whether your repo files are present under the /etc/yum.repos.d directory (a sample .repo file is sketched after this list).

3. Remove the old yum cache from the system by running the following commands.

# rm -fr /var/cache/yum/* 
# yum clean all 

4. Check if you can list the valid repositories.

# yum repolist 

5. Check the package availability through yum.

# yum list | grep mysql-connector-java 
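
For step 2 above, a local repo definition under /etc/yum.repos.d typically looks like the following sketch; the repo id and baseurl here are placeholders, not values taken from this thread:

[local-hdp-utils]
name=Local HDP-UTILS mirror
baseurl=http://<local-repo-host>/hdp-utils/
enabled=1
gpgcheck=0

The baseurl must point at the directory that contains the repodata folder.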

Contributor

Hi Team,

It works after doing "yum clean all". However, it now leads to a new issue.

I have 4 nodes in my cluster; before this error, 3 of them were done, with some warnings.

Now when I clicked "Retry" to fix this mysql issue, it tried to reinstall all 4 nodes.

With that, one node ran out of disk space on "/usr". I have a constraint on extending the space on that mount.

I am not sure why "Retry" is copying the same files again instead of keeping the older ones and copying only the new files.

Can you please help me with how to handle this for the node that is giving the space issue for the "Metric Collector"?
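
(As a quick check of what is actually filling /usr on that node, something along these lines may help; the paths below are the usual HDP and Ambari install locations and are assumptions that may differ on your system:)

# df -h /usr
# du -xsh /usr/hdp/* /usr/lib/ambari-* 2>/dev/null | sort -h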

@Rajendra Manjunath

Best regards

~Kishore

Master Mentor

@Kishore Kumar



Good to know that after running the following command the previous issue related to the "mysql-connector-java" installation is gone:

yum clean all

- The issue you reported in your latest response (the space issue for the "Metric Collector") is different from the issue originally reported in this thread, so it is better to look into this new issue as part of a new HCC thread.