
Oozie conflicts with existing Tomcat installation

Expert Contributor

I have lost track of the number of unsuccessful attempts at deployment through the console. All prerequisites are fine: SELinux disabled, THP disabled, NTP in sync, passwordless SSH from the master node (Ambari server) to the data nodes, and so on.

  1. After the hosts were successfully registered, I once again got a Java process warning on the master node:

    Process Issues (1) The following process should not be running /usr/lib/jvm/jre/bin/java -classpath

  2. Warnings on the namenode, end to end, though no warning messages were shown.
  3. Failures on the datanode, right from the DataNode install. Help please!

    Thanks

  1. resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.4 | tail -1`' returned 1. Traceback (most recent call last):
      File "/usr/bin/hdp-select", line 375, in <module>
        setPackages(pkgs, args[2], options.rpm_mode)
      File "/usr/bin/hdp-select", line 268, in setPackages
        os.symlink(target + "/" + leaves[pkg], linkname)
    OSError: [Errno 17] File exists
1 ACCEPTED SOLUTION

Expert Contributor

This is resolved.

Possible Cause

The main problem was Oozie not finding "/etc/tomcat/conf/ssl/server.xml". The Oozie server ships with its own app server; it should therefore not refer to, or conflict with, the Tomcat app server we had deployed for our own purposes.

setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat

It did, however, refer to /etc/tomcat. We had configuration settings for CATALINA_BASE and CATALINA_HOME in .bashrc, /etc/profile, and /etc/init.d/tomcat.

oozie-setup.sh references CATALINA_BASE in many places, which may be why it was resolving to the wrong path.
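That would be consistent with how the ${CATALINA_BASE:-...} default shown above behaves: it only applies Oozie's path when the variable is not already set, so a system-wide export wins. A minimal sketch of the mechanism:

# ${VAR:-default} keeps an existing value and only falls back when VAR is unset,
# so a CATALINA_BASE exported in /etc/profile overrides Oozie's default.
export CATALINA_BASE=/etc/tomcat            # e.g. set system-wide for our own Tomcat
CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
echo "$CATALINA_BASE"                       # prints /etc/tomcat, not the Oozie path

unset CATALINA_BASE                         # after removing the system-wide export
CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
echo "$CATALINA_BASE"                       # now prints Oozie's own path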

Solution:

We walked through the shell scripts of Oozie and of the other services that did not start.

Commented out the references to CATALINA_HOME and CATALINA_BASE in /etc/profile and /etc/init.d/tomcat.
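For illustration only, a hedged sketch of that step; the exact variable lines differ per system, so review each match before editing:

# Comment out any global Tomcat overrides so Oozie's embedded Tomcat
# resolves its own paths (sed -i.bak keeps a backup of each file).
sudo sed -i.bak -E 's/^((export[[:space:]]+)?CATALINA_(HOME|BASE)=)/# \1/' /etc/profile
sudo sed -i.bak -E 's/^((export[[:space:]]+)?CATALINA_(HOME|BASE)=)/# \1/' /etc/init.d/tomcat
# Re-login (or `source /etc/profile`) so running shells drop the old values.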

Impact:

All Hadoop services started.

Caution

Users who want to run a Tomcat app server on the same host as Hadoop can create this conflict if the Tomcat configuration is set in /etc/profile or /etc/init.d/tomcat.

The app server may either need to run on a separate host from Oozie, or its Tomcat settings should be scoped to a specific user through that user's .bashrc; a sketch of the latter follows.
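A minimal sketch of the user-scoped approach, assuming a hypothetical /opt/tomcat install owned by a dedicated tomcat user:

# In ~tomcat/.bashrc only - never in /etc/profile or /etc/init.d/tomcat -
# so Oozie (running as the oozie user) never inherits these values.
export CATALINA_HOME=/opt/tomcat   # hypothetical install path
export CATALINA_BASE=/opt/tomcat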


13 REPLIES


On that host, try running:

hdp-select versions
hdp-select status

Make sure that the /usr/hdp/ dir contains only the version folder and current.
List all symlinks in /usr/hdp/current and make sure they point to the correct locations. Note: if some packages are not installed, it is expected to have dead symlinks.
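One way to do both checks quickly (a sketch using standard GNU tools):

ls /usr/hdp/                      # should show only e.g. 2.4.0.0-169 and current
ls -l /usr/hdp/current            # every entry should be a symlink with a sane target
find /usr/hdp/current -xtype l    # prints only dead (dangling) symlinks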

Expert Contributor

Thanks Alejandro Fernandez

namenode:

hdp-select versions: 2.4.0.0-169

hdp-select status:

accumulo-client - 2.4.0.0-169
accumulo-gc - 2.4.0.0-169
accumulo-master - 2.4.0.0-169
accumulo-monitor - 2.4.0.0-169
accumulo-tablet - 2.4.0.0-169
accumulo-tracer - 2.4.0.0-169
atlas-server - 2.4.0.0-169
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - 2.4.0.0-169
hadoop-hdfs-datanode - 2.4.0.0-169
hadoop-hdfs-journalnode - 2.4.0.0-169
hadoop-hdfs-namenode - 2.4.0.0-169
hadoop-hdfs-nfs3 - 2.4.0.0-169
hadoop-hdfs-portmap - 2.4.0.0-169
hadoop-hdfs-secondarynamenode - 2.4.0.0-169
hadoop-httpfs - None
hadoop-mapreduce-historyserver - 2.4.0.0-169
hadoop-yarn-nodemanager - 2.4.0.0-169
hadoop-yarn-resourcemanager - 2.4.0.0-169
hadoop-yarn-timelineserver - 2.4.0.0-169
hbase-client - None
hbase-master - None
hbase-regionserver - None
hive-metastore - None
hive-server2 - None
hive-webhcat - None
kafka-broker - None
knox-server - None
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-usersync - None
slider-client - None
spark-client - 2.4.0.0-169
spark-historyserver - 2.4.0.0-169
spark-thriftserver - 2.4.0.0-169
sqoop-client - None
sqoop-server - None
storm-client - None
storm-nimbus - None
storm-slider-client - None
storm-supervisor - None
zeppelin-server - None
zookeeper-client - 2.4.0.0-169
zookeeper-server - 2.4.0.0-169



/usr/hdp folder on the namenode:

2.4.0.0-169  current

/usr/hdp/current

accumulo-client -> /usr/hdp/2.4.0.0-169/accumulo
accumulo-gc -> /usr/hdp/2.4.0.0-169/accumulo
accumulo-master -> /usr/hdp/2.4.0.0-169/accumulo
accumulo-monitor -> /usr/hdp/2.4.0.0-169/accumulo
accumulo-tablet -> /usr/hdp/2.4.0.0-169/accumulo
accumulo-tracer -> /usr/hdp/2.4.0.0-169/accumulo
atlas-server -> /usr/hdp/2.4.0.0-169/atlas
falcon-client -> /usr/hdp/2.4.0.0-169/falcon
falcon-server -> /usr/hdp/2.4.0.0-169/falcon
flume-server -> /usr/hdp/2.4.0.0-169/flume
hadoop-client -> /usr/hdp/2.4.0.0-169/hadoop
hadoop-hdfs-client -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-datanode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-journalnode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-namenode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-nfs3 -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-portmap -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-secondarynamenode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-httpfs -> /usr/hdp/2.4.0.0-169/hadoop-httpfs
hadoop-mapreduce-client -> /usr/hdp/2.4.0.0-169/hadoop-mapreduce
hadoop-mapreduce-historyserver -> /usr/hdp/2.4.0.0-169/hadoop-mapreduce
hadoop-yarn-client -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hadoop-yarn-nodemanager -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hadoop-yarn-resourcemanager -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hadoop-yarn-timelineserver -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hbase-client -> /usr/hdp/2.4.0.0-169/hbase
hbase-master -> /usr/hdp/2.4.0.0-169/hbase
hbase-regionserver -> /usr/hdp/2.4.0.0-169/hbase
hive-client -> /usr/hdp/2.4.0.0-169/hive
hive-metastore -> /usr/hdp/2.4.0.0-169/hive
hive-server2 -> /usr/hdp/2.4.0.0-169/hive
hive-webhcat -> /usr/hdp/2.4.0.0-169/hive-hcatalog
kafka-broker -> /usr/hdp/2.4.0.0-169/kafka
knox-server -> /usr/hdp/2.4.0.0-169/knox
mahout-client -> /usr/hdp/2.4.0.0-169/mahout
oozie-client -> /usr/hdp/2.4.0.0-169/oozie
oozie-server -> /usr/hdp/2.4.0.0-169/oozie
phoenix-client -> /usr/hdp/2.4.0.0-169/phoenix
phoenix-server -> /usr/hdp/2.4.0.0-169/phoenix
pig-client -> /usr/hdp/2.4.0.0-169/pig
ranger-admin -> /usr/hdp/2.4.0.0-169/ranger-admin
ranger-kms -> /usr/hdp/2.4.0.0-169/ranger-kms
ranger-usersync -> /usr/hdp/2.4.0.0-169/ranger-usersync
slider-client -> /usr/hdp/2.4.0.0-169/slider
spark-client -> /usr/hdp/2.4.0.0-169/spark
spark-historyserver -> /usr/hdp/2.4.0.0-169/spark
spark-thriftserver -> /usr/hdp/2.4.0.0-169/spark
sqoop-client -> /usr/hdp/2.4.0.0-169/sqoop
sqoop-server -> /usr/hdp/2.4.0.0-169/sqoop
storm-client -> /usr/hdp/2.4.0.0-169/storm
storm-nimbus -> /usr/hdp/2.4.0.0-169/storm
storm-slider-client -> /usr/hdp/2.4.0.0-169/storm-slider-client
storm-supervisor -> /usr/hdp/2.4.0.0-169/storm
tez-client -> /usr/hdp/2.4.0.0-169/tez
zeppelin-server -> /usr/hdp/2.4.0.0-169/zeppelin
zookeeper-client -> /usr/hdp/2.4.0.0-169/zookeeper
zookeeper-server -> /usr/hdp/2.4.0.0-169/zookeeper

I guess I need to uninstall the Ambari server and agent, remove all related files/folders/users, etc., and try again.

Thanks for the help.

Expert Contributor

Sorry. The details on the data node are as follows:

/usr/hdp

2.4.0.0-169

current

/usr/hdp/current

accumulo-client
falcon-client
flume-server -> /usr/hdp/2.4.0.0-169/flume
hadoop-client -> /usr/hdp/2.4.0.0-169/hadoop
hadoop-hdfs-client -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-datanode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-journalnode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-namenode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-nfs3 -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-portmap -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-hdfs-secondarynamenode -> /usr/hdp/2.4.0.0-169/hadoop-hdfs
hadoop-httpfs -> /usr/hdp/2.4.0.0-169/hadoop-httpfs
hadoop-mapreduce-client -> /usr/hdp/2.4.0.0-169/hadoop-mapreduce
hadoop-mapreduce-historyserver -> /usr/hdp/2.4.0.0-169/hadoop-mapreduce
hadoop-yarn-client -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hadoop-yarn-nodemanager -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hadoop-yarn-resourcemanager -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hadoop-yarn-timelineserver -> /usr/hdp/2.4.0.0-169/hadoop-yarn
hbase-client -> /usr/hdp/2.4.0.0-169/hbase
hbase-master -> /usr/hdp/2.4.0.0-169/hbase
hbase-regionserver -> /usr/hdp/2.4.0.0-169/hbase
hive-client -> /usr/hdp/2.4.0.0-169/hive
phoenix-client -> /usr/hdp/2.4.0.0-169/phoenix
phoenix-server -> /usr/hdp/2.4.0.0-169/phoenix
pig-client -> /usr/hdp/2.4.0.0-169/pig
spark-client -> /usr/hdp/2.4.0.0-169/spark
tez-client -> /usr/hdp/2.4.0.0-169/tez
zookeeper-client -> /usr/hdp/2.4.0.0-169/zookeeper
zookeeper-server -> /usr/hdp/2.4.0.0-169/zookeeper

Expert Contributor

On the data node command line, sudo hdp-select status returned an error:

Traceback (most recent call last):
  File "/bin/hdp-select", line 371, in <module>
    listPackages(getPackages("all"))
  File "/bin/hdp-select", line 214, in listPackages
    os.path.basename(os.path.dirname(os.readlink(linkname))))
OSError: [Errno 22] Invalid argument: '/usr/hdp/current/accumulo-client'
[ec2-user@datanode ~]$
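For what it's worth, Errno 22 (EINVAL) from os.readlink() means the path exists but is not a symlink, e.g. a plain directory left behind by a failed install. A quick shell check (a sketch):

# readlink exits non-zero when the path is not a symlink - the same
# condition that makes hdp-select's os.readlink() raise EINVAL.
if ! readlink /usr/hdp/current/accumulo-client >/dev/null 2>&1; then
    echo "not a symlink:"
    ls -ld /usr/hdp/current/accumulo-client
fi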

Expert Contributor

I cleaned, retried, and failed again. I do not know what issues are causing this failure. I would really appreciate it if someone could help me troubleshoot the problems I have been facing for the last 3 weeks. Many thanks.

I cleaned out using the link below:

https://gist.github.com/nsabharwal/f57bb9e607114833df9b

Errors:

Master node: warnings from Accumulo through ZooKeeper:

2016-03-14 21:09:15,402 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-14 21:09:15,403 - Group['spark'] {}
2016-03-14 21:09:15,404 - Group['hadoop'] {}
2016-03-14 21:09:15,404 - Group['users'] {}
2016-03-14 21:09:15,405 - Group['knox'] {}
2016-03-14 21:09:15,405 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,405 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,406 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-14 21:09:15,407 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,407 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,408 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-14 21:09:15,409 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-14 21:09:15,409 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,410 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,410 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,411 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-14 21:09:15,411 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,412 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,412 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,413 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,414 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,414 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,415 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,415 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,416 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-14 21:09:15,416 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-03-14 21:09:15,418 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-03-14 21:09:15,422 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-03-14 21:09:15,422 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-03-14 21:09:15,423 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-03-14 21:09:15,423 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-03-14 21:09:15,427 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-03-14 21:09:15,428 - Group['hdfs'] {}
2016-03-14 21:09:15,428 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-03-14 21:09:15,429 - Directory['/etc/hadoop'] {'mode': 0755}
2016-03-14 21:09:15,429 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-03-14 21:09:15,439 - Repository['HDP-2.4'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.0.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-03-14 21:09:15,445 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.4]\nname=HDP-2.4\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.0.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-03-14 21:09:15,446 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-03-14 21:09:15,448 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-03-14 21:09:15,449 - Package['unzip'] {}
2016-03-14 21:09:15,585 - Skipping installation of existing package unzip
2016-03-14 21:09:15,585 - Package['curl'] {}
2016-03-14 21:09:15,638 - Skipping installation of existing package curl
2016-03-14 21:09:15,638 - Package['hdp-select'] {}
2016-03-14 21:09:15,690 - Skipping installation of existing package hdp-select
2016-03-14 21:09:15,828 - Package['accumulo_2_4_*'] {}
2016-03-14 21:09:15,966 - Skipping installation of existing package accumulo_2_4_*
2016-03-14 21:09:16,173 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-14 21:09:16,173 - Executing hdp-select set all on 2.4
2016-03-14 21:09:16,174 - Execute['ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.4 | tail -1`'] {'only_if': 'ls -d /usr/hdp/2.4*'}
2016-03-14 21:09:16,236 - Parameter hadoop_conf_dir is missing or directory does not exist. This is expected if this host does not have any Hadoop components.
2016-03-14 21:09:16,236 - Skipping /etc/ranger/kms/conf as it does not exist.
2016-03-14 21:09:16,236 - Skipping /etc/zookeeper/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/pig/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/tez/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/hive-webhcat/conf,/etc/hive-hcatalog/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/hbase/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/knox/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/ranger/usersync/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/hadoop/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/mahout/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/storm/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/ranger/admin/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/flume/conf as it does not exist.
2016-03-14 21:09:16,237 - Skipping /etc/sqoop/conf as it does not exist.
2016-03-14 21:09:16,237 - /etc/accumulo/conf is already link to /usr/hdp/2.4.0.0-169/accumulo/conf
2016-03-14 21:09:16,238 - Skipping /etc/phoenix/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/storm-slider-client/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/slider/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/oozie/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/falcon/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/spark/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/kafka/conf as it does not exist.
2016-03-14 21:09:16,238 - Skipping /etc/hive/conf as it does not exist.
Data node: the Accumulo Client and Accumulo TServer installs failed, with subsequent failures/warnings. It failed after 5%.

stderr:

2016-03-14 21:09:11,233 - Could not determine HDP version for component accumulo-client by calling '/usr/bin/hdp-select status accumulo-client > /tmp/tmpppbLJC'. Return Code: 1, Output: .
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py", line 37, in <module>
    AfterInstallHook().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py", line 31, in hook
    setup_hdp_symlinks()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py", line 44, in setup_hdp_symlinks
    hdp_select.select_all(version)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/hdp_select.py", line 122, in select_all
    Execute(command, only_if = only_if_command)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.4 | tail -1`' returned 1. Traceback (most recent call last):
  File "/usr/bin/hdp-select", line 375, in <module>
    setPackages(pkgs, args[2], options.rpm_mode)
  File "/usr/bin/hdp-select", line 268, in setPackages
    os.symlink(target + "/" + leaves[pkg], linkname)
OSError: [Errno 17] File exists
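For context: Errno 17 (EEXIST) means os.symlink() found something already sitting at the link name. One hedged way to spot such leftovers under /usr/hdp/current (entries that are not symlinks):

# In `ls -l` output the first character is 'l' for symlinks, so anything
# else here is a plain file/dir blocking hdp-select from creating its link.
ls -l /usr/hdp/current | grep -v '^l' | grep -v '^total'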

Further errors:

Complete!
[ec2-user@datanode ~]$ sudo hdp-select versions
2.4.0.0-169
[ec2-user@datanode ~]$ sudo hdp-select status
Traceback (most recent call last):
  File "/bin/hdp-select", line 371, in <module>
    listPackages(getPackages("all"))
  File "/bin/hdp-select", line 214, in listPackages
    os.path.basename(os.path.dirname(os.readlink(linkname))))
OSError: [Errno 22] Invalid argument: '/usr/hdp/current/accumulo-client'
[ec2-user@datanode ~]$


Many Thanks

Expert Contributor

As in the previous post, I wonder if /usr/hdp/current/accumulo-client and falcon-client not pointing to their target locations is what is causing this to break?

@S Srinivasa

Try running HostCleanup to clean up your host. It removes any old versions, etc.

https://cwiki.apache.org/confluence/display/AMBARI/Host+Cleanup+for+Ambari+and+Stack

See if this helps. This has come to my rescue in the past.
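In case it helps, the invocation from that page looks roughly like the following; the site-packages path varies with the Ambari/Python version on your hosts, so treat this as a sketch and follow the wiki for the exact steps:

# Run on each host, with the ambari-agent stopped:
sudo ambari-agent stop
sudo python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users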

Expert Contributor

@Shivaji

Thanks. I assume that needs to be done on all nodes. I saw that link initially, but it looked a little too tricky for my liking, so I went with nsabharwal's gist instead, which looks easier and more comfortable.

Expert Contributor

I am a little stuck, as I am unable to find the following files/folders. Could someone point me in the right direction, please? Thanks.

I cannot find them anywhere in the system.

1: sudo locate hdp.repo
/hdp.repo
/etc/yum.repos.d/hdp.repo
[ec2-user@namenode yum.repos.d]$

2: sudo yum list installed | grep HDP

(package name/version columns lost in the paste; besides the rows below, 13 rows show @HDP-2.3 and 6 rows show @HDP-2.4)
bigtop-tomcat.noarch       6.0.44-1.el6        @HDP-2.3
epel-release.noarch        6-8                 @HDP-UTILS-1.1.0.20
hdp-select.noarch          2.4.0.0-169.el6     @HDP-2.4

3: sudo yum list installed | grep HDP-UTILS

   epel-release.noarch        6-8                 @HDP-UTILS-1.1.0.20


4: sudo yum repolist does not return anything related to HDP, Ambari, or HDP-UTILS.
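One way to double-check, since plain yum repolist lists only enabled repos (a sketch):

sudo yum repolist all | grep -iE 'hdp|ambari'   # includes disabled repos
ls /etc/yum.repos.d/ | grep -iE 'hdp|ambari'    # repo files still on disk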