Support Questions
Find answers, ask questions, and share your expertise

Error while adding service

Solved

Contributor

Hi all! :) I had HDP version 2.4 and upgraded it to 2.5. On HDP 2.4 I had installed most of the services HDP provides, but I removed them and then performed the upgrade. After that, when I try to install any service on the node where HDP is installed, I get this kind of error: stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py", line 37, in <module>
    AfterInstallHook().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py", line 31, in hook
    setup_stack_symlinks()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py", line 49, in setup_stack_symlinks
    stack_select.select_all(version)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 132, in select_all
    Execute(command, only_if = only_if_command)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.5.3.0-37 | tail -1`' returned 1. Traceback (most recent call last):
  File "/usr/bin/hdp-select", line 391, in <module>
    setPackages(pkgs, args[2], options.rpm_mode)
  File "/usr/bin/hdp-select", line 290, in setPackages
    os.symlink(target + "/" + dir, linkname)
OSError: [Errno 17] File exists

Does someone have an idea what is going wrong?

I also deleted the HDP 2.4 version folder from /usr/hdp.

When I try to run hdp-select set all 2.5.3.0-37 manually, I get this kind of error:

Traceback (most recent call last):
  File "/usr/bin/hdp-select", line 391, in <module>
    setPackages(pkgs, args[2], options.rpm_mode)
  File "/usr/bin/hdp-select", line 290, in setPackages
    os.symlink(target + "/" + dir, linkname)
OSError: [Errno 17] File exists
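For context, the failure inside setPackages() can be reproduced in isolation: os.symlink() refuses to overwrite an existing path. This is an illustrative sketch (not hdp-select's actual code), showing that any leftover entry at the link name makes the call fail with EEXIST:

```python
import errno
import os
import tempfile

# setPackages() calls os.symlink(target, linkname) without removing an
# existing linkname first, so any leftover entry in /usr/hdp/current
# makes it fail with EEXIST -- a minimal reproduction:
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "2.5.3.0-37")
linkname = os.path.join(workdir, "spark2-client")
os.mkdir(target)
os.mkdir(linkname)  # a real directory already occupies the link name

try:
    os.symlink(target, linkname)
except OSError as e:
    print(e.errno == errno.EEXIST)  # prints: True ([Errno 17] File exists)
```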

Also, this is the output of hdp-select:

accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-client - None
atlas-server - None
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - None
hadoop-hdfs-datanode - None
hadoop-hdfs-journalnode - None
hadoop-hdfs-namenode - None
hadoop-hdfs-nfs3 - None
hadoop-hdfs-portmap - None
hadoop-hdfs-secondarynamenode - None
hadoop-hdfs-zkfc - None
hadoop-httpfs - None
hadoop-mapreduce-historyserver - None
hadoop-yarn-nodemanager - None
hadoop-yarn-resourcemanager - None
hadoop-yarn-timelineserver - None
hbase-client - None
hbase-master - None
hbase-regionserver - None
hive-metastore - None
hive-server2 - None
hive-server2-hive2 - None
hive-webhcat - None
kafka-broker - 2.5.3.0-37
knox-server - None
livy-server - None
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-tagsync - None
ranger-usersync - None
slider-client - None
spark-client - None
spark-historyserver - None
spark-thriftserver - None
Traceback (most recent call last):
  File "/usr/bin/hdp-select", line 387, in <module>
    listPackages(getPackages("all"))
  File "/usr/bin/hdp-select", line 220, in listPackages
    os.path.basename(os.path.dirname(os.readlink(linkname))))
OSError: [Errno 22] Invalid argument: '/usr/hdp/current/spark2-client'
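The Errno 22 at the end also hints at the cause: os.readlink() raises EINVAL when the entry is a plain directory rather than a symlink, and listPackages() assumes every entry under /usr/hdp/current is a symlink. A minimal sketch (illustrative, not hdp-select's code):

```python
import errno
import os
import tempfile

# listPackages() calls os.readlink() on every entry under /usr/hdp/current,
# but os.readlink() fails with EINVAL on a plain directory:
workdir = tempfile.mkdtemp()
realdir = os.path.join(workdir, "spark2-client")
os.mkdir(realdir)  # a real directory, not a symlink

try:
    os.readlink(realdir)
except OSError as e:
    print(e.errno == errno.EINVAL)  # prints: True ([Errno 22] Invalid argument)
```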

and stdout:

2016-12-15 17:02:16,814 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-15 17:02:16,816 - Group['hadoop'] {}
2016-12-15 17:02:16,818 - Group['users'] {}
2016-12-15 17:02:16,819 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-12-15 17:02:16,821 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-12-15 17:02:16,824 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-12-15 17:02:16,825 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-12-15 17:02:16,827 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-12-15 17:02:16,830 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-12-15 17:02:16,831 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-15 17:02:16,835 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-12-15 17:02:16,842 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-12-15 17:02:16,843 - Group['hdfs'] {}
2016-12-15 17:02:16,843 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-12-15 17:02:16,844 - FS Type: 
2016-12-15 17:02:16,845 - Directory['/etc/hadoop'] {'mode': 0755}
2016-12-15 17:02:16,845 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-12-15 17:02:16,997 - Initializing 2 repositories
2016-12-15 17:02:16,997 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-12-15 17:02:17,029 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-12-15 17:02:17,030 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-12-15 17:02:17,056 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-12-15 17:02:17,057 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 17:02:17,296 - Skipping installation of existing package unzip
2016-12-15 17:02:17,296 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 17:02:17,311 - Skipping installation of existing package curl
2016-12-15 17:02:17,312 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 17:02:17,333 - Skipping installation of existing package hdp-select
2016-12-15 17:02:17,850 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2016-12-15 17:02:17,851 - Package['hadoop_2_5_3_0_37-yarn'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 17:02:17,970 - Skipping installation of existing package hadoop_2_5_3_0_37-yarn
2016-12-15 17:02:17,971 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2016-12-15 17:02:17,972 - Package['hadoop_2_5_3_0_37-mapreduce'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 17:02:17,985 - Skipping installation of existing package hadoop_2_5_3_0_37-mapreduce
2016-12-15 17:02:17,985 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2016-12-15 17:02:17,987 - Package['hadoop_2_5_3_0_37-hdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-15 17:02:17,999 - Skipping installation of existing package hadoop_2_5_3_0_37-hdfs
2016-12-15 17:02:18,308 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-15 17:02:18,309 - Executing hdp-select set all on 2.5.3.0-37
2016-12-15 17:02:18,310 - Execute['ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.5.3.0-37 | tail -1`'] {'only_if': 'ls -d /usr/hdp/2.5.3.0-37*'}

Command failed after 1 tries


6 REPLIES

Re: Error while adding service

Contributor
StackLink

Also, here is my topic on Stack Overflow with more info, if you need it.


Re: Error while adding service (Accepted Solution)

@Ivan Majnaric

Sometimes this happens when there are unwanted (extra) directories present inside "/usr/hdp/current". Can you try removing any extra directories from "/usr/hdp/current"?

If you notice the failure happening for a particular service, you should delete the symlink for that service/component from the "/usr/hdp/current" directory and then try reinstalling the service.
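To spot the extra entries, a quick sketch that lists everything under /usr/hdp/current that is NOT a symlink (HDP_CURRENT and extra_entries are illustrative names, not part of hdp-select):

```python
import os

HDP_CURRENT = "/usr/hdp/current"  # adjust if your stack root differs

def extra_entries(root):
    """Return entries in root that are real files/directories, not symlinks."""
    return [name for name in sorted(os.listdir(root))
            if not os.path.islink(os.path.join(root, name))]

if __name__ == "__main__":
    # Anything printed here is a candidate for removal.
    for name in extra_entries(HDP_CURRENT):
        print(name)
```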



Re: Error while adding service

Contributor

Hmm, what do you mean exactly? Which of these is an extra directory?

Re: Error while adding service

Super Collaborator

Do an ls -l /usr/hdp/current and tell us what the output is.

Is this a duplicate post of https://community.hortonworks.com/questions/71823/error-with-installing-any-service-oserror-errrno-1...?


Re: Error while adding service

Contributor

This is the output of ls -l:

lrwxrwxrwx.  1 root root   28 Dec 15 18:00 accumulo-client -> /usr/hdp/2.5.3.0-37/accumulo
lrwxrwxrwx.  1 root root   28 Dec 15 18:00 accumulo-gc -> /usr/hdp/2.5.3.0-37/accumulo
lrwxrwxrwx.  1 root root   28 Dec 15 18:00 accumulo-master -> /usr/hdp/2.5.3.0-37/accumulo
lrwxrwxrwx.  1 root root   28 Dec 15 18:00 accumulo-monitor -> /usr/hdp/2.5.3.0-37/accumulo
lrwxrwxrwx.  1 root root   28 Dec 15 18:00 accumulo-tablet -> /usr/hdp/2.5.3.0-37/accumulo
lrwxrwxrwx.  1 root root   28 Dec 15 18:00 accumulo-tracer -> /usr/hdp/2.5.3.0-37/accumulo
lrwxrwxrwx.  1 root root   25 Dec 15 18:00 atlas-client -> /usr/hdp/2.5.3.0-37/atlas
lrwxrwxrwx.  1 root root   25 Dec 15 18:00 atlas-server -> /usr/hdp/2.5.3.0-37/atlas
lrwxrwxrwx.  1 root root   26 Dec 15 18:00 falcon-client -> /usr/hdp/2.5.3.0-37/falcon
lrwxrwxrwx.  1 root root   26 Dec 15 18:00 falcon-server -> /usr/hdp/2.5.3.0-37/falcon
lrwxrwxrwx.  1 root root   25 Dec 15 18:00 flume-server -> /usr/hdp/2.5.3.0-37/flume
lrwxrwxrwx.  1 root root   26 Dec 15 18:00 hadoop-client -> /usr/hdp/2.5.3.0-37/hadoop
lrwxrwxrwx.  1 root root   31 Dec 15 18:00 hadoop-hdfs-client -> /usr/hdp/2.5.3.0-37/hadoop-hdfs
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-hdfs-datanode -> /usr/hdp/2.4.2.0-258/hadoop-hdfs
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-hdfs-journalnode -> /usr/hdp/2.4.2.0-258/hadoop-hdfs
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-hdfs-namenode -> /usr/hdp/2.4.2.0-258/hadoop-hdfs
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-hdfs-nfs3 -> /usr/hdp/2.4.2.0-258/hadoop-hdfs
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-hdfs-portmap -> /usr/hdp/2.4.2.0-258/hadoop-hdfs
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-hdfs-secondarynamenode -> /usr/hdp/2.4.2.0-258/hadoop-hdfs
lrwxrwxrwx.  1 root root   33 Dec 15 18:00 hadoop-httpfs -> /usr/hdp/2.5.3.0-37/hadoop-httpfs
lrwxrwxrwx.  1 root root   36 Dec 15 18:00 hadoop-mapreduce-client -> /usr/hdp/2.5.3.0-37/hadoop-mapreduce
lrwxrwxrwx.  1 root root   37 Dec 14 10:50 hadoop-mapreduce-historyserver -> /usr/hdp/2.4.2.0-258/hadoop-mapreduce
lrwxrwxrwx.  1 root root   31 Dec 15 18:00 hadoop-yarn-client -> /usr/hdp/2.5.3.0-37/hadoop-yarn
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-yarn-nodemanager -> /usr/hdp/2.4.2.0-258/hadoop-yarn
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-yarn-resourcemanager -> /usr/hdp/2.4.2.0-258/hadoop-yarn
lrwxrwxrwx.  1 root root   32 Dec 14 10:50 hadoop-yarn-timelineserver -> /usr/hdp/2.4.2.0-258/hadoop-yarn
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 hbase-client -> /usr/hdp/2.4.2.0-258/hbase
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 hbase-master -> /usr/hdp/2.4.2.0-258/hbase
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 hbase-regionserver -> /usr/hdp/2.4.2.0-258/hbase
lrwxrwxrwx.  1 root root   24 Dec 15 18:00 hive-client -> /usr/hdp/2.5.3.0-37/hive
lrwxrwxrwx.  1 root root   25 Dec 14 10:50 hive-metastore -> /usr/hdp/2.4.2.0-258/hive
lrwxrwxrwx.  1 root root   25 Dec 14 10:50 hive-server2 -> /usr/hdp/2.4.2.0-258/hive
lrwxrwxrwx.  1 root root   25 Dec 14 12:12 hive-server2-hive2 -> /usr/hdp/2.5.3.0-37/hive2
lrwxrwxrwx.  1 root root   34 Dec 14 10:50 hive-webhcat -> /usr/hdp/2.4.2.0-258/hive-hcatalog
lrwxrwxrwx.  1 root root   25 Dec 15 14:06 kafka-broker -> /usr/hdp/2.5.3.0-37/kafka
lrwxrwxrwx.  1 root root   25 Dec 14 10:50 knox-server -> /usr/hdp/2.4.2.0-258/knox
lrwxrwxrwx.  1 root root   24 Dec 15 18:00 livy-client -> /usr/hdp/2.5.3.0-37/livy
lrwxrwxrwx.  1 root root   25 Dec 14 10:50 livy-server -> /usr/hdp/2.4.2.0-258/livy
lrwxrwxrwx.  1 root root   27 Dec 14 10:50 mahout-client -> /usr/hdp/2.4.2.0-258/mahout
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 oozie-client -> /usr/hdp/2.4.2.0-258/oozie
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 oozie-server -> /usr/hdp/2.4.2.0-258/oozie
lrwxrwxrwx.  1 root root   28 Dec 14 10:50 phoenix-client -> /usr/hdp/2.4.2.0-258/phoenix
lrwxrwxrwx.  1 root root   28 Dec 14 10:50 phoenix-server -> /usr/hdp/2.4.2.0-258/phoenix
lrwxrwxrwx.  1 root root   23 Dec 15 18:00 pig-client -> /usr/hdp/2.5.3.0-37/pig
lrwxrwxrwx.  1 root root   33 Dec 14 10:50 ranger-admin -> /usr/hdp/2.4.2.0-258/ranger-admin
lrwxrwxrwx.  1 root root   31 Dec 14 10:50 ranger-kms -> /usr/hdp/2.4.2.0-258/ranger-kms
lrwxrwxrwx.  1 root root   36 Dec 14 10:50 ranger-usersync -> /usr/hdp/2.4.2.0-258/ranger-usersync
lrwxrwxrwx.  1 root root   27 Dec 14 10:50 slider-client -> /usr/hdp/2.4.2.0-258/slider
drwxr-xr-x. 10  500  500 4096 Dec 14 11:28 spark2-client
lrwxrwxrwx.  1 root root   26 Dec 15 15:37 spark2-historyserver -> /usr/hdp/2.5.3.0-37/spark2
lrwxrwxrwx.  1 root root   26 Dec 15 15:37 spark2-thriftserver -> /usr/hdp/2.5.3.0-37/spark2
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 spark-historyserver -> /usr/hdp/2.4.2.0-258/spark
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 spark-thriftserver -> /usr/hdp/2.4.2.0-258/spark
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 sqoop-client -> /usr/hdp/2.4.2.0-258/sqoop
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 sqoop-server -> /usr/hdp/2.4.2.0-258/sqoop
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 storm-client -> /usr/hdp/2.4.2.0-258/storm
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 storm-nimbus -> /usr/hdp/2.4.2.0-258/storm
lrwxrwxrwx.  1 root root   40 Dec 14 10:50 storm-slider-client -> /usr/hdp/2.4.2.0-258/storm-slider-client
lrwxrwxrwx.  1 root root   26 Dec 14 10:50 storm-supervisor -> /usr/hdp/2.4.2.0-258/storm
lrwxrwxrwx.  1 root root   23 Dec 15 18:00 tez-client -> /usr/hdp/2.5.3.0-37/tez
lrwxrwxrwx.  1 root root   29 Dec 14 10:50 zeppelin-server -> /usr/hdp/2.4.2.0-258/zeppelin
lrwxrwxrwx.  1 root root   29 Dec 15 14:06 zookeeper-client -> /usr/hdp/2.5.3.0-37/zookeeper
lrwxrwxrwx.  1 root root   30 Dec 14 10:50 zookeeper-server -> /usr/hdp/2.4.2.0-258/zookeeper

Re: Error while adding service

Super Collaborator

There's your problem:

drwxr-xr-x. 10  500  500 4096 Dec 14 11:28 spark2-client

Directories are not allowed in there. If you remove it, the problem should be fixed.
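A cautious way to remove it is to delete the path only if it is a real directory and never a symlink, so the valid links in /usr/hdp/current are left alone. This is an illustrative helper (remove_if_stray is a hypothetical name, not an Ambari or hdp-select utility), and rmtree permanently deletes the directory, so double-check the path first:

```python
import os
import shutil

def remove_if_stray(path):
    """Remove path only if it is a plain directory rather than a symlink."""
    if os.path.isdir(path) and not os.path.islink(path):
        shutil.rmtree(path)
        return True
    return False

# Example: remove_if_stray("/usr/hdp/current/spark2-client")
# then re-run: hdp-select set all 2.5.3.0-37
```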
