
Zeppelin install fails on Ambari 2.2 HDP 2.4.2

Explorer

We changed the ambari account to non-interactive (non-root) and implemented sudo rules per the Hortonworks documentation, but since then the Zeppelin install fails with the error below.

stderr: /var/lib/ambari-agent/data/errors-9069.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 235, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 85, in install
    user=params.zeppelin_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh /usr/hdp/current/zeppelin-server/lib plsq00041m2xxxxxx.com 9083 10001 plsq00022xxxxxxx.com 9995 True /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package /usr/lib/jvm/jre-1.7.0-openjdk.x86_64 >> /var/log/zeppelin/zeppelin-setup.log' returned 1. sudo: no tty present and no askpass program specified
stdout: /var/lib/ambari-agent/data/output-9069.txt
2016-10-14 09:28:46,110 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-10-14 09:28:46,110 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-10-14 09:28:46,110 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-14 09:28:46,148 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-10-14 09:28:46,149 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-14 09:28:46,189 - checked_call returned (0, '')
2016-10-14 09:28:46,189 - Ensuring that hadoop has the correct symlink structure
2016-10-14 09:28:46,190 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-14 09:28:46,191 - Group['hadoop'] {}
2016-10-14 09:28:46,193 - Group['users'] {}
2016-10-14 09:28:46,193 - Group['zeppelin'] {}
2016-10-14 09:28:46,193 - Group['ranger'] {}
2016-10-14 09:28:46,194 - Group['spark'] {}
2016-10-14 09:28:46,194 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-10-14 09:28:46,194 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,195 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,195 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-10-14 09:28:46,196 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,196 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-10-14 09:28:46,197 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,197 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,198 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,198 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-10-14 09:28:46,199 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,199 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,200 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,200 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-10-14 09:28:46,201 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,201 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,202 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,202 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,202 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-10-14 09:28:46,203 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-14 09:28:46,322 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-10-14 09:28:46,327 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-10-14 09:28:46,328 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-10-14 09:28:46,492 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-14 09:28:46,618 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-10-14 09:28:46,625 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-10-14 09:28:46,626 - Group['hdfs'] {}
2016-10-14 09:28:46,627 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-10-14 09:28:46,628 - FS Type: 
2016-10-14 09:28:46,628 - Directory['/etc/hadoop'] {'mode': 0755}
2016-10-14 09:28:46,707 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-14 09:28:46,803 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-10-14 09:28:46,909 - Repository['HDP-2.4'] {'base_url': 'http://plsq00029m1.corp.sprint.com/HDP/centos6/2.x/updates/2.4.2.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-10-14 09:28:46,944 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.4]\nname=HDP-2.4\nbaseurl=http://plsq00029m1.corp.sprint.com/HDP/centos6/2.x/updates/2.4.2.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-10-14 09:28:47,034 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://plsq00029m1.corp.sprint.com/HDP-UTILS-1.1.0.20/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-10-14 09:28:47,062 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://plsq00029m1.corp.sprint.com/HDP-UTILS-1.1.0.20/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-10-14 09:28:47,143 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:47,372 - Skipping installation of existing package unzip
2016-10-14 09:28:47,372 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:47,508 - Skipping installation of existing package curl
2016-10-14 09:28:47,508 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:47,552 - Skipping installation of existing package hdp-select
2016-10-14 09:28:47,772 - Execute['find /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package -iname "*.sh" | xargs chmod +x'] {}
2016-10-14 09:28:47,785 - Execute['echo platform.linux_distribution:Red Hat Enterprise Linux Server+6.7+Santiago'] {}
2016-10-14 09:28:47,791 - Package['gcc-gfortran'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:47,964 - Skipping installation of existing package gcc-gfortran
2016-10-14 09:28:47,965 - Package['blas-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:48,101 - Skipping installation of existing package blas-devel
2016-10-14 09:28:48,102 - Package['lapack-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:48,237 - Skipping installation of existing package lapack-devel
2016-10-14 09:28:48,238 - Package['python-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:48,421 - Skipping installation of existing package python-devel
2016-10-14 09:28:48,422 - Package['python-pip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:48,557 - Skipping installation of existing package python-pip
2016-10-14 09:28:48,557 - Package['zeppelin'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-10-14 09:28:48,693 - Skipping installation of existing package zeppelin
2016-10-14 09:28:48,693 - Directory['/var/run/zeppelin-notebook'] {'owner': 'zeppelin', 'group': 'zeppelin', 'recursive': True}
2016-10-14 09:28:48,744 - Directory['/var/log/zeppelin'] {'owner': 'zeppelin', 'group': 'zeppelin', 'recursive': True}
2016-10-14 09:28:48,801 - Directory['/usr/hdp/current/zeppelin-server/lib'] {'owner': 'zeppelin', 'group': 'zeppelin', 'recursive': True}
2016-10-14 09:28:48,861 - File['/var/log/zeppelin/zeppelin-setup.log'] {'content': '', 'owner': 'zeppelin', 'group': 'zeppelin', 'mode': 0777}
2016-10-14 09:28:48,940 - Writing File['/var/log/zeppelin/zeppelin-setup.log'] because contents don't match
2016-10-14 09:28:49,003 - Execute['echo spark_version:1.6 detected for spark_home: /usr/hdp/current/spark-client/ >> /var/log/zeppelin/zeppelin-setup.log'] {}
2016-10-14 09:28:49,010 - XmlConfig['zeppelin-site.xml'] {'owner': 'zeppelin', 'group': 'zeppelin', 'conf_dir': '/usr/hdp/current/zeppelin-server/lib/conf', 'configurations': ...}
2016-10-14 09:28:49,025 - Generating config: /usr/hdp/current/zeppelin-server/lib/conf/zeppelin-site.xml
2016-10-14 09:28:49,026 - File['/usr/hdp/current/zeppelin-server/lib/conf/zeppelin-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': None, 'encoding': 'UTF-8'}
2016-10-14 09:28:49,150 - File['/usr/hdp/current/zeppelin-server/lib/conf/zeppelin-env.sh'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2016-10-14 09:28:49,255 - Execute['/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh /usr/hdp/current/zeppelin-server/lib plsq00041xxxxxx.com 9083 10001 plsq00022exxxxxx.com 9995 True /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package /usr/lib/jvm/jre-1.7.0-openjdk.x86_64 >> /var/log/zeppelin/zeppelin-setup.log'] {'user': 'zeppelin'}
5 REPLIES

Re: Zeppelin install fails on Ambari 2.2 HDP 2.4.2

Explorer

You may have to add /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh to the list of sudo commands allowed for the Ambari agent.
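
If it helps, here is a minimal sketch of what that could look like, assuming the /etc/sudoers.d layout from the Hortonworks non-root agent docs; the "no tty present and no askpass program specified" message typically means requiretty is in effect, or the command is not covered by a NOPASSWD rule. File name, account name, and paths here are illustrative:

# Illustrative only: exempt the ambari account from requiretty and allow
# the Zeppelin setup script to run via sudo without a password prompt
cat <<'EOF' | sudo tee /etc/sudoers.d/ambari-zeppelin
Defaults: ambari !requiretty
ambari ALL=(ALL) NOPASSWD:SETENV: /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh
EOF
sudo chmod 0440 /etc/sudoers.d/ambari-zeppelin
sudo visudo -c   # validate sudoers syntax before relying on it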

Re: Zeppelin install fails on Ambari 2.2 HDP 2.4.2

Super Guru

sudo su to root, then run this:

/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh /usr/hdp/current/zeppelin-server/lib plsq00041.xxxxxxx.com 9083 10001 plsq00022.xxxxxxxxx.com 9995 True /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package /usr/lib/jvm/jre-1.7.0-openjdk.x86_64 >> /var/log/zeppelin/zeppelin-setup.log

And see what error you get.

Also check that log file, /var/log/zeppelin/zeppelin-setup.log.
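
If the root run works, it may also be worth reproducing the call the way Ambari actually issues it, i.e. as the zeppelin user (the Execute line in the output above shows {'user': 'zeppelin'}). The arguments below are copied from the failing command, with hostnames masked as in the log:

# Re-run the setup script as the zeppelin user, the way the agent does
sudo -u zeppelin /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh \
  /usr/hdp/current/zeppelin-server/lib plsq00041m2xxxxxx.com 9083 10001 \
  plsq00022xxxxxxx.com 9995 True \
  /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package \
  /usr/lib/jvm/jre-1.7.0-openjdk.x86_64 >> /var/log/zeppelin/zeppelin-setup.log
# Then inspect the end of the setup log for the underlying error
tail -n 50 /var/log/zeppelin/zeppelin-setup.log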

Re: Zeppelin install fails on Ambari 2.2 HDP 2.4.2

Explorer

I changed the script's ownership to ambari and its permissions to 777, but then ran into another error:
$ ll /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh
-rw-r--r-- 1 ambari hdpadmin 3765 Oct 17 22:15 /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh
$ sudo chmod 777 /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh
$ /var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh /usr/hdp/current/zeppelin-server/lib plsq00041.xxxxxxxxx.com 908>
/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/setup_snapshot.sh: line 71: conf/hive-site.xml: Permission denied
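
The relative path conf/hive-site.xml suggests that line 71 of the script writes into a conf/ directory under its current working directory, so both the working directory and the invoking user matter. A quick check, assuming the Zeppelin lib directory passed as the first argument is the intended working directory:

# Hypothetical write test: can the zeppelin user (which Ambari uses to run
# the script) create conf/hive-site.xml under the Zeppelin lib directory?
cd /usr/hdp/current/zeppelin-server/lib
ls -ld conf
sudo -u zeppelin touch conf/hive-site.xml && echo "conf/ is writable by zeppelin"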

Re: Zeppelin install fails on Ambari 2.2 HDP 2.4.2

@Deepak Vivaramneni You did not run the full command; the invocation you pasted is cut off at the third argument. Rerun it with all of the arguments from the original Execute line.

Re: Zeppelin install fails on Ambari 2.2 HDP 2.4.2

To get Zeppelin installed properly:

  1. Create the Livy AD user (if the cluster is Kerberized)
  2. Install the Spark client on the Zeppelin node first
  3. Copy hive-site.xml to /etc/spark/conf (a sketch follows below)
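
For step 3, a minimal sketch assuming stock HDP config locations; the spark:hadoop ownership matches the user/group setup visible in the install output above, but adjust for your cluster:

# Copy the Hive client config so the Spark interpreter can reach the metastore
cp /etc/hive/conf/hive-site.xml /etc/spark/conf/hive-site.xml
chown spark:hadoop /etc/spark/conf/hive-site.xml
chmod 644 /etc/spark/conf/hive-site.xml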