Ambari-installed Zeppelin fails to deploy

New Contributor

I followed the steps in

Installing Zeppelin on an Ambari-Managed Cluster

At the deployment step, I got the following error:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 235, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 36, in install
    import params
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/params.py", line 65, in <module>
    fline = open(spark_home + "/RELEASE").readline().rstrip()
IOError: [Errno 2] No such file or directory: u'/usr/hdp/current/spark-client//RELEASE'
stdout:
2016-06-08 13:38:24,720 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-06-08 13:38:24,720 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-06-08 13:38:24,720 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-06-08 13:38:24,740 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-06-08 13:38:24,741 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-06-08 13:38:24,760 - checked_call returned (0, '')
2016-06-08 13:38:24,760 - Ensuring that hadoop has the correct symlink structure
2016-06-08 13:38:24,761 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-06-08 13:38:24,762 - Group['spark'] {}
2016-06-08 13:38:24,763 - Group['ranger'] {}
2016-06-08 13:38:24,763 - Group['zeppelin'] {}
2016-06-08 13:38:24,764 - Group['hadoop'] {}
2016-06-08 13:38:24,764 - Group['users'] {}
2016-06-08 13:38:24,764 - Group['knox'] {}
2016-06-08 13:38:24,764 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,765 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,765 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-06-08 13:38:24,766 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,766 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,767 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-06-08 13:38:24,768 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2016-06-08 13:38:24,768 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-06-08 13:38:24,769 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,769 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,770 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-06-08 13:38:24,770 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,771 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,771 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,772 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,772 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,773 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,773 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-06-08 13:38:24,774 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-06-08 13:38:24,775 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-06-08 13:38:24,779 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-06-08 13:38:24,779 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-06-08 13:38:24,780 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-06-08 13:38:24,781 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-06-08 13:38:24,785 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-06-08 13:38:24,785 - Group['hdfs'] {}
2016-06-08 13:38:24,785 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-06-08 13:38:24,786 - FS Type: 
2016-06-08 13:38:24,786 - Directory['/etc/hadoop'] {'mode': 0755}
2016-06-08 13:38:24,797 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-06-08 13:38:24,797 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-06-08 13:38:24,808 - Repository['HDP-2.4'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-06-08 13:38:24,814 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.4]\nname=HDP-2.4\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-06-08 13:38:24,815 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-06-08 13:38:24,818 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-06-08 13:38:24,818 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-06-08 13:38:24,934 - Skipping installation of existing package unzip
2016-06-08 13:38:24,934 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-06-08 13:38:24,947 - Skipping installation of existing package curl
2016-06-08 13:38:24,947 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-06-08 13:38:24,960 - Skipping installation of existing package hdp-select
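
The IOError at the top of stderr is the actual failure: params.py tries to read spark_home + "/RELEASE" and that file is not on the node. A minimal shell sketch of the same check (the check_release helper name is mine, not part of Ambari):

```shell
# check_release: succeed only if the RELEASE marker exists under the given
# Spark home -- the same file that params.py line 65 tries to open.
check_release() {
  spark_home="$1"
  [ -f "${spark_home}/RELEASE" ]
}

# On the node where the Zeppelin deploy failed, this should print the warning,
# confirming the Spark client is missing or incomplete there:
check_release /usr/hdp/current/spark-client \
  || echo "RELEASE marker missing - install the Spark client on this node first"
```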
1 ACCEPTED SOLUTION

Master Guru

@Wayne Vovil Are you running this on CentOS? If so, try this (found in an HCC post here):

  • To use Ambari to manage (start/stop) the Zeppelin service, run the following commands on the node running Ambari server. For example, on CentOS 6.*:
  1. yum install -y git
  2. VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
  3. sudo git clone https://github.com/hortonworks-gallery/ambari-zeppelin-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/ZEPPELIN
  4. sudo service ambari-server restart
  • On a node (call it 'Node A') that is not running Ambari server, install the nss package:
  1. yum install -y nss
  • Once Ambari is back up and you've installed the nss package on "Node A", go in Ambari to Actions -> Add Service -> check the Zeppelin service -> place the Zeppelin service on Node A in the Assign Masters step, then click Next -> Next -> Next -> Deploy.
  • The installation will start once you click Deploy.
  • Once complete, the Zeppelin Notebook service will be running. You can navigate to http://<FQDN of Node A>:9995 or follow the steps here to create the Ambari view.
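
For reference, the VERSION line in step 2 only keeps the leading major.minor digits of the hdp-select output; with the 2.4.2.0-258 stack shown in the log above, it should resolve like this (the sample line is assumed from that log, not re-run):

```shell
# Step 2 isolates the stack's major.minor version from hdp-select output.
# Sample line matching the 2.4.2.0-258 stack in the log above:
sample="hadoop-client - 2.4.2.0-258"
VERSION=$(echo "$sample" | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/')
echo "$VERSION"   # -> 2.4, so the clone lands in .../stacks/HDP/2.4/services/ZEPPELIN
```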


3 REPLIES



Can you make sure you have the Spark client installed on the node you are trying to install Zeppelin on? If so, check that this file exists on that node: /usr/hdp/current/spark-client/RELEASE

This is the error causing the failure on your setup:

IOError: [Errno 2] No such file or directory: u'/usr/hdp/current/spark-client//RELEASE'

On my 2.4.2.0-195 setup, this is the content of that file:

# cat /usr/hdp/current/spark-client/RELEASE
Spark 1.6.1.2.4.2.0-195 built for Hadoop 2.7.1.2.4.2.0-195
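
Incidentally, the doubled slash in the error path (spark-client//RELEASE) is harmless: POSIX pathname resolution collapses repeated separators, so the failure really is a missing file, not a malformed path. A small sketch mimicking the failing params.py line in shell:

```shell
# Mimic params.py line 65: read the first line of the RELEASE marker.
# The trailing slash on spark_home is what produces the "//" in the error;
# it does not affect path resolution.
spark_home="/usr/hdp/current/spark-client/"
release="${spark_home}/RELEASE"          # -> .../spark-client//RELEASE
if [ -f "$release" ]; then
  head -n 1 "$release"
else
  echo "missing: $release (Spark client not installed on this node?)" >&2
fi
```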


@Wayne Vovil, did you try this?