zeppelin service not starting via HDP 2.4 Ambari

Explorer

I have successfully installed Zeppelin as per the article at https://community.hortonworks.com/articles/34424/apache-zeppelin-on-hdp-242.html. I didn't implement the optional security functionality. After rebooting my single-node cluster I am not able to start the Zeppelin service via Ambari. Below is the error dump.

Partial dump:

resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/zeppelin-server/lib/bin/zeppelin-daemon.sh start >> /var/log/zeppelin/zeppelin-setup.log' returned 1. mkdir: cannot create directory ‘/var/run/zeppelin-notebook’: Permission denied

Full dump:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 235, in <module>
  Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
  method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 169, in start
  + params.zeppelin_log_file, user=params.zeppelin_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
  self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
  self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
  provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
  tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
  result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
  tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
  result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
  raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/zeppelin-server/lib/bin/zeppelin-daemon.sh start >> /var/log/zeppelin/zeppelin-setup.log' returned 1. mkdir: cannot create directory ‘/var/run/zeppelin-notebook’: Permission denied
/usr/hdp/current/zeppelin-server/lib/bin/zeppelin-daemon.sh: line 182: /var/run/zeppelin-notebook/zeppelin-zeppelin-gaja.hdp.com.pid: No such file or directory
cat: /var/run/zeppelin-notebook/zeppelin-zeppelin-gaja.hdp.com.pid: No such file or directory
 stdout:
2016-07-11 13:25:51,903 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-11 13:25:51,903 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-11 13:25:51,903 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-11 13:25:55,306 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-11 13:25:55,307 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-11 13:25:55,551 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-11 13:25:55,551 - Ensuring that hadoop has the correct symlink structure
2016-07-11 13:25:55,570 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-11 13:26:04,756 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-07-11 13:26:04,756 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-07-11 13:26:04,757 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-11 13:26:06,507 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-07-11 13:26:06,507 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-11 13:26:06,533 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-07-11 13:26:06,533 - Ensuring that hadoop has the correct symlink structure
2016-07-11 13:26:06,533 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-11 13:26:06,535 - Group['spark'] {}
2016-07-11 13:26:06,652 - Group['zeppelin'] {}
2016-07-11 13:26:06,653 - Group['hadoop'] {}
2016-07-11 13:26:06,653 - Group['users'] {}
2016-07-11 13:26:06,653 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,719 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,720 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-07-11 13:26:06,720 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,721 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-07-11 13:26:06,722 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,722 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,723 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,724 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-07-11 13:26:06,724 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,725 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,726 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,727 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,727 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,728 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-07-11 13:26:06,729 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-11 13:26:07,067 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-07-11 13:26:07,137 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-07-11 13:26:07,138 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-07-11 13:26:07,139 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-11 13:26:07,140 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-07-11 13:26:07,145 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-07-11 13:26:07,146 - Group['hdfs'] {}
2016-07-11 13:26:07,146 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-07-11 13:26:07,147 - FS Type: 
2016-07-11 13:26:07,147 - Directory['/etc/hadoop'] {'mode': 0755}
2016-07-11 13:26:07,300 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-07-11 13:26:07,301 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-07-11 13:26:07,535 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-07-11 13:26:07,596 - Skipping Execute[('setenforce', '0')] due to not_if
2016-07-11 13:26:07,597 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-07-11 13:26:07,599 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-07-11 13:26:07,599 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-07-11 13:26:07,687 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-07-11 13:26:07,718 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-07-11 13:26:07,727 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-07-11 13:26:07,810 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-07-11 13:26:07,855 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-07-11 13:26:07,926 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}

Please advise.

Thanks,

Pal

9 REPLIES

Re: zeppelin service not starting via HDP 2.4 Ambari

For some reason your Zeppelin pid dir didn't get created.

 mkdir: cannot create directory ‘/var/run/zeppelin-notebook’: Permission denied

Can you create it manually and retry starting Zeppelin? Run the below commands (as root):

mkdir -p /var/run/zeppelin-notebook
chown -R zeppelin:zeppelin /var/run/zeppelin-notebook
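
After that, it may be worth verifying the ownership and retrying the start the way Ambari does, using the daemon script path from the error output above (a sketch, not the exact Ambari invocation):

# Check that the pid directory now exists and is owned by the zeppelin user
ls -ld /var/run/zeppelin-notebook
# Retry the start as the zeppelin user, with the script path taken from the error dump
su -l zeppelin -c "/usr/hdp/current/zeppelin-server/lib/bin/zeppelin-daemon.sh start"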

Re: zeppelin service not starting via HDP 2.4 Ambari

Explorer

Hi Ali,

Thanks for the update. After posting the error here, I changed the permissions on /var/run (sudo chmod 777) and started the service again. This time the service started and I was able to access Zeppelin (http://hostname:9995), but the Ambari dashboard was still showing "Red" instead of green. I have attached the log:

no-error-zeepelin-notebook.txt

thanks

Pal


Re: zeppelin service not starting via HDP 2.4 Ambari

New Contributor

Re: zeppelin service not starting via HDP 2.4 Ambari

New Contributor

Check the pid value in /var/run/zeppelin/zeppelin.pid against the Java process that actually runs Zeppelin.
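
For example (using the pid file path from the error output above; adjust it to wherever your ZEPPELIN_PID_DIR points):

# Pid recorded by zeppelin-daemon.sh, path taken from the error dump above
cat /var/run/zeppelin-notebook/zeppelin-zeppelin-gaja.hdp.com.pid
# Zeppelin JVM that is actually running, for comparison ([z] keeps grep from matching itself)
ps -ef | grep [z]eppelin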

Re: zeppelin service not starting via HDP 2.4 Ambari

Expert Contributor

@just np Having /var/run/zeppelin can be risky in my experience. I have defined $ZEPPELIN_HOME/run as the pid directory, and in $ZEPPELIN_HOME/conf/zeppelin-env.sh I've added the following: export ZEPPELIN_PID_DIR=${ZEPPELIN_HOME}/run
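
Spelled out, the setup looks roughly like this (a sketch; the chown reflects my assumption that the zeppelin user should own the directory):

# With ZEPPELIN_HOME pointing at your Zeppelin install directory
mkdir -p ${ZEPPELIN_HOME}/run
chown zeppelin:zeppelin ${ZEPPELIN_HOME}/run
# Then in $ZEPPELIN_HOME/conf/zeppelin-env.sh:
export ZEPPELIN_PID_DIR=${ZEPPELIN_HOME}/run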

Hope this helps.

Re: zeppelin service not starting via HDP 2.4 Ambari

Rising Star

I have faced a similar issue, but mine had to do with insufficient resources for too many running services. I had to increase the number of data nodes and make changes to YARN, and that resolved it.

Re: zeppelin service not starting via HDP 2.4 Ambari

Expert Contributor

@Geetha Anne

Pure curiosity: how did adding a datanode help solve the Zeppelin startup issue? Can you please write more about this - the error message, the changes in YARN, etc.?

Re: zeppelin service not starting via HDP 2.4 Ambari

New Contributor

@marko, I was unable to define the pid directory in the configuration file, as Ambari rewrites that file. Additionally, the pid directory setting is not editable within the Ambari interface.

I hacked my /sbin/ambari-agent to include these lines near the top:

mkdir -p /var/run/zeppelin 
chown zeppelin:zeppelin /var/run/zeppelin

/var/run on many Linux systems is a temporary file system, so solutions that manually create /var/run/zeppelin will fail on the next reboot. The Zeppelin startup process needs the root account to create the pid directory. (I certainly do not advocate my change above as a solution.)
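
If the host uses systemd, one way to have the directory recreated at every boot is a tmpfiles.d entry; this is only a sketch of that idea, not something the HDP stack ships:

# /etc/tmpfiles.d/zeppelin.conf (hypothetical file, processed by systemd-tmpfiles at boot)
# /var/run is typically a symlink to /run on systemd hosts
d /run/zeppelin 0755 zeppelin zeppelin -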

Re: zeppelin service not starting via HDP 2.4 Ambari

Expert Contributor

@John Slankas Very useful tip.

I am not using Spark or Zeppelin from Hortonworks, but rather install them manually. I feel I do not get the control I want from Ambari. Plus there is the mess with Spark 1.6 and 2.0, and Zeppelin can only use 1.6... Too early to install them from Ambari.

I hope this changes with new versions.

I do, however, wonder why the property for the Zeppelin PID directory is protected.
