Created 10-10-2016 10:51 PM
Hi,
I installed the Zeppelin notebook 0.6.0 service through Ambari 2.4.0.1 on a small two-VM cluster without any trouble, and everything was running fine (each tutorial notebook runs correctly). Several other services are installed as well: HDFS 2.7.3, YARN 2.7.3, MapReduce2 2.7.3, ZooKeeper 3.4.6, Ambari Metrics 0.1.0, Tez 0.7.0, Hive 1.2.1000, Pig 0.16.0, Spark 1.6.2, Slider 0.91.0.
After a restart of the VMs everything is still running, and although Zeppelin itself is fine and I can still run every tutorial through the http://host:9995 web UI, the Ambari server tells me Zeppelin couldn't start and is not running.
How can I handle this problem?
Note: I have already granted the missing write permission to the zeppelin user in /var/run:
drwxr-xrwx 2 root root 60 Oct 10 15:02 zeppelin
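(For reference, since the listing shows the directory is still owned by root:root, an alternative to widening the mode bits would be to hand the directory over to the service user; a minimal sketch, assuming the daemon runs as the zeppelin user:)

# let the service user own its pid directory instead of making it world-writable
sudo chown zeppelin:zeppelin /var/run/zeppelin
sudo chmod 0755 /var/run/zeppelin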
Logs below:
/var/lib/ambari-agent/data/errors-230.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 312, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 185, in start
    + params.zeppelin_log_file, user=params.zeppelin_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart >> /var/log/zeppelin/zeppelin-setup.log' returned 1.
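To see the underlying failure outside of Ambari, the same command the agent runs can be executed by hand (a sketch; paths taken from the traceback above):

# reproduce the failing Ambari command, as the zeppelin user
sudo -u zeppelin /usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart >> /var/log/zeppelin/zeppelin-setup.log
echo $?                                          # 1 reproduces the failure Ambari reports
tail -n 20 /var/log/zeppelin/zeppelin-setup.log  # the daemon script logs its start/stop attempts here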
/var/lib/ambari-agent/data/output-230.txt
2016-10-10 15:00:24,984 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-10-10 15:00:24,988 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-10-10 15:00:24,991 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-10 15:00:25,026 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-10-10 15:00:25,027 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-10 15:00:25,058 - checked_call returned (0, '')
2016-10-10 15:00:25,058 - Ensuring that hadoop has the correct symlink structure
2016-10-10 15:00:25,059 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-10 15:00:25,238 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-10-10 15:00:25,240 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-10-10 15:00:25,242 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-10 15:00:25,280 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-10-10 15:00:25,281 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-10 15:00:25,307 - checked_call returned (0, '')
2016-10-10 15:00:25,308 - Ensuring that hadoop has the correct symlink structure
2016-10-10 15:00:25,308 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-10 15:00:25,310 - Group['livy'] {}
2016-10-10 15:00:25,312 - Group['spark'] {}
2016-10-10 15:00:25,312 - Group['zeppelin'] {}
2016-10-10 15:00:25,313 - Group['hadoop'] {}
2016-10-10 15:00:25,313 - Group['users'] {}
2016-10-10 15:00:25,313 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,315 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,316 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,317 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,318 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,319 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-10 15:00:25,320 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-10 15:00:25,321 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,322 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,322 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,323 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,323 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-10 15:00:25,324 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-10 15:00:25,326 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-10-10 15:00:25,334 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-10-10 15:00:25,335 - Group['hdfs'] {}
2016-10-10 15:00:25,335 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-10-10 15:00:25,336 - FS Type:
2016-10-10 15:00:25,336 - Directory['/etc/hadoop'] {'mode': 0755}
2016-10-10 15:00:25,363 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-10 15:00:25,364 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-10-10 15:00:25,380 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-10-10 15:00:25,392 - Skipping Execute[('setenforce', '0')] due to not_if
2016-10-10 15:00:25,393 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-10-10 15:00:25,398 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-10-10 15:00:25,399 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-10-10 15:00:25,405 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-10-10 15:00:25,407 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-10-10 15:00:25,408 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-10-10 15:00:25,426 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-10 15:00:25,427 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-10-10 15:00:25,428 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-10-10 15:00:25,433 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-10-10 15:00:25,438 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-10-10 15:00:25,664 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2016-10-10 15:00:25,691 - call returned (0, 'spark-client - 2.5.0.0-1245')
2016-10-10 15:00:25,703 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-10-10 15:00:25,706 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-10-10 15:00:25,709 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-10 15:00:25,751 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-10-10 15:00:25,751 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-10 15:00:25,788 - checked_call returned (0, '')
2016-10-10 15:00:25,789 - Ensuring that hadoop has the correct symlink structure
2016-10-10 15:00:25,789 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-10 15:00:25,791 - Directory['/var/log/zeppelin'] {'owner': 'zeppelin', 'group': 'zeppelin', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-10-10 15:00:25,794 - XmlConfig['zeppelin-site.xml'] {'owner': 'zeppelin', 'group': 'zeppelin', 'conf_dir': '/etc/zeppelin/conf', 'configurations': ...}
2016-10-10 15:00:25,813 - Generating config: /etc/zeppelin/conf/zeppelin-site.xml
2016-10-10 15:00:25,813 - File['/etc/zeppelin/conf/zeppelin-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': None, 'encoding': 'UTF-8'}
2016-10-10 15:00:25,841 - File['/etc/zeppelin/conf/zeppelin-env.sh'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2016-10-10 15:00:25,843 - File['/etc/zeppelin/conf/shiro.ini'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2016-10-10 15:00:25,843 - File['/etc/zeppelin/conf/log4j.properties'] {'owner': 'zeppelin', 'content': ..., 'group': 'zeppelin'}
2016-10-10 15:00:25,844 - File['/etc/zeppelin/conf/hive-site.xml'] {'owner': 'zeppelin', 'content': StaticFile('/etc/spark/conf/hive-site.xml'), 'group': 'zeppelin'}
2016-10-10 15:00:25,845 - Execute[('chown', '-R', u'zeppelin:zeppelin', '/etc/zeppelin')] {'sudo': True}
2016-10-10 15:00:25,857 - HdfsResource['/user/zeppelin'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2016-10-10 15:00:25,860 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpotXk6W 2>/tmp/tmpA6ldGq''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:25,946 - call returned (0, '')
2016-10-10 15:00:25,949 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpWE0klB 2>/tmp/tmptOjvWT''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,037 - call returned (0, '')
2016-10-10 15:00:26,038 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmpFsD_aU 2>/tmp/tmp9DepAD''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,119 - call returned (0, '')
2016-10-10 15:00:26,121 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmp7sJNsy 2>/tmp/tmpuRPNXV''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,201 - call returned (0, '')
2016-10-10 15:00:26,203 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmp_ZOSs4 2>/tmp/tmpMngslF''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,287 - call returned (0, '')
2016-10-10 15:00:26,289 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging/application_1476105264579_0001?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmp59sajr 2>/tmp/tmpTv_PEm''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,376 - call returned (0, '')
2016-10-10 15:00:26,379 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpw6WwXu 2>/tmp/tmpDh5IPR''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,455 - call returned (0, '')
2016-10-10 15:00:26,458 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpQGEQgd 2>/tmp/tmpNLr0Bf''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,539 - call returned (0, '')
2016-10-10 15:00:26,542 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging/application_1476105264579_0001?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpjQcums 2>/tmp/tmpm4EODW''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,632 - call returned (0, '')
2016-10-10 15:00:26,634 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging/application_1476105264579_0001/__spark_conf__8200828060831578373.zip?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpG1inKg 2>/tmp/tmpw9s4Tt''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,717 - call returned (0, '')
2016-10-10 15:00:26,719 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging/application_1476105264579_0001/py4j-0.9-src.zip?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpoVK6P7 2>/tmp/tmpWe0oW4''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,817 - call returned (0, '')
2016-10-10 15:00:26,819 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging/application_1476105264579_0001/pyspark.zip?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpIWmxTg 2>/tmp/tmpqgSiQd''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,908 - call returned (0, '')
2016-10-10 15:00:26,910 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpNPdI9i 2>/tmp/tmpx9MbqL''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:26,988 - call returned (0, '')
2016-10-10 15:00:26,990 - HdfsResource['/user/zeppelin/test'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2016-10-10 15:00:26,991 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmppawLod 2>/tmp/tmpSnha8Z''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,063 - call returned (0, '')
2016-10-10 15:00:27,066 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmph_DSHa 2>/tmp/tmpQNjJOt''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,154 - call returned (0, '')
2016-10-10 15:00:27,156 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmp5KPymV 2>/tmp/tmp9lOMtH''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,226 - call returned (0, '')
2016-10-10 15:00:27,227 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUKccJS 2>/tmp/tmp4wVafe''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,303 - call returned (0, '')
2016-10-10 15:00:27,304 - HdfsResource['/apps/zeppelin'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2016-10-10 15:00:27,306 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpXn4aIi 2>/tmp/tmpx7cGcf''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,389 - call returned (0, '')
2016-10-10 15:00:27,391 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpBymEI3 2>/tmp/tmp8DHHtM''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,464 - call returned (0, '')
2016-10-10 15:00:27,467 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmpEr97ib 2>/tmp/tmpUnX0Ud''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,558 - call returned (0, '')
2016-10-10 15:00:27,559 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpfkdsLd 2>/tmp/tmpUPFxYR''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,646 - call returned (0, '')
2016-10-10 15:00:27,648 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpMh29gk 2>/tmp/tmpGhNoGa''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,732 - call returned (0, '')
2016-10-10 15:00:27,734 - HdfsResource['/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar', 'default_fs': 'hdfs://agent1.localdomain:8020', 'replace_existing_files': True, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'group': 'zeppelin', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute'], 'mode': 0444}
2016-10-10 15:00:27,735 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpEyZwb_ 2>/tmp/tmpL0hhCy''] {'logoutput': None, 'quiet': False}
2016-10-10 15:00:27,814 - call returned (0, '')
2016-10-10 15:00:27,815 - DFS file /apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar is identical to /usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar, skipping the copying
2016-10-10 15:00:27,816 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-10-10 15:00:27,822 - File['/etc/zeppelin/conf/interpreter.json'] {'content': ..., 'owner': 'zeppelin', 'group': 'zeppelin'}
2016-10-10 15:00:27,822 - Writing File['/etc/zeppelin/conf/interpreter.json'] because contents don't match
2016-10-10 15:00:27,824 - Execute['/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart >> /var/log/zeppelin/zeppelin-setup.log'] {'user': 'zeppelin'}
Command failed after 1 tries
/var/log/zeppelin/zeppelin-root-domain.log:
... INFO [2016-10-10 14:32:14,657] ({Thread-13} DependencyResolver.java[load]:100) - copy /usr/hdp/current/zeppelin-server/local-repo/org/tukaani/xz/1.0/xz-1.0.jar to /usr/hdp/current/zeppelin-server/local-repo/2BYUPU1YW
INFO [2016-10-10 14:32:14,681] ({main} ContextHandler.java[doStart]:744) - Started o.e.j.w.WebAppContext@41ee392b{/,file:/usr/hdp/2.5.0.0-1245/zeppelin/webapps/webapp/,AVAILABLE}{/usr/hdp/current/zeppelin-server/lib/zeppelin-web-0.6.0.2.5.0.0-1245.war}
WARN [2016-10-10 14:32:14,717] ({main} AbstractLifeCycle.java[setFailed]:212) - FAILED ServerConnector@5c20aab9{HTTP/1.1}{0.0.0.0:9995}: java.net.BindException: Address already in use
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
WARN [2016-10-10 14:32:14,722] ({main} AbstractLifeCycle.java[setFailed]:212) - FAILED org.eclipse.jetty.server.Server@5d5b5fa7: java.net.BindException: Address already in use
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
ERROR [2016-10-10 14:32:14,722] ({main} ZeppelinServer.java[main]:118) - Error while running jettyServer
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
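The BindException means some process is already listening on port 9995 even though Ambari believes Zeppelin is down, typically an orphaned Zeppelin JVM whose pid file was lost. A quick way to check (a sketch, assuming netstat is available on the host):

sudo netstat -tnlp | grep 9995      # which process is holding Zeppelin's port
ps -ef | grep "[Z]eppelinServer"    # the main class from the stack trace above
# if an orphaned Zeppelin JVM shows up, stop it (kill <pid>) before restarting from Ambari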
/var/log/zeppelin/zeppelin-zeppelin-domain.log
... WARN [2016-10-10 15:02:04,913] ({main} AbstractLifeCycle.java[setFailed]:212) - FAILED ServerConnector@691a7f8f{HTTP/1.1}{0.0.0.0:9995}: java.net.BindException: Address already in use
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
WARN [2016-10-10 15:02:04,914] ({main} AbstractLifeCycle.java[setFailed]:212) - FAILED org.eclipse.jetty.server.Server@161b062a: java.net.BindException: Address already in use
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
ERROR [2016-10-10 15:02:04,914] ({main} ZeppelinServer.java[main]:118) - Error while running jettyServer
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:116)
/var/log/zeppelin/zeppelin-zeppelin-domain.log
spark_version:1.6 detected for spark_home: /usr/hdp/current/spark-client
Zeppelin start [ OK ]
Zeppelin stop [ OK ]
Zeppelin start [ OK ]
Zeppelin is not running
Pid dir doesn't exist, create /var/run/zeppelin
Zeppelin start [ OK ]
Zeppelin process died [FAILED]
Zeppelin is not running
Zeppelin is not running
Pid dir doesn't exist, create /var/run/zeppelin
Zeppelin start [ OK ]
Zeppelin process died [FAILED]
Zeppelin is not running
Pid dir doesn't exist, create /var/run/zeppelin
Zeppelin start [ OK ]
Zeppelin process died [FAILED]
Zeppelin is not running
Zeppelin start [ OK ]
Zeppelin process died [FAILED]
Zeppelin is not running
Zeppelin start [ OK ]
Zeppelin process died [FAILED]
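The repeated "Pid dir doesn't exist, create /var/run/zeppelin" is consistent with /var/run being a tmpfs that is emptied at every reboot: the directory vanishes, and the zeppelin user is not allowed to recreate it. On a systemd-based OS, one possible fix (a sketch; it assumes systemd-tmpfiles manages /var/run on this distribution) is to declare the directory so it is recreated with the right owner at boot:

# /etc/tmpfiles.d/zeppelin.conf: recreate the pid dir at boot, owned by zeppelin
echo 'd /var/run/zeppelin 0755 zeppelin zeppelin -' | sudo tee /etc/tmpfiles.d/zeppelin.conf
sudo systemd-tmpfiles --create /etc/tmpfiles.d/zeppelin.conf   # apply now, without rebooting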
Created 10-12-2016 02:33 AM
I tried putting Zeppelin in maintenance mode, stopping the whole cluster, stopping the VMs, restarting the VMs, and restarting each service of the cluster. Everything is OK, and Zeppelin is running, but Ambari still says it is not. I have added the following log files, which I guess are a little cleaner:
stderr: /var/lib/ambari-agent/data/errors-285.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 312, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 185, in start
    + params.zeppelin_log_file, user=params.zeppelin_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart >> /var/log/zeppelin/zeppelin-setup.log' returned 1.
mkdir: cannot create directory ‘/var/run/zeppelin’: Permission denied
/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh: line 187: /var/run/zeppelin/zeppelin-zeppelin-master1.localdomain.pid: No such file or directory
cat: /var/run/zeppelin/zeppelin-zeppelin-master1.localdomain.pid: No such file or directory
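The last three stderr lines point at the root cause: the daemon script, running as zeppelin, cannot create its pid directory, so the pid file is never written and Ambari never sees the process as started. A minimal fix on the Zeppelin host (a sketch; path and ownership taken from the messages above):

# create the pid directory with the right owner so zeppelin-daemon.sh can write its pid file
sudo install -d -o zeppelin -g zeppelin -m 0755 /var/run/zeppelin
# then restart the component from Ambari, or by hand:
sudo -u zeppelin /usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart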
stdout: /var/lib/ambari-agent/data/output-285.txt
2016-10-11 12:46:23,904 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-10-11 12:46:23,906 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-10-11 12:46:23,908 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-11 12:46:23,942 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-10-11 12:46:23,943 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-11 12:46:23,977 - checked_call returned (0, '')
2016-10-11 12:46:23,979 - Ensuring that hadoop has the correct symlink structure
2016-10-11 12:46:23,979 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-11 12:46:24,135 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-10-11 12:46:24,139 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-10-11 12:46:24,142 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-11 12:46:24,176 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-10-11 12:46:24,176 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-11 12:46:24,206 - checked_call returned (0, '')
2016-10-11 12:46:24,207 - Ensuring that hadoop has the correct symlink structure
2016-10-11 12:46:24,207 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-11 12:46:24,209 - Group['livy'] {}
2016-10-11 12:46:24,212 - Group['spark'] {}
2016-10-11 12:46:24,212 - Group['zeppelin'] {}
2016-10-11 12:46:24,213 - Group['hadoop'] {}
2016-10-11 12:46:24,213 - Group['users'] {}
2016-10-11 12:46:24,214 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,215 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,217 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,218 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,219 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,220 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-11 12:46:24,222 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-11 12:46:24,223 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,224 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,225 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,226 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,227 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-11 12:46:24,229 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-11 12:46:24,231 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-10-11 12:46:24,240 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-10-11 12:46:24,240 - Group['hdfs'] {}
2016-10-11 12:46:24,241 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-10-11 12:46:24,242 - FS Type:
2016-10-11 12:46:24,242 - Directory['/etc/hadoop'] {'mode': 0755}
2016-10-11 12:46:24,266 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-11 12:46:24,267 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-10-11 12:46:24,283 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-10-11 12:46:24,293 - Skipping Execute[('setenforce', '0')] due to not_if
2016-10-11 12:46:24,294 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-10-11 12:46:24,297 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-10-11 12:46:24,298 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-10-11 12:46:24,306 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-10-11 12:46:24,309 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-10-11 12:46:24,310 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-10-11 12:46:24,326 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-11 12:46:24,327 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-10-11 12:46:24,328 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-10-11 12:46:24,333 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-10-11 12:46:24,338 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-10-11 12:46:24,566 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2016-10-11 12:46:24,602 - call returned (0, 'spark-client - 2.5.0.0-1245')
2016-10-11 12:46:24,623 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-10-11 12:46:24,628 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-10-11 12:46:24,630 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-11 12:46:24,668 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-10-11 12:46:24,669 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-11 12:46:24,707 - checked_call returned (0, '')
2016-10-11 12:46:24,708 - Ensuring that hadoop has the correct symlink structure
2016-10-11 12:46:24,709 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-11 12:46:24,711 - Directory['/var/log/zeppelin'] {'owner': 'zeppelin', 'group': 'zeppelin', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-10-11 12:46:24,714 - XmlConfig['zeppelin-site.xml'] {'owner': 'zeppelin', 'group': 'zeppelin', 'conf_dir': '/etc/zeppelin/conf', 'configurations': ...}
2016-10-11 12:46:24,733 - Generating config: /etc/zeppelin/conf/zeppelin-site.xml
2016-10-11 12:46:24,733 - File['/etc/zeppelin/conf/zeppelin-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': None, 'encoding': 'UTF-8'}
2016-10-11 12:46:24,753 - File['/etc/zeppelin/conf/zeppelin-env.sh'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2016-10-11 12:46:24,756 - File['/etc/zeppelin/conf/shiro.ini'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2016-10-11 12:46:24,757 - File['/etc/zeppelin/conf/log4j.properties'] {'owner': 'zeppelin', 'content': ..., 'group': 'zeppelin'}
2016-10-11 12:46:24,758 - File['/etc/zeppelin/conf/hive-site.xml'] {'owner': 'zeppelin', 'content': StaticFile('/etc/spark/conf/hive-site.xml'), 'group': 'zeppelin'}
2016-10-11 12:46:24,759 - Execute[('chown', '-R', u'zeppelin:zeppelin', '/etc/zeppelin')] {'sudo': True}
2016-10-11 12:46:24,772 - HdfsResource['/user/zeppelin'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2016-10-11 12:46:24,775 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpJPFdBx 2>/tmp/tmpprTDW5''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:24,860 - call returned (0, '')
2016-10-11 12:46:24,863 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpGLIKPK 2>/tmp/tmpdVBzy1''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:24,944 - call returned (0, '')
2016-10-11 12:46:24,946 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmpJJmniV 2>/tmp/tmpN0zANm''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,031 - call returned (0, '')
2016-10-11 12:46:25,033 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpyP_Tqe 2>/tmp/tmpoRV5Ek''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,117 - call returned (0, '')
2016-10-11 12:46:25,120 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmp0kpuuj 2>/tmp/tmpfWVoZc''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,201 - call returned (0, '')
2016-10-11 12:46:25,202 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash/161011120000?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpYIflKR 2>/tmp/tmpegjMwd''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,281 - call returned (0, '')
2016-10-11 12:46:25,283 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash/161011120000/tmp?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpRmjtT_ 2>/tmp/tmpNqu696''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,361 - call returned (0, '')
2016-10-11 12:46:25,364 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmp7rMEJQ 2>/tmp/tmpQYHhWr''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,444 - call returned (0, '')
2016-10-11 12:46:25,445 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmp6ptbpj 2>/tmp/tmpP4O3EZ''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,525 - call returned (0, '')
2016-10-11 12:46:25,527 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpgpQ8k4 2>/tmp/tmpDry8Gq''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,605 - call returned (0, '')
2016-10-11 12:46:25,607 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash/161011120000?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpSjmaRR 2>/tmp/tmpEMPR56''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,697 - call returned (0, '')
2016-10-11 12:46:25,699 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash/161011120000/tmp?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpILL0Z5 2>/tmp/tmp2hmJj8''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,788 - call returned (0, '')
2016-10-11 12:46:25,790 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash/161011120000/tmp/GEM-GHEC-v1.txt?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpZAmoJk 2>/tmp/tmpecnJzE''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,876 - call returned (0, '')
2016-10-11 12:46:25,878 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.Trash/161011120000/tmp/isc-gem-cat.csv?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpKLrsxA 2>/tmp/tmp53NVsj''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:25,974 - call returned (0, '')
2016-10-11 12:46:25,975 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/.sparkStaging?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpBsKeKA 2>/tmp/tmpSXpmFy''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,054 - call returned (0, '')
2016-10-11 12:46:26,057 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpVHWnDB 2>/tmp/tmprclE1x''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,133 - call returned (0, '')
2016-10-11 12:46:26,135 - HdfsResource['/user/zeppelin/test'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2016-10-11 12:46:26,136 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpzdMOdH 2>/tmp/tmpB7g1_Z''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,197 - call returned (0, '')
2016-10-11 12:46:26,199 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmp8NwSGf 2>/tmp/tmptRfseG''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,277 - call returned (0, '')
2016-10-11 12:46:26,279 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmptV5127 2>/tmp/tmprkOUyK''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,370 - call returned (0, '')
2016-10-11 12:46:26,373 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/user/zeppelin/test?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpph8C_j 2>/tmp/tmpBRC8aT''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,452 - call returned (0, '')
2016-10-11 12:46:26,453 - HdfsResource['/apps/zeppelin'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2016-10-11 12:46:26,454 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmplqGspH 2>/tmp/tmp8pn7JO''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,537 - call returned (0, '')
2016-10-11 12:46:26,539 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpqVHi2z 2>/tmp/tmppdLJCQ''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,618 - call returned (0, '')
2016-10-11 12:46:26,620 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=GETCONTENTSUMMARY&user.name=hdfs'"'"' 1>/tmp/tmpIwZvVR 2>/tmp/tmpe7Bms5''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,703 - call returned (0, '')
2016-10-11 12:46:26,705 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin?op=LISTSTATUS&user.name=hdfs'"'"' 1>/tmp/tmpf80mOR 2>/tmp/tmp1XbrtN''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,783 - call returned (0, '')
2016-10-11 12:46:26,785 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar?op=SETOWNER&user.name=hdfs&owner=zeppelin&group='"'"' 1>/tmp/tmpuj6PUP 2>/tmp/tmpeTV70m''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,869 - call returned (0, '')
2016-10-11 12:46:26,870 - HdfsResource['/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar', 'default_fs': 'hdfs://agent1.localdomain:8020', 'replace_existing_files': True, 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'group': 'zeppelin', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute'], 'mode': 0444}
2016-10-11 12:46:26,873 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://agent1.localdomain:50070/webhdfs/v1/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmprls06G 2>/tmp/tmpfXTFHC''] {'logoutput': None, 'quiet': False}
2016-10-11 12:46:26,956 - call returned (0, '')
2016-10-11 12:46:26,957 - DFS file /apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar is identical to /usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-0.6.0.2.5.0.0-1245.jar, skipping the copying
2016-10-11 12:46:26,958 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://agent1.localdomain:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-10-11 12:46:26,965 - File['/etc/zeppelin/conf/interpreter.json'] {'content': ..., 'owner': 'zeppelin', 'group': 'zeppelin'}
2016-10-11 12:46:26,966 - Writing File['/etc/zeppelin/conf/interpreter.json'] because contents don't match
2016-10-11 12:46:26,966 - Execute['/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart >> /var/log/zeppelin/zeppelin-setup.log'] {'user': 'zeppelin'}
Command failed after 1 tries
Created 10-12-2016 02:33 AM
I did a chmod o+r on /var/run.
I'm now getting this error:
resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart >> /var/log/zeppelin/zeppelin-setup.log' returned 1
Which, when googled, leads to this already open thread:
Created 12-05-2016 08:21 AM
Compare the value in /var/run/zeppelin/zeppelin.pid with the PID of the Java process that actually runs Zeppelin.
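For example (a sketch; the actual pid file name includes the user and hostname, as in the stderr posted above):

cat /var/run/zeppelin/zeppelin-zeppelin-*.pid   # the pid the daemon script recorded
ps -ef | grep "[Z]eppelinServer"                # the JVM actually listening on 9995
# if they differ, or the file is missing, Ambari reports Zeppelin as down even though it is up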
Created 12-05-2016 10:50 AM
Thanks for your reply.
The fact is, I had already searched in that direction. After several tries, this behaviour of Zeppelin (having to hunt for the PID and sometimes adjust directory access rights just to get it running again) eventually led, from one day to the next, to the Zeppelin service not responding at all, with a Jetty 503 error that could not be fixed by redeploying the JAR file to clear what several forum threads describe as a corrupted configuration file.
My next step to get past all these issues is to stop using the Ambari-deployed Zeppelin for the moment and do a hand-made install of the latest Zeppelin, which in addition allows using Spark 2 (roughly as sketched below).
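(For what it's worth, a standalone install is just a matter of unpacking the binary distribution and pointing it at directories the zeppelin user owns; a sketch, with the version and mirror URL as assumptions, so pick whatever is current:)

wget https://archive.apache.org/dist/zeppelin/zeppelin-0.6.2/zeppelin-0.6.2-bin-all.tgz
sudo tar -xzf zeppelin-0.6.2-bin-all.tgz -C /opt
export ZEPPELIN_PID_DIR=/opt/zeppelin-0.6.2-bin-all/run    # keep the pid file in a directory this user owns
/opt/zeppelin-0.6.2-bin-all/bin/zeppelin-daemon.sh start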
All this is really tricky, and I am starting to feel that Ambari is a good product only for people who are already used to installing and operating each component of the Hadoop stack by hand; for them it surely is a huge time saver.
Regards.