I recently ran into an issue where Zeppelin suddenly failed to restart. Tailing the Zeppelin log showed that nothing new was being written to it, so I looked at the Ambari log instead and found that the restart was failing because of a missing interpreter.json file:

2019-08-29 17:57:35,701 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /user/zeppelin/conf/interpreter.json"}}404', u'')
2019-08-29 17:57:35,703 - Creating new file /user/zeppelin/conf/interpreter.json in DFS
2019-08-29 17:57:35,704 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/etc/zeppelin/conf/interpreter.json -H '"'"'Content-Type: application/octet-stream'"'"' --negotiate -u : '"'"'http://removedhostname.info:50070/webhdfs/v1/user/zeppelin/conf/interpreter.json?op=CREATE&overwrite=True'"'"' 1>/tmp/tmp19YMAL 2>/tmp/tmp5ggwKg''] {'logoutput': None, 'quiet': False}
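
Before troubleshooting further, it is worth confirming for yourself whether the file is really gone from HDFS. On a Kerberized cluster a quick check looks something like the sketch below; the keytab and principal are the ones that appear in the Ambari output further down, so adjust them to your environment:

# Authenticate as the hdfs user, then look for the missing file in HDFS
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-REMOVED@REMOVEDREALM
hdfs dfs -ls /user/zeppelin/conf/interpreter.json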

The file was supposed to be in HDFS, but upon inspection it no longer existed there. It did still exist on the host at /usr/hdp/current/zeppelin-server/conf/interpreter.json, which is a symbolic link to /etc/zeppelin/conf/interpreter.json:

2019-08-29 17:57:33,930 - Writing File['/etc/zeppelin/conf/interpreter.json'] because contents don't match
2019-08-29 17:57:34,068 - HdfsResource['/user/zeppelin/conf/interpreter.json'] {'security_enabled': True, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://REMOVED', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'hdfs-REMOVED@REMOVEDREALM', 'user': 'hdfs', 'action': ['delete_on_execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file'}
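
If you hit the same thing, it is worth confirming that the local file and its symlink are still intact, and keeping a copy somewhere safe before experimenting. Something along these lines (the /tmp backup location is only an example):

# Confirm the symlink and the file it points to
ls -l /usr/hdp/current/zeppelin-server/conf/interpreter.json
ls -l /etc/zeppelin/conf/interpreter.json

# Take a backup copy before changing anything
mkdir -p /tmp/zeppelin-conf-backup
cp -p /etc/zeppelin/conf/interpreter.json /tmp/zeppelin-conf-backup/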

I quickly copied the files out to another location just to have a backup while I tried a few things. After spending about three hours on it, I had a conversation with Jim Jones, who had seen a similar (though not identical) issue on another cluster. He mentioned a configuration variable that would be worth a shot:

zeppelin.interpreter.config.upgrade=false
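
In an Ambari-managed cluster this gets set through the Zeppelin service configuration. I have not verified exactly where it surfaces on every stack version, but if it is rendered into zeppelin-site.xml on the Zeppelin host, a quick grep will confirm the value took effect after saving the config and restarting:

# Check the deployed Zeppelin config for the property (path assumes the default conf location)
grep -A1 'zeppelin.interpreter.config.upgrade' /etc/zeppelin/conf/zeppelin-site.xml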

I then took my backup copy, pushed it back into HDFS, and restarted Zeppelin; so far it is working fine. I am still not 100% certain what caused the issue in Ambari in the first place, but this may get you out of a bind when you are in a pinch.
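
If you need to do the same, pushing the backup back into HDFS looks roughly like this; the keytab, principal, and backup path are the same example values used above, so substitute your own:

# Authenticate, then restore the backed-up interpreter.json into HDFS
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-REMOVED@REMOVEDREALM
hdfs dfs -mkdir -p /user/zeppelin/conf
hdfs dfs -put -f /tmp/zeppelin-conf-backup/interpreter.json /user/zeppelin/conf/interpreter.json
hdfs dfs -chown zeppelin /user/zeppelin/conf/interpreter.json

# Restart Zeppelin from Ambari once the file is back in place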

 
Kudos to Jim Jones!
