
Zeppelin error on restart

Expert Contributor


I get the following error when restarting Zeppelin through Ambari on HDP 2.5. Nothing has changed since the last time it was running.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/", line 330, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 280, in execute
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 720, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/", line 184, in start
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/", line 234, in update_kerberos_properties
    config_data = self.get_interpreter_settings()
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/", line 209, in get_interpreter_settings
    config_data = json.loads(config_content)
  File "/usr/lib/python2.7/json/", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/", line 384, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
2017-03-15 22:01:28,144 - call returned (0, '')
2017-03-15 22:01:28,145 - DFS file /apps/zeppelin/zeppelin-spark-dependencies- is identical to /usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-, skipping the copying
2017-03-15 22:01:28,145 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}

Command failed after 1 tries
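The traceback bottoms out in json.loads(), which suggests the file Ambari read (Zeppelin's interpreter settings) was empty or truncated rather than there being a bug in the restart script itself. A minimal sketch of the failure, assuming an empty file:

```python
import json

# A zero-byte interpreter.json reproduces this failure: Ambari reads the
# file, gets an empty string, and json.loads() raises ValueError because
# there is no JSON object to decode. (The exact message differs between
# Python 2 and 3, but both raise ValueError.)
try:
    json.loads("")
except ValueError as exc:
    print("json.loads failed: %s" % exc)
```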


Expert Contributor

I found the problem to be with the interpreter.json file that had somehow become corrupted/empty.



Expert Contributor

I may be preaching the wrong religion and not helping you directly here: I moved away from the Ambari-managed Zeppelin and installed it manually on a client node instead. Not a big deal, and I have full control over it now. Good luck.

Expert Contributor

I found the problem to be with the interpreter.json file that had somehow become corrupted/empty.


What was your solution to fixing the corrupted interpreter.json?

I ran into this same issue and was able to resolve it in the following manner:

My issue arose when my namenode (which is running Ambari and Zeppelin) ran out of disk space. This started a chain reaction in Ambari where services started dropping due to the inability to write data (logs) to the local filesystem.

After freeing up some space in the local filesystem, the failed services became healthy again in Ambari once their health checks returned successful statuses. Zeppelin was then the only one not working, and restarting the service didn't go through -- the error message was the same as the original poster's:

ValueError: No JSON object could be decoded

To resolve this, I went to the /etc/zeppelin/conf directory and noticed that the interpreter.json file was 0 bytes. This file contains all the interpreter settings.

After renaming this file with the suffix .bkp, I restarted the Zeppelin service in Ambari and the interpreter.json file was repopulated. The ownership of the regenerated file did not match the others in the directory, so I needed to chown it to the appropriate owner.

Note: after interpreter.json is corrupted and then repopulated, any interpreter changes made before the corruption are lost, so you will need to add them again in Zeppelin.

Also, the notebook-authentication.json file in the same folder can sometimes become corrupted in the same way. That file, however, is not repopulated on service restart; it contains interpreter-specific authentication information.