Hi All, I recently ran into the following issue. We wanted to dedicate one of our Zeppelin nodes to a different agency, but any notebook they created was synced to every node after a restart, because we use HDFS as central storage: all Zeppelin nodes pick up notebooks and interpreter settings from a single HDFS path. To avoid this, we isolated the settings and configurations by switching Zeppelin to local storage, so each node acts independently. The following steps may be helpful if anyone is facing the same situation. Thanks.
Verify the procedure in a lower environment before executing it in production.
If any of the settings are missing or incorrect, add or update them to the values above.
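For reference, since this excerpt does not show the values it refers to: in Apache Zeppelin, the switch from HDFS-backed to local storage is usually made through properties in zeppelin-site.xml (edited via Ambari in an HDP-style cluster). The property names below come from upstream Apache Zeppelin; the notebook directory path is an illustrative assumption, so verify both against your distribution's version before applying:

```xml
<!-- zeppelin-site.xml: store notebooks on the local filesystem instead of HDFS -->
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.VFSNotebookRepo</value>
</property>
<property>
  <name>zeppelin.notebook.dir</name>
  <value>/var/lib/zeppelin/notebook</value>  <!-- assumed local path; adjust to your host -->
</property>
<!-- keep interpreter settings (interpreter.json) on local disk as well -->
<property>
  <name>zeppelin.config.storage.class</name>
  <value>org.apache.zeppelin.storage.LocalConfigStorage</value>
</property>
```

With these set, each Zeppelin node reads and writes its own notebooks and interpreter settings locally rather than from the shared HDFS path.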
In the Zeppelin UI, check the Interpreters page for duplicate entries. If any duplicates exist:
Back up and then delete the interpreter.json file from HDFS (/user/zeppelin/conf/interpreter.json) and from the local filesystem on the Zeppelin server host (/var/lib/zeppelin/conf/interpreter.json).
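The backup-and-delete step can be sketched as shell commands run on the Zeppelin server host. The paths are the ones given above; the hdfs CLI and sufficient permissions (e.g. running as the zeppelin or hdfs user) are assumed:

```sh
# Timestamp suffix so repeated backups don't overwrite each other
TS=$(date +%Y%m%d-%H%M%S)

# Back up, then remove, the copy in HDFS
hdfs dfs -cp /user/zeppelin/conf/interpreter.json "/user/zeppelin/conf/interpreter.json.bak-$TS"
hdfs dfs -rm /user/zeppelin/conf/interpreter.json

# Back up, then remove, the copy on the local filesystem
cp /var/lib/zeppelin/conf/interpreter.json "/var/lib/zeppelin/conf/interpreter.json.bak-$TS"
rm /var/lib/zeppelin/conf/interpreter.json
```

Keeping the backups lets you restore interpreter.json if custom settings turn out to be missing after the restart.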
Restart the Zeppelin service in Ambari.
In the Zeppelin UI, confirm that the duplicate entries no longer exist.
If any custom interpreter settings are missing, add them again from the Interpreters page.
Verify that your existing notebooks are still available.
This article was contributed by an external user. The steps may not have been verified by Cloudera, may not apply to all use cases, and may be specific to a particular distribution. Follow with caution and at your own risk. If needed, raise a support case to get confirmation.