Member since: 09-16-2016
6 Posts
1 Kudos Received
0 Solutions
04-08-2020
03:16 AM
I recently ran into an issue with Zeppelin suddenly failing to restart. Upon further inspection, the log file never received any additional data as I tailed it. Looking at the Ambari log, I found it was failing because of a missing interpreter.json file in HDFS:
2019-08-29 17:57:35,701 - get_user_call_output returned (0, u'{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /user/zeppelin/conf/interpreter.json"}}404', u'')
2019-08-29 17:57:35,703 - Creating new file /user/zeppelin/conf/interpreter.json in DFS
2019-08-29 17:57:35,704 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/etc/zeppelin/conf/interpreter.json -H '"'"'Content-Type: application/octet-stream'"'"' --negotiate -u : '"'"'http://removedhostname.info:50070/webhdfs/v1/user/zeppelin/conf/interpreter.json?op=CREATE&overwrite=True'"'"' 1>/tmp/tmp19YMAL 2>/tmp/tmp5ggwKg''] {'logoutput': None, 'quiet': False}
The file was supposed to be in HDFS, but upon inspection it no longer existed there. It did exist on the host at /usr/hdp/current/zeppelin-server/conf/interpreter.json, which is a symbolic link to /etc/zeppelin/conf/interpreter.json:
2019-08-29 17:57:33,930 - Writing File['/etc/zeppelin/conf/interpreter.json'] because contents don't match
2019-08-29 17:57:34,068 - HdfsResource['/user/zeppelin/conf/interpreter.json'] {'security_enabled': True, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://REMOVED', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'hdfs-REMOVED@REMOVEDREALM', 'user': 'hdfs', 'action': ['delete_on_execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'file'}
I quickly copied the files out to another location, just to have a backup while I tried a few things. After spending about three hours working on it, I had a conversation with Jim Jones, who had seen a different but similar issue on another cluster. He mentioned a configuration variable that would be worth a shot:
zeppelin.interpreter.config.upgrade=false
I then took my backup copy and pushed it into HDFS, restarted Zeppelin, and so far it is working OK. I am still not 100% certain what caused the issue in Ambari to begin with, but this may get you out of trouble when you are in a pinch.
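For reference, the fix boiled down to something like the following. This is only a rough sketch of the steps I took; the paths are the defaults from the logs above, and on a kerberized cluster you will need a valid ticket for a principal that can write to /user/zeppelin.
# keep a local backup of the interpreter config before touching anything
cp /etc/zeppelin/conf/interpreter.json /tmp/interpreter.json.bak
# in Ambari, under Zeppelin > Configs, set zeppelin.interpreter.config.upgrade=false
# push the backup back into HDFS (kinit first on a kerberized cluster),
# then restart Zeppelin from Ambari
hdfs dfs -put -f /tmp/interpreter.json.bak /user/zeppelin/conf/interpreter.json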
Kudos to Jim Jones!
12-21-2019
06:33 AM
1 Kudo
You need to find the parent entity that still has the association and delete the references. Example: I have a Hive table with an SSN column that had two tags associated with it. Even after the association was removed, the history is retained, so when you try to delete the classification you get an error message: "Given type {TagName} has references." In this case I know a hive_column held the reference, and I need to find the proper GUID in order to delete that reference and get this test setup out of my UI.
First, search for the items above in the Atlas UI. Check the option Show historical entities in the event yours has been deleted. The entity displayed in the UI is the one I am after.
Next, open the developer tools in Chrome; I generally clear out the network history first to reduce any confusion about what I am looking for. Then click on ssn, or whichever entity was associated with the classification. The URL of the GET request is shown in the panel, and the GUID in that URL is what you are after.
Delete the classification from that GUID:
curl -iv -u tkreutzer \
  -X DELETE http://yourhost:21000/api/atlas/v2/entity/guid/ec8a34c7-db67-41b8-a14c-32a19d2166bf/classification/SSNHR
Now you can delete the classification from the UI, provided there are no other associations. If there are, rinse and repeat for each hive_column. I will probably try to figure out a way to do this via REST with Python later, in a way that finds all of the associated GUIDs for us (a rough sketch of that idea is below). Hope this helps... Cheers
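A rough shell sketch of that REST idea, using plain curl and jq instead of Python; the host, user, and SSNHR classification name are just the placeholders from above, and the Atlas v2 basic-search endpoint is assumed to be enabled:
ATLAS=http://yourhost:21000
TAG=SSNHR
# list every entity still carrying the classification (including deleted ones),
# then drop the classification from each GUID
curl -s -u tkreutzer "$ATLAS/api/atlas/v2/search/basic?classification=$TAG&excludeDeletedEntities=false" \
  | jq -r '.entities[]?.guid' \
  | while read -r guid; do
      curl -s -u tkreutzer -X DELETE "$ATLAS/api/atlas/v2/entity/guid/$guid/classification/$TAG"
    done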
07-18-2019
04:36 PM
I did the following. I created a symlink for hbase-site.xml on each edge node:
ln -s /etc/hbase/conf/hbase-site.xml /etc/spark2/conf/hbase-site.xml
Then I started spark-shell with the following params:
spark-shell --driver-memory 5g --jars /usr/hdp/2.6.5.0-292/shc/shc-core-1.1.0.2.6.5.0-292.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar
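A quick sanity check before launching spark-shell, in case it helps; the paths are the ones from my cluster above and will differ with your HDP version:
# confirm the symlink landed and the SHC jar is where you expect it on this edge node
ls -l /etc/spark2/conf/hbase-site.xml
ls /usr/hdp/2.6.5.0-292/shc/shc-core-*.jar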
07-17-2019
11:27 AM
Assuming you have the Hortonworks Hive ODBC driver installed, the following should work. One other note: this goes through Knox to LLAP.
$s_ID = "youruser"
$s_PWD = "yourpassword"
$conn_string = "DRIVER={Hortonworks Hive ODBC Driver};ThriftTransport=2;SSL=1;Host=yourhost.com;Port=8443;Schema=yourdb;HiveServerType=2;AuthMech=3;HTTPPath=/gateway/default/llap;CAIssuedCertNamesMismatch=1;AllowSelfSignedServerCert=1;SSP_mapreduce.job.queuename=somequeue;SSP_tez.queue.name=somequeue;UseNativeQuery=1;UID=" + $s_ID + ";PWD=" + $s_PWD + ";"
$conn = New-Object System.Data.Odbc.OdbcConnection
$sql = "Show Tables"
$conn.ConnectionString = $conn_string
$conn.Open()
$cmd = New-Object System.Data.Odbc.OdbcCommand
$cmd.Connection = $conn
$cmd.CommandText = $sql
$execute = $cmd.ExecuteReader()
while ( $execute.read() )
{
$execute.GetValue(0)
}
# close the reader and the connection when done
$execute.Close()
$conn.Close()