Member since: 09-29-2015
Posts: 155
Kudos Received: 205
Solutions: 18
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 8533 | 02-17-2017 12:38 PM |
| | 1372 | 11-15-2016 03:56 PM |
| | 1918 | 11-11-2016 05:27 PM |
| | 15659 | 11-11-2016 12:16 AM |
| | 3144 | 11-10-2016 06:15 PM |
11-10-2015
03:39 PM
@Simon I don't believe there is an Apache-friendly PMML evaluator available right now; JPMML and Openscoring are under a GNU license. If you find one, please share...
11-10-2015
03:34 PM
@bplath@hortonworks.com Cascading uses JPMML under the hood, if I remember correctly, so you would still hit the same licensing issue.
11-05-2015
05:03 AM
I am trying to use a MySQL JDBC jar in Zeppelin and from the CLI, and I am getting errors.

```
%dep
z.load("/var/lib/ambari-server/resources/mysql-connector-java-5.1.17.jar")

val url = "jdbc:mysql://localhost:3306/hive"
val prop = new java.util.Properties
prop.setProperty("user", "root")
prop.setProperty("password", "****")
val people = sqlContext.read.jdbc(url, "version", prop)
```

But I am getting an exception:

```
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/hive
    at java.sql.DriverManager.getConnection(DriverManager.java:596)
    at java.sql.DriverManager.getConnection(DriverManager.java:187)
```

I also tried registering the jar from the CLI, as described in this blog: http://hortonworks.com/hadoop/zeppelin/#section_3 — when the jar is on the node where Zeppelin is running, add the spark.files property to SPARK_HOME/conf/spark-defaults.conf, for example:

```
spark.files /path/to/my.jar
```

This is my spark-defaults.conf, which I modified using Ambari:

```
spark.driver.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.files /var/lib/ambari-server/resources/mysql-connector-java-5.1.17.jar
spark.history.kerberos.keytab none
```

When I run the same code, I get the same error as above: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/hive

The jar file does exist at that path:

```
[root@sandbox conf]# find / -iname "mysql-connector-java*"
/usr/hdp/2.3.2.0-2950/sqoop/lib/mysql-connector-java.jar
/usr/hdp/2.3.2.0-2950/hive/lib/mysql-connector-java.jar
/usr/hdp/2.3.2.0-2950/hbase/lib/mysql-connector-java.jar
/usr/hdp/2.3.2.0-2950/knox/ext/mysql-connector-java.jar
/usr/hdp/2.3.2.0-2950/hadoop/lib/mysql-connector-java.jar
/usr/hdp/2.3.2.0-2950/hadoop-yarn/lib/mysql-connector-java.jar
/usr/hdp/2.3.2.0-2950/ranger-admin/ews/lib/mysql-connector-java.jar
/usr/share/java/mysql-connector-java-5.1.17.jar
/usr/share/java/mysql-connector-java-5.1.31-bin.jar
/usr/share/java/mysql-connector-java.jar
/var/lib/ambari-server/resources/mysql-connector-java-5.1.17.jar
/var/lib/ambari-agent/tmp/mysql-connector-java.jar
/etc/maven/fragments/mysql-connector-java
```
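(Editor's note, not from the original thread: a common workaround for "No suitable driver found" — assuming the connector jar really is on the driver classpath — is to name the driver class explicitly rather than relying on DriverManager discovery. A minimal sketch:)

```scala
// Force-register the MySQL driver so DriverManager can find it, and also
// pass it via the "driver" connection property, which the Spark JDBC
// source honors in recent 1.x releases (an assumption for this setup).
Class.forName("com.mysql.jdbc.Driver")
val url = "jdbc:mysql://localhost:3306/hive"
val prop = new java.util.Properties
prop.setProperty("user", "root")
prop.setProperty("password", "****")                 // placeholder, as in the post
prop.setProperty("driver", "com.mysql.jdbc.Driver")  // explicit driver class
val df = sqlContext.read.jdbc(url, "version", prop)
```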
Labels:
- Apache Spark
- Apache Zeppelin
11-05-2015
03:30 AM
That's what I am doing now: I have Sandbox 2.3.2 with the out-of-the-box Ambari Zeppelin pointing to Spark 1.4, and I am now setting up a local Zeppelin to point to 1.5.1.
11-05-2015
03:25 AM
After talking to @Ali Bajwa: it is not possible to point to a different interpreter at this point, since Zeppelin is compiled against a specific version of Spark, so you can't dynamically choose which Spark version to use. I filed a JIRA for this feature: https://issues.apache.org/jira/browse/ZEPPELIN-392
11-05-2015
03:25 AM
@nshawa@hortonworks.com After talking to @Ali Bajwa: it is not possible to point to a different interpreter at this point, since Zeppelin is compiled against a specific version of Spark, so you can't dynamically choose which Spark version to use. I filed a JIRA for this feature: https://issues.apache.org/jira/browse/ZEPPELIN-392
11-05-2015
01:51 AM
Is it possible to clone a Zeppelin interpreter with all its settings, either programmatically (REST?) or through the UI (I did not see that as an option)? I created a new one manually, but wanted to check whether there is a better way. I want to have two or more interpreters set up pointing at different versions of Spark (1.4 and 1.5), but I can easily see this extending to other components.
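(Editor's note, not from the original thread: Zeppelin exposes an interpreter-setting REST API in later releases; assuming a build that includes it, a clone could be sketched like this:)

```
# List the existing interpreter settings as JSON (assumes the REST API is available)
curl http://localhost:8080/api/interpreter/setting

# Save the spark setting's JSON to a file, edit "name" and the Spark-version
# properties, then create the copy (file name here is hypothetical):
curl -X POST -H "Content-Type: application/json" \
     -d @spark-1.5-setting.json \
     http://localhost:8080/api/interpreter/setting
```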
Labels:
- Apache Zeppelin
11-04-2015
08:15 PM
@Ali Bajwa @Simon Elliston Ball Does that mean I have to restart my Zeppelin daemon first to pick up the new 3rd-party packages? I am getting this error now: `Must be used before SparkInterpreter (%spark) initialized`. I thought that when I created a new notebook I would get a new context, but it looks global. Am I missing something?
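(Editor's note, not from the original thread: in Zeppelin the SparkContext is shared across notebooks, and %dep can only add artifacts before the Spark interpreter starts. A minimal sketch of the usual sequence, after restarting the Spark interpreter from the Interpreter page:)

```scala
%dep
// Run this paragraph first, before any %spark paragraph in the session.
// Once %spark has initialized the shared context, %dep fails with
// "Must be used before SparkInterpreter (%spark) initialized".
z.load("com.databricks:spark-csv_2.11:1.1.0")
```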
11-04-2015
07:12 PM
2 Kudos
Is there any difference between loading new 3rd-party packages via the CLI and via Zeppelin, if I am using Zeppelin as the notebook?

1) CLI:

```
spark-shell --packages com.databricks:spark-csv_2.11:1.1.0
```

2) Zeppelin:

```scala
// add artifact recursively, except for the comma-separated GroupID:ArtifactId list
z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")
```
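(Editor's note, for illustration: the Zeppelin %dep equivalent of the spark-shell --packages example above would be loading the same Maven coordinates:)

```scala
%dep
// Pulls the same artifact and its transitive dependencies from the Maven repository,
// matching what --packages does on the spark-shell command line.
z.load("com.databricks:spark-csv_2.11:1.1.0")
```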
Labels:
- Apache Spark
- Apache Zeppelin
11-04-2015
03:45 PM
@Ali Bajwa In my directory I have a notebooks.zip. I assume I can delete it before sharing the directory out to a client, and then zip up the /opt/incubator-zeppelin/notebook/ directory?

```
[root@sandbox ~]# ll /opt/incubator-zeppelin/notebook/
2A94M5J1Z/  2ANTDG878/  2AS5TY6AQ/  2B21B3AYC/  2B61M8WDX/
2ANT56EHN/  2APFTN3NY/  2AZHT34CH/  2B3QZGE6B/  notebooks.zip
```
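(Editor's note, a sketch of one way to package the directory, assuming the bundled notebooks.zip is indeed safe to drop — the thread does not confirm that:)

```
# Remove the pre-bundled archive, then zip the notebook directory for sharing
rm /opt/incubator-zeppelin/notebook/notebooks.zip
cd /opt/incubator-zeppelin
zip -r notebooks-export.zip notebook/
```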