Member since: 09-29-2015
Posts: 122
Kudos Received: 159
Solutions: 26
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 6725 | 11-12-2016 12:32 AM |
 | 1926 | 10-05-2016 08:08 PM |
 | 2645 | 08-02-2016 11:29 PM |
 | 23362 | 06-24-2016 11:46 PM |
 | 2066 | 05-25-2016 11:12 PM |
05-19-2016
04:40 AM
1 Kudo
If you have 256 GB per node, leave out at least 2 GB and 1 core for the OS, more if something else is running on the node. Then start with 5 cores and 30 GB per executor, which works out to about 7 executors per node.
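For illustration, here is a minimal sketch of how that starting point maps onto standard Spark executor settings; the property names are standard Spark configuration, but the values are just the rough numbers above, so treat them as a starting point to tune:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Rough starting point from the sizing above: 5 cores and 30 GB per executor,
// about 7 executors per 256 GB node after reserving ~2 GB and 1 core for the OS.
val conf = new SparkConf()
  .setAppName("ExecutorSizingSketch")
  .set("spark.executor.cores", "5")
  .set("spark.executor.memory", "30g")
  // spark.executor.instances is cluster-wide, so multiply the per-node
  // estimate (about 7) by the number of worker nodes.
  .set("spark.executor.instances", "7")

val sc = new SparkContext(conf)
```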
05-18-2016
09:54 PM
2 Kudos
Please see the Running Spark in Production session from Hadoop Summit, Dublin, in particular the section on performance tuning; the slides and video cover executor selection.
05-17-2016
06:46 PM
7 Kudos
You only need to set up the Spark Thrift Server if you need to provide access to SparkSQL via JDBC or ODBC. If you only want to use SparkSQL programmatically (by submitting a Spark app with spark-submit) or through spark-shell, you don't need the Spark Thrift Server.
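As a rough illustration of the programmatic path (Spark 1.x style, no Thrift Server involved; the object, column, and table names below are just placeholders for this sketch):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SqlWithoutThrift {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SqlWithoutThrift"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // A tiny in-memory table, registered only for this application.
    val people = sc.parallelize(Seq(("alice", 34), ("bob", 29))).toDF("name", "age")
    people.registerTempTable("people")

    // SQL runs inside the app itself; no JDBC/ODBC endpoint is exposed.
    sqlContext.sql("SELECT name FROM people WHERE age > 30").show()
    sc.stop()
  }
}
```

You can submit this with spark-submit, or paste the body into spark-shell, and get the same result without any Thrift Server running.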
05-17-2016
06:44 PM
See https://github.com/vinayshukla/SparkDemo1 for an example of a Spark app with Maven packaging, built against HDP. You can change the Spark version in the pom to the version you want to use, and then submit the app with spark-submit to run it on your cluster.
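As a rough sketch (not the actual contents of that repo), this is the kind of entry point such a Maven-packaged app would have; the package and class names here are hypothetical, and you would run the built jar with spark-submit --class demo.SimpleApp <your-jar>:

```scala
package demo  // hypothetical package name, just for this sketch

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SimpleApp"))

    // A trivial word-count-style job so the packaged jar has something to run.
    val counts = sc.parallelize(Seq("a", "b", "a")).map(w => (w, 1)).reduceByKey(_ + _)
    counts.collect().foreach(println)

    sc.stop()
  }
}
```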
04-26-2016
11:07 PM
Try restarting the Spark interpreter within Zeppelin; if that doesn't work, restart Zeppelin itself. zeppelin-daemon.sh restart will restart Zeppelin.
04-23-2016
12:41 AM
Yes, with the upcoming HDP 2.4.2 release, the final Zeppelin TP will have Kerberos support both for Zeppelin itself and when running with Livy.
04-23-2016
12:15 AM
3 Kudos
See https://github.com/databricks/spark-perf and https://github.com/intel-hadoop/HiBench for Spark-related benchmarks.
04-23-2016
12:13 AM
2 Kudos
If you create a table via Spark/Beeline and can see that table, but not the tables that already exist within Hive, that typically means Spark isn't configured to use the Hive metastore. Please see this, and verify that there is a hive-site.xml under spark/conf and that it points to the right host and port for the Hive metastore.
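A quick way to check, sketched for Spark 1.x using the standard HiveContext API: once hive-site.xml is in place, tables created in Hive should show up alongside the ones created from Spark.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// With spark/conf/hive-site.xml pointing at the Hive metastore, this listing
// should include the tables that already exist in Hive, not just Spark-created ones.
val sc = new SparkContext(new SparkConf().setAppName("MetastoreCheck"))
val hiveContext = new HiveContext(sc)
hiveContext.sql("SHOW TABLES").collect().foreach(println)
```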
04-22-2016
06:45 PM
In case you are looking for a Maven project to build Spark/Scala, here is an example: https://github.com/vinayshukla/SparkDemo1. Note it was written for Spark 1.1.0, but you can change the version.
04-22-2016
06:37 PM
1 Kudo
Which Sandbox version are you running, and how much memory does the host where the Sandbox is launched have?