Member since: 05-06-2014
Posts: 14
Kudos Received: 3
Solutions: 3
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3334 | 07-08-2016 05:04 PM |
 | 4503 | 07-06-2016 02:42 AM |
 | 3380 | 07-04-2016 08:54 AM |
07-08-2016 05:04 PM
1 Kudo
The easiest way I know to get Spark working with IPython and the Jupyter Notebook is to set the following two environment variables, as described in the book "Learning Spark": IPYTHON=1 and IPYTHON_OPTS="notebook". Then run ./bin/pyspark. NB: you can pass additional Jupyter options through IPYTHON_OPTS; a quick web search will turn them up.
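As a minimal sketch (assuming a Spark 1.x distribution whose ./bin/pyspark still honours these variables, and that the script is run from the Spark home directory), the same launch could be driven from Python:

```python
import os
import subprocess

# Set the two variables from "Learning Spark" and start the PySpark shell,
# which then comes up inside a Jupyter/IPython notebook server.
env = dict(os.environ)
env["IPYTHON"] = "1"
env["IPYTHON_OPTS"] = "notebook"  # extra Jupyter flags could be appended here

subprocess.call(["./bin/pyspark"], env=env)  # run from the Spark installation directory
```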
07-06-2016 08:19 AM
It may be a bit of a long shot, but you could mount the directories of your remote server on your local server using Samba and then copy the files to HDFS from the command line.
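A rough sketch of that flow, scripted from Python (the share name, mount point, and HDFS path below are made up for illustration; the mount typically needs root and the cifs-utils package):

```python
import subprocess

# Mount the remote Samba/CIFS share on the local machine.
subprocess.check_call([
    "sudo", "mount", "-t", "cifs",
    "//remote-server/exports", "/mnt/remote",
    "-o", "username=myuser",
])

# Copy the now locally visible files into HDFS with the standard hdfs CLI.
subprocess.check_call([
    "hdfs", "dfs", "-put", "/mnt/remote/", "/user/myuser/incoming/",
])
```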
07-06-2016 08:13 AM
I would advise using IPython's debugger, ipdb. It lets you step through your code statement by statement.

* http://quant-econ.net/py/ipython.html#debugging
* https://docs.python.org/3/library/pdb.html

Finally, regarding the other comments above: when you use Anaconda's IPython, remember to set the environment variable PYSPARK_PYTHON to the location of ipython (e.g. /usr/bin/ipython) so PySpark knows where to find it. Good luck.
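For example, a minimal ipdb session (the function here is just an illustration; ipdb may need to be installed first with pip install ipdb):

```python
import ipdb


def add_one(values):
    result = []
    for v in values:
        # Execution pauses here: 'n' steps to the next line, 's' steps into
        # calls, 'p v' prints the current value, 'c' continues.
        ipdb.set_trace()
        result.append(v + 1)
    return result


if __name__ == "__main__":
    print(add_one([1, 2, 3]))
```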
07-06-2016 02:42 AM
1 Kudo
(1) I would start by loading the SparkR package into RStudio so you can make use of it. See the following link under the heading "Using SparkR from RStudio": https://github.com/apache/spark/tree/master/R

(2) Now you are ready to run through the following tutorial. However, instead of reading the data from HDFS, load it from your local file system: http://www.r-bloggers.com/a-first-look-at-spark/

(3) Study the SparkR guide to gain more in-depth knowledge: http://spark.apache.org/docs/latest/sparkr.html

(4) Study Spark itself (DataFrames, RDDs, etc.), for example with the O'Reilly book "Learning Spark". I find it always helps to understand how something works under the hood. The same holds for SparkR: you can easily find videos on YouTube explaining how it works under the hood, especially the distributed nature of SparkR and Spark.
07-04-2016 08:54 AM
1 Kudo
If I may take a different approach to your problem, I would use Spark to do the job: load the data of each file into a separate Spark DataFrame, add a new column with the desired value, and write everything back to HDFS, preferably in a format such as Parquet compressed with Snappy.
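A minimal sketch of that pipeline, assuming the Spark 2.x SparkSession API and CSV input (the paths and the added source_file column are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("add-column-and-write-parquet").getOrCreate()

# Hypothetical input files on HDFS; each one gets its own DataFrame.
input_paths = ["hdfs:///data/in/file1.csv", "hdfs:///data/in/file2.csv"]

for path in input_paths:
    df = spark.read.csv(path, header=True, inferSchema=True)
    # Add the desired per-file value as a new column.
    df = df.withColumn("source_file", F.lit(path))
    # Write back to HDFS as Snappy-compressed Parquet, appending file by file.
    (df.write
       .mode("append")
       .option("compression", "snappy")
       .parquet("hdfs:///data/out/combined"))

spark.stop()
```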