Support Questions

Find answers, ask questions, and share your expertise

input path on sandbox for loading data into spark shell

New Contributor

Hi - I am trying to load my JSON file using Spark and cannot seem to get the path at the end of this bit of Scala right. The file is located on my sandbox in the /tmp folder. I've tried:

val df2 = sqlContext.read.format("json").option("samplingRatio", "1.0").load("/tmp/rawpanda.json")

Any help would be great. Thanks.

mark

1 ACCEPTED SOLUTION

Super Collaborator

The input path you gave, "/tmp/rawpanda.json", corresponds to a location in HDFS. If the file is actually sitting on your local filesystem, use "file:///tmp/rawpanda.json" instead.
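As a sketch, the same read with an explicit local-filesystem URI would look like this (assuming the file really does live at /tmp on the sandbox's local disk):

```scala
// Read the JSON file from the local filesystem rather than HDFS.
// The file:// scheme tells Spark not to resolve the path against
// the cluster's default filesystem (typically HDFS).
val dfLocal = sqlContext.read
  .format("json")
  .option("samplingRatio", "1.0")
  .load("file:///tmp/rawpanda.json")

// Conversely, to be explicit that you want the HDFS copy:
// val dfHdfs = sqlContext.read.format("json").load("hdfs:///tmp/rawpanda.json")
```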

Also, one gotcha when reading JSON files with Spark: each record must sit entirely on a single line (rather than the pretty-printed, exploded view) for the JSON reader to parse it successfully. You can check whether the records are being read correctly by running the following bit of code:
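For example, a file in the expected one-record-per-line layout (often called JSON Lines) would look like this - the field names here are hypothetical, not taken from your rawpanda.json:

```json
{"name": "mei", "happy": true, "weight": 90.5}
{"name": "bao", "happy": false, "weight": 87.2}
```

A single record pretty-printed across several lines, by contrast, would be flagged as corrupt.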

df2.show(1)

If the first column is named something like _corrupt_record, then the records are most likely not formed correctly.
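To make that check explicit, you can look for Spark's corrupt-record column in the schema. By default it is named _corrupt_record (configurable via the columnNameOfCorruptRecord option):

```scala
// Print the inferred schema; a malformed file typically yields a
// single string column holding the raw text of each bad record.
df2.printSchema()

// Programmatic check for the default corrupt-record column name.
if (df2.columns.contains("_corrupt_record")) {
  println("Some records could not be parsed as JSON")
}
```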



New Contributor

Ahhh. Originally I loaded it on a Windows machine and was getting the "incorrect/corrupt format" error, so I thought switching over to the sandbox would help. I didn't realize that by default it loads from HDFS! I have now edited the JSON document and re-tried loading it in the spark-shell on Windows, and it works 🙂

So you answered both questions I had! Thank you, sir.