Spark connecting two Hadoop clusters

Frequent Visitor

Hi

I want to source data from two Hadoop clusters and join the data in Spark. Is it possible, as shown below?

//data from cluster1

val erorDF = spark.read.json("hdfs://master:8020/user/ubuntu/error.json")

erorDF.registerTempTable("erorDFTBL")

//data from cluster2

val erorDF2 = spark.read.json("hdfs://master2:8020/user/ubuntu/error.json")

erorDF2.registerTempTable("erorDFTBL2")

1 ACCEPTED SOLUTION

Not applicable

Sure! I just did it (with PySpark in Zeppelin, though):

For my test, I spun up two instances of the HDP sandbox on Azure and put a file into HDFS on each cluster. The code snippet (sketched below) reads each file and counts its lines individually, then concatenates the two data sets and counts the lines of the union.
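A minimal sketch of that approach, written here in Scala to match the rest of the thread (the author used PySpark), with hypothetical host names and file paths standing in for the actual test values:

// Zeppelin's Spark interpreter provides the SparkContext as "sc".
// Read one file from each cluster by addressing each cluster's
// NameNode with a fully qualified HDFS URI (hosts and paths below
// are illustrative stand-ins).
val lines1 = sc.textFile("hdfs://sandbox1:8020/tmp/test.txt")
val lines2 = sc.textFile("hdfs://sandbox2:8020/tmp/test.txt")

// Count the lines of each file individually.
println(lines1.count())
println(lines2.count())

// Concatenate the two data sets and count the lines of the union.
println(lines1.union(lines2).count())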


6 REPLIES

New Member

Please let me know if it is possible to access Hive tables spread across multiple clusters (on Hortonworks on-premises clusters).

Frequent Visitor

Thank you, Becker. Is there any setup I need to do in Zeppelin? I am running Zeppelin on cluster 1.

Not applicable

No additional setup is required: the Spark libraries are imported automatically, and the Spark context is provided implicitly by Zeppelin. For any additional dependencies your project needs, use %dep; see the documentation at https://zeppelin.apache.org/docs/latest/interpreter/spark.html.
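For example, to pull in an extra library, a %dep paragraph (run before the first Spark paragraph in the notebook, or after restarting the interpreter) might look like the following; the artifact coordinate here is purely illustrative:

%dep
z.load("com.databricks:spark-csv_2.11:1.5.0")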

Frequent Visitor

Tested the code below in AWS. Looks good. Thank you.

//read error JSON file from cluster 1

val erorDF = spark.read.json("hdfs://master:8020/user/ubuntu/error.json")

erorDF.registerTempTable("erorDFTBL")

//read file from cluster 2

val erorDF2 = spark.read.json("hdfs://master2:8020/user/ubuntu/errors")

erorDF2.registerTempTable("erorDFTBL2")
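Since the original question was about joining the two sources: with both temp tables registered, an ordinary Spark SQL join runs across them. A minimal sketch, assuming (hypothetically) that both JSON files share an id column:

// Join the two cross-cluster tables. The "id" join key is an
// assumed column for illustration; use one both files actually share.
val joinedDF = spark.sql(
  "SELECT a.*, b.* FROM erorDFTBL a JOIN erorDFTBL2 b ON a.id = b.id")
joinedDF.show()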

Community Manager

@Chandraprabu As this is an older post, we recommend starting a new thread. A new thread will give you the opportunity to include details specific to your environment, which could help others give a more accurate answer to your question.


Cy Jervis, Manager, Community Program
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.