Created 09-13-2016 07:46 PM
Hi
I want to source data from two Hadoop clusters and join them in Spark. Is this possible, as shown below?
//data from cluster 1
val erorDF = spark.read.json("hdfs://master:8020/user/ubuntu/error.json")
erorDF.registerTempTable("erorDFTBL")
//data from cluster 2
val erorDF2 = spark.read.json("hdfs://master2:8020/user/ubuntu/error.json")
erorDF2.registerTempTable("erorDFTBL2")
Created 09-14-2016 02:16 PM
Sure! I just did it (with PySpark in Zeppelin, though):
For my test, I spun up two instances of HDP sandbox on Azure, and put a file into HDFS on each cluster. The code snippet reads each file and counts lines in each file individually, then concatenates the data sets and counts the lines of the union.
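The snippet itself is not included here; a minimal sketch of what it describes (written in Scala to match the thread's other snippets, with hypothetical sandbox hostnames and file paths, and `sc` being the SparkContext that Zeppelin provides) might look like:

```scala
// read the test file from HDFS on each sandbox cluster
// (hostnames and paths below are placeholders, not the actual test setup)
val lines1 = sc.textFile("hdfs://sandbox1:8020/tmp/test.txt")
val lines2 = sc.textFile("hdfs://sandbox2:8020/tmp/test.txt")

// count the lines in each file individually
println(lines1.count())
println(lines2.count())

// concatenate the two data sets and count the lines of the union
val combined = lines1.union(lines2)
println(combined.count())
```

Because each path carries a full `hdfs://host:port` URI, Spark resolves each read against the named cluster's NameNode rather than the default filesystem, which is what makes the cross-cluster union work.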
Created 06-29-2022 12:05 AM
Please let me know if it is possible to access a Hive table present across multiple clusters (on a Hortonworks on-premises cluster).
Created 09-14-2016 02:38 PM
Thank you, Becker. Is there any setup I need to do in Zeppelin? I am running my Zeppelin in cluster 1.
Created 09-14-2016 02:43 PM
No additional setup is required - the Spark libraries are automatically imported and the Spark context is provided implicitly by Zeppelin. For any additional dependencies that your project needs, use %dep - see the documentation at https://zeppelin.apache.org/docs/latest/interpreter/spark.html.
Created 09-14-2016 03:36 PM
Tested below in AWS. Looks good. Thank you
//read error JSON file from cluster 1
val erorDF = spark.read.json("hdfs://master:8020/user/ubuntu/error.json")
erorDF.registerTempTable("erorDFTBL")
//read file from cluster 2
val erorDF2 = spark.read.json("hdfs://master2:8020/user/ubuntu/errors")
erorDF2.registerTempTable("erorDFTBL2")
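With both temp tables registered, the join itself can then be expressed in Spark SQL. A sketch, assuming (hypothetically) that both JSON files share an `id` column to join on:

```scala
// join the two temp tables sourced from different clusters
// (the "id" join key is an assumption about the JSON schema)
val joinedDF = spark.sql("""
  SELECT a.*, b.*
  FROM erorDFTBL a
  JOIN erorDFTBL2 b
    ON a.id = b.id
""")
joinedDF.show()
```

Note that in Spark 2.x, `registerTempTable` is deprecated in favor of `createOrReplaceTempView`; either works here, since both simply make the DataFrame queryable by name in Spark SQL.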
Created 06-29-2022 04:54 AM
@Chandraprabu As this is an older post, we recommend starting a new thread. The new thread will provide the opportunity to provide details specific to your environment that could aid others in providing a more accurate answer to your question.