I have 1000 tables in my source RDBMS and I would like to migrate them to Hive using PySpark.
I read through the documentation and found that the two commands below should help. Is there a way to loop these two commands over all 1000 tables if I have the table names in a Python list?
arr = ["table1", "table2"]
for x in arr:
    df = spark.read.format("jdbc").blah.blah
If someone has a working solution for this, could you please share it? I tried this approach; it does not throw any error, but at the same time it does not write anything.
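For context, here is a minimal sketch of the kind of loop I have in mind: read each table over JDBC and write it to Hive with `saveAsTable`. The JDBC URL, credentials, and Hive database name are placeholders I made up for illustration, not values from a real environment.

```python
def hive_target(db, table):
    """Build the fully qualified Hive table name for a source table."""
    return f"{db}.{table}"

def migrate(spark, tables, jdbc_url, user, password, hive_db="default"):
    """Copy each source table into Hive, one JDBC read per table."""
    for table in tables:
        df = (spark.read
                   .format("jdbc")
                   .option("url", jdbc_url)
                   .option("dbtable", table)
                   .option("user", user)
                   .option("password", password)
                   .load())
        # mode("overwrite") replaces the Hive table on re-runs
        df.write.mode("overwrite").saveAsTable(hive_target(hive_db, table))

if __name__ == "__main__":
    from pyspark.sql import SparkSession
    spark = (SparkSession.builder
             .appName("rdbms-to-hive")
             .enableHiveSupport()   # required for saveAsTable to reach Hive
             .getOrCreate())
    tables = ["table1", "table2"]  # extend to the full 1000-table list
    migrate(spark, tables,
            jdbc_url="jdbc:mysql://host:3306/sourcedb",  # placeholder URL
            user="user", password="secret")              # placeholder creds
```

Note that the appropriate JDBC driver jar must be on the Spark classpath (e.g. via `--jars` or `spark.jars`) for the reads to succeed.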
@Meher I am happy to see that you resolved your issue. Would you mind sharing how you solved it in case someone else encounters the same situation?