I am writing Spark code in Scala to do profiling work on structured data (a Hive table). The input to the code will always be a Hive table. I am able to iterate over each column of the input table (df.schema.foreach), but I am not clear on how to store the profiling result in a Hive table in the format below, within the same iteration.
The result table will have data like:
Please note that a new row should be populated in the result table for each column of the input table.
Thanks in advance. Could you please guide me on how the Spark code should be structured?
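
For reference, this is a minimal sketch of the structure I have in mind. The table names (`source_db.input_table`, `profiling_db.profile_results`) and the metric columns (`null_count`, `distinct_count`) are only placeholders for illustration:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object ColumnProfiler {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("column-profiler")
      .enableHiveSupport()          // needed to read/write Hive tables
      .getOrCreate()
    import spark.implicits._

    // The input is always a Hive table (placeholder name)
    val df: DataFrame = spark.table("source_db.input_table")
    val totalRows = df.count()

    // Build one profiling row per column; the metrics below are just examples
    val profile = df.schema.fields.map { field =>
      val colName = field.name
      val nullCount = df.filter(df(colName).isNull).count()
      val distinctCount = df.select(colName).distinct().count()
      (colName, field.dataType.simpleString, totalRows, nullCount, distinctCount)
    }.toSeq.toDF("column_name", "data_type", "total_rows", "null_count", "distinct_count")

    // Write one result row per column into the result Hive table (placeholder name)
    profile.write.mode("append").saveAsTable("profiling_db.profile_results")

    spark.stop()
  }
}
```

The idea is to collect the per-column metrics into a sequence of tuples, convert that to a DataFrame with one row per column, and append it to the result table in a single write instead of writing inside the loop.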