We are using the ODBC driver to iterate over every schema and table in the Hive store and process the data retrieved. Since we are scanning Hive, the schemas are naturally massive. After processing the Hive store's schemas for about 9 hours, our application crashes, and according to the Windows event log the crash is caused by an error in the Hortonworks ODBC Driver. At first I thought the error was due to running out of memory, but after some research it turns out that the error code is mislabeled. Given how vaguely this error is explained across the forum posts I have found, I am not sure what to do from here. Below are the Hortonworks driver logs from the point of the crash, along with the Windows event log entry. Ideally I would like to prevent the error, but worst case I am not sure how to catch it, how to resolve it, or even what is causing it.
Since you say you are scanning Hive schemas, this must be putting a lot of load on the Hive Metastore.
The driver logs suggest there are issues establishing new connections to the Metastore:
org.apache.hadoop.hive.ql.metadata.HiveException:java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient:33:1
You could review the HS2 logs to see how many concurrent connections there were to the HMS at the time of the issue. Check whether there is scope to increase the max_connections value in your RDBMS, or to increase the Metastore heap size to accommodate more connections.
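On the client side, you can also make the long scanning loop resilient so that a single transient connection failure does not bring down a 9-hour run. Below is a minimal sketch of a generic retry-with-backoff wrapper; the function name `with_retries` and all parameters are hypothetical, and with pyodbc you would pass the driver's exception type (e.g. `pyodbc.Error`) as `retryable` and re-open the connection inside the callable.

```python
import time

def with_retries(fn, attempts=5, base_delay=2.0, retryable=(Exception,)):
    """Call fn(); on a retryable error, back off exponentially and try again.

    Wrap each per-table fetch in this so a transient Metastore connection
    failure is retried instead of crashing the whole scan. Hypothetical
    helper, not part of any driver API.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == attempts:
                raise  # out of retries: surface the original error
            # exponential backoff: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
```

This only papers over transient failures; if the Metastore is genuinely out of connections or heap, the server-side tuning above is still the real fix.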