I was reading about the "withBatchSize" parameter in Apache Storm's Hive integration at http://storm.apache.org/releases/1.1.1/storm-hive.html
It is described as "Max number of events written to Hive in a single Hive transaction". So here are my questions:
1. What is an "event"? Is it one record (tuple) in the stream being ingested, or something else?
2. How can I verify that this number of events per batch is actually being written to HDFS?
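
For context, this is where I am setting the parameter. This is a configuration sketch based on the storm-hive page linked above; the metastore URI, database name, table name, and column list are placeholder values from my setup, not from the docs:

```java
import org.apache.storm.hive.bolt.HiveBolt;
import org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper;
import org.apache.storm.hive.common.HiveOptions;
import org.apache.storm.tuple.Fields;

public class HiveBoltConfig {
    public static HiveBolt buildHiveBolt() {
        // Map tuple fields to Hive table columns (placeholder column names)
        DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
                .withColumnFields(new Fields("id", "name", "timestamp"));

        // Placeholder metastore URI / db / table for illustration
        HiveOptions hiveOptions =
                new HiveOptions("thrift://localhost:9083", "mydb", "mytable", mapper)
                        .withTxnsPerBatch(10)   // transactions per batch of delta files
                        .withBatchSize(1000)    // the parameter in question: max "events" per transaction
                        .withIdleTimeout(10);   // seconds before idle writers are closed

        return new HiveBolt(hiveOptions);
    }
}
```

My assumption so far is that each tuple the bolt receives counts as one event toward this limit, but I have not found that stated explicitly, hence question 1.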