
HiveServer2 exited unexpectedly when creating a Parquet table in Hive


Hi Everyone,

I created a Hive external table in Parquet file format and inserted data into it from another table stored as text. Everything worked fine at first, but then HiveServer2 exited unexpectedly and I was unable to see any data. When I looked into the logs, there was no information about the exit.

When we looked in Cloudera Manager, everything for Hive had turned red (bad health). The only information displayed was the following:

2 bad: This role's process exited. This role is supposed to be started. This role encountered 1 unexpected exit(s) in the previous 5 minute(s).

Then I restarted HiveServer2 in CM to get things up again.

Now the issue is that I am hesitant to create any more Parquet tables, since doing so should not bring Hive down. I need help understanding the cause, and to confirm whether this is a known issue with the Parquet file format or something else.



The only thing I can think of that would bring down HS2 is running out of memory. You can confirm by checking the stdout or stderr of the HS2 process from Cloudera Manager, which should mention the OOM error. If so, you can try bumping up the memory.
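If you prefer the shell to the CM UI, something like the sketch below can scan the captured stdout/stderr for an OOM marker. Treat the process directory as an assumption: it is the usual Cloudera Manager agent location, but your cluster may differ.

```shell
# Assumption: the CM agent keeps per-process logs under this directory;
# adjust LOGDIR if your agent is configured differently.
LOGDIR=/var/run/cloudera-scm-agent/process
grep -H "java.lang.OutOfMemoryError" \
  "$LOGDIR"/*HIVESERVER2*/logs/stdout.log \
  "$LOGDIR"/*HIVESERVER2*/logs/stderr.log 2>/dev/null \
  || echo "no OOM marker found in HS2 stdout/stderr"
```

If the marker is there, increasing the HiveServer2 Java heap in the Hive service configuration is the usual next step.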


Hope that helps





Can you please give me the path to find the stderr and stdout logs for HiveServer2 in Cloudera Manager? I am using CDH 5.3.2.


Click on the Hive service, then the HS2 instance, then the 'process' tab to find the HS2 process. At the bottom there are links to the process's stdout/stderr/log files.




I looked into the stderr and stdout logs but could not see anything related to any issue or error. Right now HiveServer2 is working again, as I restarted the service, but I still don't understand why it exited unexpectedly when creating a Parquet-format table.




It happened again when I was trying to insert data into a Parquet file format table.


I have had similar issues. I moved away from Parquet and was at first able to create an ORC table, which worked for a while. But now even the ORC table fails when I query it: the inserts complete, but any query through Hue causes HiveServer2 to crash.


I noticed that the command-line hive query tool seems to work OK. For example, this query in Hue crashes the server:

SELECT * FROM CENSUS_FILE3_2010 LIMIT 100;

but running the same query at the command line on my cluster produces results with no problem:

hive -e 'SELECT * FROM CENSUS_FILE3_2010 LIMIT 100'
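One likely explanation for the difference: the classic hive CLI talks to the metastore directly and does not go through HiveServer2 at all, while Hue does. To reproduce the Hue code path from a shell, beeline is the closer comparison, since it connects through HS2's JDBC endpoint. The host below is a placeholder, not something from this thread:

```shell
# beeline goes through HiveServer2 (like Hue); the classic hive CLI does not,
# which is why the CLI can succeed while the same query via Hue crashes HS2.
# Replace hs2-host with your HiveServer2 host; 10000 is the default port.
beeline -u jdbc:hive2://hs2-host:10000/default \
  -e "SELECT * FROM CENSUS_FILE3_2010 LIMIT 100;"
```

If beeline also kills HS2, the problem is in the server-side read path rather than in Hue itself.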


I have also noticed these messages in my HiveServer2 log immediately before it crashes, and I wonder if they have anything to do with this:


10:24:00.081 PM INFO org.apache.hadoop.hive.ql.Driver
10:24:00.081 PM INFO org.apache.hadoop.hive.ql.log.PerfLogger
<PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
10:24:00.081 PM INFO ZooKeeperHiveLockManager
about to release lock for default/census_file3_2010
10:24:00.086 PM INFO ZooKeeperHiveLockManager
about to release lock for default
10:24:00.089 PM INFO org.apache.hadoop.hive.ql.log.PerfLogger
</PERFLOG method=releaseLocks start=1431123840081 end=1431123840089 duration=8 from=org.apache.hadoop.hive.ql.Driver>
10:24:00.089 PM INFO org.apache.hadoop.hive.ql.log.PerfLogger
</PERFLOG start=1431123840063 end=1431123840089 duration=26 from=org.apache.hadoop.hive.ql.Driver>
10:24:00.406 PM INFO org.apache.hadoop.conf.Configuration.deprecation
mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
10:24:00.421 PM INFO org.apache.hadoop.hive.ql.log.PerfLogger
<PERFLOG method=OrcGetSplits>


So I figured out my previous issue. It was a permissions problem: I had created a table as one Hive user and attempted to query it as another. This caused HiveServer2 to crash, but with no useful error messages.


Also, dropping the table as either user did not clear the underlying files from the Hive warehouse; I had to do that manually as the Hive superuser. Can someone from Cloudera comment on why HiveServer2 crashes when it encounters an HDFS permissions issue instead of returning a useful error message? I never received any error about permissions when it crashed, and had to hunt this down myself.
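For anyone hitting the same thing, here is a sketch of how to inspect and clean up ownership under the warehouse directory. The warehouse path and the hive:hive ownership are the common CDH defaults, not details confirmed by this thread, so adjust them to your setup:

```shell
# Assumption: default warehouse location /user/hive/warehouse and
# default hive:hive ownership; substitute your own paths and users.

# Check who owns the table's files:
hdfs dfs -ls /user/hive/warehouse/census_file3_2010

# As the HDFS superuser, either fix ownership so both users can read it...
sudo -u hdfs hdfs dfs -chown -R hive:hive /user/hive/warehouse/census_file3_2010

# ...or remove the leftover directory after a DROP TABLE that didn't clean up:
sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/hive/warehouse/census_file3_2010
```

Note that for external tables DROP TABLE leaving the files behind is expected behavior; only managed tables have their data removed on drop.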


Hi Magnum_pi/Jais,


Is your issue resolved? I am facing a similar kind of issue: whenever a Hive query fails in Hue, HiveServer2 crashes and restarts.

There are no proper error messages to track this down. All I can see is a session timeout in the hive log and the KeeperException below in the zookeeper log.


Example from the hive log:

INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:
INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in
INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
INFO org.apache.zookeeper.ClientCnxn: Socket connection established to , initiating session
INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session <ID> has expired, closing socket connection
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server <host>. Will not attempt to authenticate using SASL (unknown error)
INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
INFO org.apache.hive.service.server.HiveServer2: STARTUP_MSG:




And the zookeeper log:

<date> INFO org.apache.zookeeper.server.PrepRequestProcessor
Error: KeeperErrorCode = NodeExists for /hive_zookeeper_namespace_hive1






Hi Karty --


Please see my previous post -- in my case it was a permissions issue. I was creating a table in Hive as one user and querying it as another. Sticking with one (and the same) user for both table creation and querying solved my problem.


Of course, this doesn't really solve the bigger issue of being able to create and query tables across multiple users, or why HiveServer2 crashes when doing so. But I didn't have time to tackle that bigger issue and was hoping someone from Cloudera would respond. No luck so far.