
Connections to hiveserver hanging


Hi Team,

We have enabled Atlas in our environment. After that, connections to Hive/HS2/Knox Gateway started hanging.

HiveServer2 stops creating new connection threads and becomes unresponsive. When we restart HS2, new jobs start again, but after some time the connections go back into a hung state. This issue started after the Atlas service was recently configured on the cluster.

We then followed the article below and changed the configuration:

https://community.hortonworks.com/content/supportkb/148579/failed-hive-internal-error-javautilconcur...

atlas.hook.hive.maxThreads=50 (increased from 5)

atlas.hook.hive.minThreads=5
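
For reference, a sketch of how these hook settings can be laid out in the Atlas hook configuration that HiveServer2 reads (commonly atlas-application.properties in the Hive conf directory; the exact path is an assumption and depends on your distribution), followed by an HS2 restart so they take effect:

# atlas-application.properties on the HiveServer2 host (path may vary by install)
# Thread pool used by the Atlas Hive hook to send notifications
atlas.hook.hive.minThreads=5
atlas.hook.hive.maxThreads=50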

I also checked the user limits for the hive user; they are set as follows:

root@hostname:~# su - hive
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     0
memory(kbytes)       unlimited
locked memory(kbytes) 64
process              16000
nofiles              32000
vmemory(kbytes)      unlimited
locks                unlimited
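
One thing to keep in mind: the values above come from an interactive shell, and the limits the running HiveServer2 daemon actually has can differ, since it is started under its own environment. A quick way to check the effective limits and current thread count of the running process (a sketch; the pgrep pattern is only an example and may need adjusting for your install):

HS2_PID=$(pgrep -f hiveserver2 | head -1)        # example pattern for the HS2 JVM
grep -i 'max processes' /proc/$HS2_PID/limits    # effective nproc for the running JVM
ps -o nlwp= -p $HS2_PID                          # number of threads HS2 currently has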

Logs for the same:

2019-02-07 06:00:02,911 INFO  [HiveServer2-HttpHandler-Pool: Thread-346402]: log.PerfLogger (PerfLogger.java:PerfLogBegin(148)) - <PERFLOG method=PostHook.org.apache.atlas.hive.hook.HiveHook from=org.apache.hadoop.hive.ql.Driver>
2019-02-07 06:00:02,911 ERROR [HiveServer2-HttpHandler-Pool: Thread-346402]: hook.HiveHook (HiveHook.java:run(213)) - Submitting to thread pool failed due to error
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@64ee3a83 rejected from java.util.concurrent.ThreadPoolExecutor@3f70557d[Running, pool size = 5, active threads = 5, queued tasks = 1000, completed tasks = 1704]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
        at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:193)
        at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:52)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1599)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)

Could you please help me with this?

Thanks in Advance.

Regards,

Owez Mujawar

1 ACCEPTED SOLUTION


Hi Team,


We have found the solution. It was not related to Atlas; the issue was the user process limit.

A recent increase in HDFS data caused more processing at the job level. The current HiveServer2 thread limit was not sufficient to handle this increased load, which caused a thread-creation error for the Hive service, as shown below:

Caused by: java.lang.OutOfMemoryError: unable to create new native thread
       at java.lang.Thread.start0(Native Method)
       at java.lang.Thread.start(Thread.java:717)

We were still getting the same error even after stopping the Atlas service.

To resolve this error, we increased the thread-creation capacity of the Hive service by raising the value of the 'hive_user_nproc_limit' parameter for the hive user.
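
For anyone looking for the same fix: on Ambari-managed clusters, hive_user_nproc_limit is set under the Hive service's env configuration and typically ends up as a limits entry on the HiveServer2 host, roughly like the sketch below (file name and values are examples only). HiveServer2 must be restarted to pick up the new limit.

# /etc/security/limits.d/hive.conf (example of the generated/managed entry)
hive   soft   nproc   64000
hive   hard   nproc   64000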

The increase in HDFS data may simply have coincided with the addition of the Atlas service.


Regards,

Owez Mujawar

