Created 05-26-2016 11:26 AM
Getting this when I run hive:
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.3.4.7-4/0/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.7-4/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.7-4/hive/lib/avro-tools-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
After that there is a long wait with no further progress, error, or warning. No idea what needs to be done.
Created 05-26-2016 01:14 PM
@Tajinderpal Singh It looks like Hive is not able to get any container from the Resource Manager.
Check your Resource Manager to see whether you have any available containers, or whether all containers are occupied by other jobs.
Created 05-26-2016 11:30 AM
Quick checks.
1. Whether you have enough memory to run the hive shell on that node:
free -m
2. How many hive shell sessions are running?
ps -aef|grep hive
EDITED:
3. Are you running Hive on Tez? If yes, check whether you have available resources in the Resource Manager UI (a command-line alternative is sketched below).
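If you prefer the command line over the Resource Manager UI, a rough check like the one below can show whether the cluster has free containers. This is a minimal sketch; it assumes the yarn CLI is on the PATH of a user with access to the cluster.
# List all NodeManagers and how many containers each is currently running
yarn node -list -all
# List applications that are queued or running; a long ACCEPTED list usually
# means no containers are free for new application masters (including the Hive session)
yarn application -list -appStates ACCEPTED,RUNNING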
Created 05-26-2016 06:57 PM
[Screenshot attachment: screen-shot-2016-05-26-at-75641-pm.png]
Please check your Resource Manager UI for queue usage (see the screenshot).
Also, please don't kill any job until you confirm whether it is a critical job or a hung one.
Created 05-26-2016 05:22 PM
How do I check which jobs are running in my default queue in the Resource Manager, and how do I flush them? I need to do this through the command line.
Created 05-26-2016 06:14 PM
You can do that via the yarn application -list command.
This will give you a list of all SUBMITTED, ACCEPTED, or RUNNING applications.
From this you can filter applications in the default queue by running yarn application -list | grep default
To kill an application you can run yarn application -kill <Application ID>
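Putting those together, a rough sketch like the one below lists everything in the default queue and then kills it. The grep on "default" and the awk column position are assumptions about the yarn application -list output format on your version, so verify the list before killing anything critical.
# Show every submitted/accepted/running application in the default queue
yarn application -list -appStates SUBMITTED,ACCEPTED,RUNNING | grep default
# Kill them one by one; only do this after confirming the jobs are safe to kill
for app_id in $(yarn application -list -appStates SUBMITTED,ACCEPTED,RUNNING | grep default | awk '{print $1}'); do
  yarn application -kill "$app_id"
done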
Created 01-10-2018 04:16 PM
I ran into the same problem and solved it; the root cause was what @Pradeep Bhadani said.
The hive shell needs access to whichever YARN container is running the Hive session process, and that container could be running anywhere in the cluster (on any node with a NodeManager), so make sure you have access to all nodes.
Also check that the hive shell client box has DNS resolution for all cluster hostnames, because the container's node is returned as a hostname, not an IP.
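One quick way to verify that from the Hive client box is to try resolving every cluster hostname. This is only a sketch: cluster_hosts.txt is a hypothetical file with one cluster hostname per line.
# Check that every cluster hostname resolves from this client machine;
# any host flagged UNRESOLVED needs a DNS or /etc/hosts entry
while read host; do
  getent hosts "$host" > /dev/null || echo "UNRESOLVED: $host"
done < cluster_hosts.txt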
Created 05-27-2016 05:46 AM
Thanks Rahul, the commands helped me flush the default queue.
Created 05-28-2016 09:45 AM
Hey guys, I have run into another issue. I now get the hive prompt and can run basic hive queries that don't execute an MR job at the backend, but when I run a query that does execute an MR job, it hangs with no further progress (no mapper/reducer progress).
I have checked the Resource Manager queue and it looks OK, as the container is allocated only to that query.
I have also checked that MapReduce2 is up and running.
Can anybody suggest what needs to be done in this case?
Created 06-04-2016 09:56 PM
When you run a query that involves one or more joins, Hive may need some settings modified in order for MapReduce to work optimally. Try the Advanced configs below (a sketch of how to apply them follows the list):
1. hive.exec.parallel = true
2. hive.auto.convert.join = false
3. hive.exec.compress.output = true
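As a rough example of trying these for a single run without changing cluster defaults, you can pass them on the hive command line. my_join_query.hql is a hypothetical script name, and whether these settings actually help depends on your query and data, so treat this as a sketch rather than a guaranteed fix.
# Apply the settings only for this invocation of the query
hive --hiveconf hive.exec.parallel=true \
     --hiveconf hive.auto.convert.join=false \
     --hiveconf hive.exec.compress.output=true \
     -f my_join_query.hql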