
Hive command shell not opening up

Contributor

Getting this when I run hive:

WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.3.4.7-4/0/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.7-4/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.7-4/hive/lib/avro-tools-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

After that there is a long wait with no further progress, error, or warning. No idea what needs to be done.

1 ACCEPTED SOLUTION

Super Collaborator

@Tajinderpal Singh It looks like Hive is not able to get any container from the Resource Manager.

Check your Resource Manager to see whether you have any available containers or whether all containers are occupied by other jobs.


9 REPLIES

Super Guru

@Tajinderpal Singh

Quick checks:

1. Check whether you have enough memory to run the Hive shell on that node.

free -m

2. How many Hive shell sessions are running?

ps -aef|grep hive

EDITED:

3. Are you running Hive on Tez? If yes, check whether you have available resources in the Resource Manager UI (a command-line sketch follows below).
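If you prefer to check from the command line rather than the UI, here is a minimal sketch using the Resource Manager REST API; the address <rm-host>:8088 is a placeholder for your own Resource Manager host and port:

# query the Resource Manager's cluster metrics endpoint
# (replace <rm-host> with your Resource Manager hostname; 8088 is the default port)
curl -s "http://<rm-host>:8088/ws/v1/cluster/metrics"

The JSON response includes fields such as availableMB, availableVirtualCores, appsPending, and containersPending, which indicate whether YARN currently has room to hand Hive/Tez a container.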

Super Guru

[screenshot: screen-shot-2016-05-26-at-75641-pm.png]

@Tajinderpal Singh

Please check your Resource Manager UI for queue usage; see the screenshot.

Also, please don't kill any job until you confirm whether it is critical or just a hung job.

Super Collaborator

@Tajinderpal Singh It looks like Hive is not able to get any container from the Resource Manager.

Check your Resource Manager to see whether you have any available containers or whether all containers are occupied by other jobs.

Contributor

How do I check which jobs are running in my default queue in the Resource Manager, and how do I flush them? I need to do this through the command line.


You can do that with the yarn application -list command.

This will give you a list of all SUBMITTED, ACCEPTED, and RUNNING applications.

From this you can filter the applications in the default queue by running yarn application -list | grep default

To kill an application, run yarn application -kill <Application ID>
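Putting those together, here is a rough sketch for flushing the default queue. It assumes the queue is literally named "default" and that every matching application really should be killed; note that grep default will also match any application whose name merely contains the word "default":

# list everything currently in the default queue
yarn application -list | grep default

# kill each of those applications (the application ID is the first column)
for app in $(yarn application -list | grep default | awk '{print $1}'); do
  yarn application -kill "$app"
done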

Contributor

I solved the same problem, and the root cause was what @Pradeep Bhadani said.

The Hive shell needs access to whichever YARN container is running the Hive session process, and that container could be running anywhere in the cluster (on any node that has a NodeManager), so make sure you have access to all nodes.

Also check that the Hive shell client box has DNS resolution for all cluster hostnames, because the container node is returned as a hostname, not an IP.
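A quick way to verify that from the client box is to try resolving every worker hostname. Here is a minimal sketch, assuming you have put your NodeManager hostnames one per line in a hypothetical file called nodemanagers.txt:

# report any cluster hostname the client box cannot resolve
while read -r host; do
  getent hosts "$host" > /dev/null || echo "cannot resolve: $host"
done < nodemanagers.txt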

Contributor

Thanks Rahul, the commands helped me flush the default queue.

Contributor

Hey guys, I have run into another issue. I now get the Hive prompt and can run basic Hive queries that don't execute an MR job on the backend, but when I run a query that does execute an MR job, it hangs with no further progress (no mapper/reducer progress).

I have checked the Resource Manager queue; it looks OK, as a container is allocated to the query.

Also, I have checked that MapReduce2 is up and running.

Can anybody suggest what needs to be done in this case?

Expert Contributor

When you run a query that involves one or more joins, Hive may require some settings to be modified in order for MapReduce to work optimally. Try the advanced configs below:

1. hive.exec.parallel = true

2. hive.auto.convert.join = false

3. hive.exec.compress.output = true
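For example, here is a minimal sketch of applying these for a single session when launching the Hive CLI (you can also set them interactively with SET inside the shell, or persist them in hive-site.xml via Ambari):

# start the Hive CLI with these settings for just this session
hive --hiveconf hive.exec.parallel=true \
     --hiveconf hive.auto.convert.join=false \
     --hiveconf hive.exec.compress.output=true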